WorldWideScience

Sample records for analysis of variance

  1. Naive Analysis of Variance

    Science.gov (United States)

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  2. Fixed effects analysis of variance

    CERN Document Server

    Fisher, Lloyd; Birnbaum, Z W; Lukacs, E

    1978-01-01

Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis

  3. One-way analysis of variance with unequal variances.

    OpenAIRE

    Rice, W R; Gaines, S. D.

    1989-01-01

    We have designed a statistical test that eliminates the assumption of equal group variances from one-way analysis of variance. This test is preferable to the standard technique of trial-and-error transformation and can be shown to be an extension of the Behrens-Fisher T test to the case of three or more means. We suggest that this procedure be used in most applications where the one-way analysis of variance has traditionally been applied to biological data.
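The record above describes a Behrens-Fisher-type extension of one-way ANOVA; the authors' own procedure is not reproduced in the excerpt, but Welch's heteroscedastic one-way ANOVA is a closely related, widely used solution to the same unequal-variances problem. A minimal plain-Python sketch (the function name `welch_anova` is ours, not the authors'):

```python
import statistics

def welch_anova(*groups):
    """Welch's heteroscedastic one-way ANOVA.
    Returns (F, df1, df2); does not assume equal group variances."""
    k = len(groups)
    ns = [len(g) for g in groups]
    means = [statistics.fmean(g) for g in groups]
    variances = [statistics.variance(g) for g in groups]  # sample variances
    w = [n / v for n, v in zip(ns, variances)]            # precision weights
    w_sum = sum(w)
    grand = sum(wi * mi for wi, mi in zip(w, means)) / w_sum
    # Between-group term, weighted by the per-group precisions
    a = sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, means)) / (k - 1)
    # Correction term accounting for the unequal variances
    lam = sum((1 - wi / w_sum) ** 2 / (ni - 1) for wi, ni in zip(w, ns))
    b = 1 + 2 * (k - 2) / (k * k - 1) * lam
    df1 = k - 1
    df2 = (k * k - 1) / (3 * lam)
    return a / b, df1, df2
```

For groups with identical means the between-group term, and hence F, is exactly zero; shifting the group means apart drives F upward.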

  4. Analysis of Variance: Variably Complex

    Science.gov (United States)

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA), to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
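The group-differences question ANOVA answers can be made concrete in a few lines. A plain-Python sketch of the classical fixed-effects one-way F statistic (our own illustration, not code from the article):

```python
import statistics

def one_way_anova(*groups):
    """Classical fixed-effects one-way ANOVA.
    Returns the F statistic and its two degrees of freedom."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = statistics.fmean([x for g in groups for x in g])
    means = [statistics.fmean(g) for g in groups]
    # Between-group sum of squares: group means around the grand mean
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares: observations around their group mean
    ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between, df_within = k - 1, n - k
    f_stat = (ssb / df_between) / (ssw / df_within)
    return f_stat, df_between, df_within
```

With groups [0, 2] and [4, 6], the between-group mean square is 16, the within-group mean square is 2, and F = 8 on (1, 2) degrees of freedom.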

  5. An alternative analysis of variance

    OpenAIRE

    Longford, Nicholas T.

    2008-01-01

    The one-way analysis of variance is a staple of elementary statistics courses. The hypothesis test of homogeneity of the means encourages the use of the selected-model based estimators which are usually assessed without any regard for the uncertainty about the outcome of the test. We expose the weaknesses of such estimators when the uncertainty is taken into account, as it should be, and propose synthetic estimators as an alternative.

  6. Generalized analysis of molecular variance.

    Directory of Open Access Journals (Sweden)

    Caroline M Nievergelt

    2007-04-01

Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by
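The core AMOVA idea, apportioning variation computed from pairwise distances into within-group and between-group parts, can be sketched compactly. The following is a simplified illustration of that decomposition, not the authors' full GAMOVA machinery; `distance_variance_explained` and its inputs are our own naming:

```python
def distance_variance_explained(dist2, groups):
    """AMOVA-style decomposition from a matrix of squared pairwise
    distances: fraction of total variation explained by the grouping.
    dist2[i][j] holds the squared distance between individuals i and j."""
    n = len(dist2)
    idx = range(n)
    ss_total = sum(dist2[i][j] for i in idx for j in idx if i < j) / n
    ss_within = 0.0
    for g in set(groups):
        members = [i for i in idx if groups[i] == g]
        m = len(members)
        ss_within += sum(dist2[i][j]
                         for i in members for j in members if i < j) / m
    return (ss_total - ss_within) / ss_total
```

Two tight, well-separated clusters yield a fraction close to 1: almost all of the distance-based variation is between the groups.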

  7. Analysis of variance for model output

    NARCIS (Netherlands)

    Jansen, M.J.W.

    1999-01-01

A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of variance

  8. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  9. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.

  10. Formative Use of Intuitive Analysis of Variance

    Science.gov (United States)

    Trumpower, David L.

    2013-01-01

Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

  11. Uses and abuses of analysis of variance.

    OpenAIRE

    Evans, S. J.

    1983-01-01

    Analysis of variance is a term often quoted to explain the analysis of data in experiments and clinical trials. The relevance of its methodology to clinical trials is shown and an explanation of the principles of the technique is given. The assumptions necessary are examined and the problems caused by their violation are discussed. The dangers of misuse are given with some suggestions for alternative approaches.

  12. Multivariate Analysis of Variance Using Spatial Ranks

    OpenAIRE

    KYUNGMEE CHOI; JOHN MARDEN

    2002-01-01

    The authors consider multivariate analysis of variance procedures based on the multivariate spatial ranks. Two models are considered: the location-family model and the fully nonparametric model. Procedures for testing main and interaction effects are given for the 2 × 2 layout.

  13. Directional variance analysis of annual rings

    Science.gov (United States)

    Kumpulainen, P.; Marjanen, K.

    2010-07-01

Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high quality products with higher marketing value than is produced today. One of the key factors for increasing the market value is to provide better measurements for increased information to support the decisions made later in the product chain. Strength and stiffness are important properties of the wood. They are related to mean annual ring width and its deviation. These indicators can be estimated from images taken from the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable and the basic spectrum analysis method does not give reliable results. A novel method for local log end variance analysis based on the Radon transform is proposed. The directions and the positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis on the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log end analysis only. It is usable in other two-dimensional random signal and texture analysis tasks.

  14. Analysis of variance of microarray data.

    Science.gov (United States)

    Ayroles, Julien F; Gibson, Greg

    2006-01-01

Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available. PMID:16939792

  15. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

    Science.gov (United States)

Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...

  16. The Importance of Variance Analysis for Costs Control in Organizations

    OpenAIRE

    Okoh, L. O.; Uzoka, P.

    2012-01-01

This review aimed to examine the importance of variance analysis for cost control in organizations. The study surveyed the concept of variance analysis, its types, sources, objectives and significance. The study reported that variance analysis has a significant influence in evaluating individual performance in organizations, in assigning responsibilities to individuals, and in assisting management to rely on the principle of management by exception, and recommended, among others, variance analysis...

  17. An Approximation of the Minimum-Variance Estimator of Heritability Based on Variance Component Analysis

    OpenAIRE

    Grossman, M.; Norton, H W

    1981-01-01

    An approximate minimum-variance estimate of heritability (h2) is proposed, using the sire and dam components of variance from a hierarchical analysis of variance. The minimum sampling variance is derived for unbalanced data. Optimum structures for the estimation of h2 are given for the balanced case. The degree to which ĥ2 is more precise than the equally weighted estimate ĥ2S+D is a function of the size and structure of the sample used. However, computer simulation reveals that ĥ2 has less d...

  18. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure, as in any statistical test, can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
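The procedure described, a critical F value followed by the tail probability of a noncentral F, can be approximated even without a noncentral-F routine by simulating the F ratio directly from its definition. A Monte Carlo sketch under that assumption (all names here are ours, not the paper's):

```python
import random

def simulate_f(df1, df2, nc, rng):
    """One draw of a (possibly noncentral) F(df1, df2) variate: the ratio
    of a noncentral chi-square (noncentrality nc) over df1 to an
    independent central chi-square over df2."""
    mu = (nc / df1) ** 0.5  # spread the noncentrality over df1 normal means
    num = sum(rng.gauss(mu, 1) ** 2 for _ in range(df1)) / df1
    den = sum(rng.gauss(0, 1) ** 2 for _ in range(df2)) / df2
    return num / den

def power_noncentral_f(df1, df2, nc, alpha=0.05, n=10000, seed=1):
    """Approximate the power of the F test: P(noncentral F > critical F)."""
    rng = random.Random(seed)
    central = sorted(simulate_f(df1, df2, 0.0, rng) for _ in range(n))
    crit = central[int((1 - alpha) * n)]  # empirical (1 - alpha) quantile
    hits = sum(simulate_f(df1, df2, nc, rng) > crit for _ in range(n))
    return hits / n
```

With noncentrality 0 the estimated power falls back to roughly alpha; larger effect sizes push it toward 1.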

  19. Applications of non-parametric statistics and analysis of variance on sample variances

    Science.gov (United States)

    Myers, R. H.

    1981-01-01

Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.

  20. Variance analysis. Part II, The use of computers.

    Science.gov (United States)

    Finkler, S A

    1991-09-01

    This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788

  1. DISCO analysis: A nonparametric extension of analysis of variance

    OpenAIRE

    RIZZO, MARIA L.; Székely, Gábor J.

    2010-01-01

In classical analysis of variance, dispersion is measured by considering squared distances of sample elements from the sample mean. We consider a measure of dispersion for univariate or multivariate response based on all pairwise distances between sample elements, and derive an analogous distance components (DISCO) decomposition for powers of distance in $(0,2]$. The ANOVA F statistic is obtained when the index (exponent) is 2. For each index in $(0,2)$, this decomposition determines a nonparametric...

  2. Wavelet Variance Analysis of EEG Based on Window Function

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yuan-zhuang; YOU Rong-yi

    2014-01-01

A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and that the variation of the wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.

  3. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  4. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design

  5. Estimation of Variance Components in the Mixed-Effects Models: A Comparison Between Analysis of Variance and Spectral Decomposition

    OpenAIRE

    Wu, Mi-Xia; Yu, Kai-Fun; Liu, Ai-Yi

    2009-01-01

    The mixed-effects models with two variance components are often used to analyze longitudinal data. For these models, we compare two approaches to estimating the variance components, the analysis of variance approach and the spectral decomposition approach. We establish a necessary and sufficient condition for the two approaches to yield identical estimates, and some sufficient conditions for the superiority of one approach over the other, under the mean squared error criterion. Applications o...
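For the balanced one-way layout, the analysis-of-variance (method-of-moments) approach the authors compare equates the observed mean squares to their expectations and solves for the two variance components. A minimal sketch, assuming a balanced design (the naming is ours):

```python
import statistics

def variance_components(groups):
    """ANOVA (method-of-moments) estimators for the balanced one-way
    random effects model y_ij = mu + a_i + e_ij.
    Returns (sigma2_between, sigma2_error)."""
    k = len(groups)
    n = len(groups[0])  # balanced: every group has n observations
    grand = statistics.fmean([x for g in groups for x in g])
    means = [statistics.fmean(g) for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    sigma2_error = msw
    # E[MSB] = n * sigma2_between + sigma2_error; truncate negative estimates
    sigma2_between = max((msb - msw) / n, 0.0)
    return sigma2_between, sigma2_error
```

For the toy data [[0, 2], [4, 6]] the mean squares are MSB = 16 and MSW = 2, giving component estimates 7 (between) and 2 (error).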

  6. Budget variance analysis using RVUs.

    Science.gov (United States)

    Berlin, M F; Budzynski, M R

    1998-01-01

This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual financials vs. budget financials. Standard cost accounting provides a methodology known as variance analysis to better understand the actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247

  7. Statistics review 9: One-way analysis of variance

    OpenAIRE

    Bewick, Viv; Cheek, Liz; Ball, Jonathan

    2004-01-01

    This review introduces one-way analysis of variance, which is a method of testing differences between more than two groups or treatments. Multiple comparison procedures and orthogonal contrasts are described as methods for identifying specific differences between pairs of treatments.

  8. Batch variation between branchial cell cultures: An analysis of variance

    DEFF Research Database (Denmark)

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed to be...

  9. Intuitive Analysis of Variance-- A Formative Assessment Approach

    Science.gov (United States)

    Trumpower, David

    2013-01-01

    This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)

  10. Analysis of Variance in the Modern Design of Experiments

    Science.gov (United States)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.

  11. Analysis of variance of thematic mapping experiment data.

    Science.gov (United States)

    Rosenfield, G.H.

    1981-01-01

    As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data. - from Author

  12. Variance Analysis of Genus Ipomoea based on Morphological Characters

    Directory of Open Access Journals (Sweden)

    DWI PRIYANTO

    2000-07-01

The objective of this research was to find out the variability of morphological characters of the genus Ipomoea, including coefficients of variance and phylogenetic relationships. The genus Ipomoea has been identified as consisting of four species, i.e. Ipomoea crassicaulis Rob., Ipomoea aquatica Forsk., Ipomoea reptans Poir. and Ipomoea leari. The four species were taken from around the lake on the campus of Sebelas Maret University, Surakarta. Comparison of species variability was based on the variance coefficient of vegetative and generative morphological characters. The vegetative characters observed were roots, stems and leaves, while the generative characters observed were flowers, fruits, and seeds. Phylogenetic relationship was determined by clustering association coefficients. Coefficient-of-variance analysis of vegetative and generative morphological characters resulted in several groups based on the degree of variability, i.e. low, moderate, high, very high, or none. The phylogenetic relationship showed that Ipomoea aquatica Forsk. and Ipomoea reptans Poir. are more closely related to each other than to Ipomoea leari and Ipomoea crassicaulis Rob.

  13. A guide to SPSS for analysis of variance

    CERN Document Server

    Levine, Gustav

    2013-01-01

This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules concerning

  14. Analysis of variance tables based on experimental structure.

    Science.gov (United States)

    Brien, C J

    1983-03-01

    A stepwise procedure for obtaining the experimental structure for a particular experiment is presented together with rules for deriving the analysis-of-variance table from that structure. The procedure involves the division of the factors into groups and is essentially a generalization of the method of Nelder (1965, Proceedings of the Royal Society, Series A 283, 147-162; 1965, Proceedings of the Royal Society, Series A 283, 163-178), to what are termed 'multi-tiered' experiments. The proposed method is illustrated for a wine-tasting experiment. PMID:6871362

  15. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    Science.gov (United States)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
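The biased-transition-probability idea described above, sample where the payoff is informative and reweight so the expectation is unchanged, is importance sampling. The rarefied-diffusion walk itself is not reproduced here; instead, a self-contained toy example estimating a Gaussian tail probability shows the same variance-reduction mechanism (all names and numbers are illustrative):

```python
import math
import random
import statistics

def tail_estimates(n=50000, shift=3.0, seed=7):
    """Estimate P(X > 3) for X ~ N(0, 1) two ways: naive Monte Carlo, and
    importance sampling from the shifted proposal N(shift, 1) with
    likelihood-ratio weights that keep the expected value unchanged while
    shrinking the variance. Returns both estimates and per-sample variances."""
    rng = random.Random(seed)
    # Naive: indicator of a rare event, almost always zero
    naive = [1.0 if rng.gauss(0, 1) > 3.0 else 0.0 for _ in range(n)]
    # Biased: draw where the event is common, reweight each hit
    weighted = []
    for _ in range(n):
        x = rng.gauss(shift, 1)
        w = math.exp(-shift * x + shift * shift / 2)  # phi(x)/phi(x - shift)
        weighted.append(w if x > 3.0 else 0.0)
    return (sum(naive) / n, sum(weighted) / n,
            statistics.variance(naive), statistics.variance(weighted))
```

Both estimators target the true value (about 0.00135), but the per-sample variance of the weighted estimator is orders of magnitude smaller, exactly the effect the record describes for the biased Markov walk.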

  16. Analysis of variance of an underdetermined geodetic displacement problem

    Energy Technology Data Exchange (ETDEWEB)

    Darby, D.

    1982-06-01

    It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.

  17. The use of analysis of variance procedures in biological studies

    Science.gov (United States)

    Williams, B.K.

    1987-01-01

    The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.

  18. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  19. Confidence Intervals for the Between Group Variance in the Unbalanced One-Way Random Effects Model of Analysis of Variance

    OpenAIRE

    Hartung, Joachim; Knapp, Guido

    2000-01-01

A confidence interval for the between group variance is proposed which is deduced from Wald's exact confidence interval for the ratio of the two variance components in the one-way random effects model and the exact confidence interval for the error variance (respectively, an unbiased estimator of the error variance). In a simulation study the confidence coefficients for these two intervals are compared with the confidence coefficients of two other commonly used confidence intervals. There, the confidence...

  20. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  1. Estimation of measurement variances

    International Nuclear Information System (INIS)

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  2. Variance analysis. Part I, Extending flexible budget variance analysis to acuity.

    Science.gov (United States)

    Finkler, S A

    1991-01-01

    The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
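The price and quantity pieces of a flexible budget variance fall out of a two-line decomposition. A minimal sketch of that calculation (the article additionally splits quantity into volume and acuity components, which this illustration omits):

```python
def flexible_budget_variances(budget_price, budget_qty,
                              actual_price, actual_qty):
    """Classic flexible-budget decomposition of the total spending
    variance into a price variance and a quantity variance."""
    # Price variance: pay a different rate on the quantity actually used
    price_var = (actual_price - budget_price) * actual_qty
    # Quantity variance: use a different quantity at the budgeted rate
    quantity_var = (actual_qty - budget_qty) * budget_price
    total_var = actual_price * actual_qty - budget_price * budget_qty
    return price_var, quantity_var, total_var
```

For a budget of 100 units at 10 against actuals of 120 units at 11, the price variance is 120, the quantity variance is 200, and the two reconcile exactly to the total variance of 320.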

  3. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated in a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)

  4. Variance Analysis of Genus Ipomoea based on Morphological Characters

    OpenAIRE

    DWI PRIYANTO; SURATMAN; AHMAD DWI SETYAWAN

    2000-01-01

    The objective of this research was to determine the variability of morphological characters in the genus Ipomoea, including the coefficient of variance and phylogenetic relationships. Four species of the genus were identified: Ipomoea crassicaulis Rob, Ipomoea aquatica Forsk., Ipomoea reptans Poir and Ipomoea leari. The four species were collected from around the lake on the campus of Sebelas Maret University, Surakarta. Comparison of species variability was based on the varia...

  5. Things that make us different: analysis of variance in the use of time

    OpenAIRE

    Jorge González-Chapela

    2010-01-01

    The bounded character of time-use data poses a challenge to the analysis of variance based on classical linear models. This paper investigates a computationally simple variance decomposition technique suitable for these data. As a by-product of the analysis, a measure of fit for systems of time-demand equations that possesses several useful properties is proposed.

  6. Analysis and application of minimum variance discrete linear system identification

    Science.gov (United States)

    Kotob, S.; Kaufman, H.

    1977-01-01

    An on-line minimum variance (MV) parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise (AMN). The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  7. Analysis and application of minimum variance discrete time system identification

    Science.gov (United States)

    Kotob, S.; Kaufman, H.

    1976-01-01

    An on-line minimum variance parameter identifier was developed which embodies both accuracy and computational efficiency. The new formulation resulted in a linear estimation problem with both additive and multiplicative noise. The resulting filter is shown to utilize both the covariance of the parameter vector itself and the covariance of the error in identification. It is proven that the identification filter is mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  8. Spectral variance of aeroacoustic data

    Science.gov (United States)

    Rao, K. V.; Preisser, J. S.

    1981-01-01

    An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.

  9. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    Science.gov (United States)

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  10. Wavelet multiscale analysis of a power system load variance

    OpenAIRE

    Avdakovic, Samir; Nuhanovic, Amir; Kusljugic, Mirza

    2013-01-01

    The wavelet transform (WT) has been a very attractive mathematical area for just over 15 years of research into electrical engineering applications. This is mainly due to its advantages over other signal processing and analysis techniques, reflected in time-frequency analysis, and so it has important applications in the processing and analysis of time series. In this paper, for example, the analysis of the hourly load of a real power system over the past few yea...

  11. Structure analysis of interstellar clouds: II. Applying the Delta-variance method to interstellar turbulence

    CERN Document Server

    Ossenkopf, V; Stutzki, J

    2008-01-01

    The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. In paper I we proposed essential improvements to the Delta-variance analysis. In this paper we apply the improved Delta-variance analysis to i) a hydrodynamic turbulence simulation with prominent density and velocity structures, ii) an observed intensity map of rho Oph with irregular boundaries and variable uncertainties of the different data points, and iii) a map of the turbulent velocity structure in the Polaris Flare affected by the intensity dependence on the centroid velocity determination. The tests confirm the extended capabilities of the improved Delta-variance analysis. Prominent spatial scales were accurately identified and artifacts from a variable reliability of the data were removed. The analysis of the hydrodynamic simulations showed that, with the injection of a turbulent velocity structure, the most prominent density structures are produced on a sca...

  12. Admissibility of Invariant Tests in the General Multivariate Analysis of Variance Problem

    OpenAIRE

    Marden, John I.

    1983-01-01

    Necessary and sufficient conditions for an invariant test to be admissible among invariant tests in the general multivariate analysis of variance problem are presented. It is shown that in many cases the popular tests based on the likelihood ratio matrix are inadmissible. Other tests are shown admissible. Numerical work suggests that the inadmissibility of the likelihood ratio test is not serious. The results are given for the multivariate analysis of variance problem as a special case.

  13. Variance heterogeneity analysis for detection of potentially interacting genetic loci: method and its limitations

    Directory of Open Access Journals (Sweden)

    van Duijn Cornelia

    2010-10-01

    Abstract Background In the presence of an interaction between a genotype and a certain factor in the determination of a trait's value, it is expected that the trait's variance is increased in the group of subjects having this genotype. Thus, a test of heterogeneity of variances can be used to screen for potentially interacting single-nucleotide polymorphisms (SNPs). In this work, we evaluated the statistical properties of variance heterogeneity analysis with respect to the detection of potentially interacting SNPs in the case where the interacting variable is unknown. Results Through simulations, we investigated the type I error for Bartlett's test, Bartlett's test with prior rank transformation of a trait to normality, and Levene's test for different genetic models. Additionally, we derived an analytical expression for power estimation. We showed that Bartlett's test has acceptable type I error in the case of a trait following a normal distribution, whereas Levene's test kept the nominal type I error under all scenarios investigated. For the power of the variance homogeneity test, we showed (as opposed to the power of the direct test, which uses information about the known interacting factor) that, given the same interaction effect, the power can vary widely depending on the non-estimable direct effect of the unobserved interacting variable. Thus, for a given interaction effect, only very wide limits on the power of the variance homogeneity test can be estimated. We also applied Levene's approach to test genome-wide homogeneity of variances of C-reactive protein in the Rotterdam Study population (n = 5959). In this analysis, we replicate previous results of Pare and colleagues (2010) for the SNP rs12753193 (n = 21,799). Conclusions Screening for differences in variances among genotypes of a SNP is a promising approach, as a number of biologically interesting models may lead to heterogeneity of variances.
However, it should be kept in mind that the absence of variance heterogeneity for
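Levene's statistic referred to in this record can be computed directly. A self-contained sketch with median centering (the Brown-Forsythe variant) and made-up data; a vetted library implementation should be preferred in practice:

```python
from statistics import median, mean

def levene_w(*groups, center=median):
    """Levene's statistic for homogeneity of variances (median centering,
    i.e. the Brown-Forsythe variant). Large W signals heteroscedasticity;
    refer W to an F(k-1, N-k) distribution for a p-value."""
    k = len(groups)
    n = [len(g) for g in groups]
    N = sum(n)
    # Absolute deviations of each observation from its group's center.
    z = [[abs(x - center(g)) for x in g] for g in groups]
    zbar_i = [mean(zi) for zi in z]
    zbar = sum(sum(zi) for zi in z) / N
    between = sum(ni * (zi - zbar) ** 2 for ni, zi in zip(n, zbar_i))
    within = sum((zij - zb) ** 2 for zi, zb in zip(z, zbar_i) for zij in zi)
    return (N - k) / (k - 1) * between / within

# Two groups with the same mean but different spreads should give a clearly
# larger W than two groups with identical spreads.
tight = [9, 10, 10, 11, 10, 9, 11, 10]
wide = [4, 16, 2, 18, 6, 14, 1, 19]
w_het = levene_w(tight, wide)
w_hom = levene_w(tight, [x + 5 for x in tight])
assert w_het > w_hom
```

Shifting a group changes its mean but not its absolute deviations, which is why the homoscedastic comparison above yields W = 0.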

  14. An application of the analysis of variance of measures repeated in an experiment with heavy metals

    International Nuclear Information System (INIS)

    A revision of some basic concepts related to the analysis of variance of repeated measures is presented within an ecological context. Topics such as the types of experiments in which the technique is applicable, the hypotheses of interest, and why it is preferred over other traditional techniques such as regression and conventional analysis of variance are discussed. As an example, the technique was successfully applied to an experiment carried out at Cienaga Grande de Santa Marta, Colombia, in which the concentration of cadmium (μg/g) in leaves of the black mangrove Avicennia germinans was measured at several monitoring stations and over several sampling intervals representing seasons.

  15. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in...... the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an...... analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be...

  16. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…

  17. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

    Science.gov (United States)

    Finch, W. Holmes

    2016-01-01

    Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

  18. Gender variance on campus : a critical analysis of transgender voices

    OpenAIRE

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn, 2005; Beemyn, Curtis, Davis, & Tubbs, 2005). This study examined the perceptions of transgender inclusion, ways in which leadership structures or entiti...

  19. Variance Analysis of Unevenly Spaced Time Series Data

    Science.gov (United States)

    Hackman, Christine; Parker, Thomas E.

    1996-01-01

    We have investigated the effect of uneven data spacing on the computation of delta(sub chi)(gamma). Evenly spaced simulated data sets were generated for noise processes ranging from white phase modulation (PM) to random walk frequency modulation (FM). delta(sub chi)(gamma) was then calculated for each noise type. Data were subsequently removed from each simulated data set using typical two-way satellite time and frequency transfer (TWSTFT) data patterns to create two unevenly spaced sets with average intervals of 2.8 and 3.6 days. delta(sub chi)(gamma) was then calculated for each sparse data set using two different approaches. First, the missing data points were replaced by linear interpolation and delta(sub chi)(gamma) was calculated from this now-full data set. The second approach ignored the fact that the data were unevenly spaced and calculated delta(sub chi)(gamma) as if the data were equally spaced with average spacing of 2.8 or 3.6 days. Both approaches have advantages and disadvantages, and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets.
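The first approach described in this record, filling gaps by linear interpolation before computing the stability statistic, can be sketched as follows. The function name and data are illustrative, not from the paper:

```python
def resample_linear(times, values, step):
    """Linearly interpolate an unevenly spaced series onto a regular grid
    (the gap-filling step described in the record above). Assumes `times`
    is sorted with at least two points; hypothetical helper, stdlib only."""
    grid, out, j = [], [], 0
    t = times[0]
    while t <= times[-1] + 1e-9:
        # Advance to the segment [times[j], times[j+1]] containing t.
        while j + 1 < len(times) and times[j + 1] < t:
            j += 1
        if j + 1 < len(times):
            t0, t1 = times[j], times[j + 1]
            frac = (t - t0) / (t1 - t0)
            out.append(values[j] + frac * (values[j + 1] - values[j]))
        else:
            out.append(values[-1])
        grid.append(t)
        t += step
    return grid, out

# TWSTFT-like gaps: measurements on days 0, 1, 3, 4, 7 filled to daily spacing.
g, v = resample_linear([0, 1, 3, 4, 7], [0.0, 1.0, 3.0, 4.0, 7.0], 1)
```

Once the series is regularly spaced, the usual evenly spaced estimator can be applied; the record cautions that both this and the "ignore the gaps" approach introduce errors that need correcting.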

  20. Gender Variance on Campus: A Critical Analysis of Transgender Voices

    Science.gov (United States)

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn,…

  1. Analysis of variance and functional measurement a practical guide

    CERN Document Server

    Weiss, David J

    2006-01-01

    Chapter I. IntroductionChapter II. One-way ANOVAChapter III. Using the ComputerChapter IV. Factorial StructureChapter V. Two-way ANOVA Chapter VI. Multi-factor DesignsChapter VII. Error Purifying DesignsChapter VIII. Specific ComparisonsChapter IX. Measurement IssuesChapter X. Strength of Effect**Chapter XI. Nested Designs**Chapter XII. Missing Data**Chapter XIII. Confounded Designs**Chapter XIV. Introduction to Functional Measurement**Terms from Introductory Statistics References Subject Index Name Index

  2. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

    Science.gov (United States)

    Schwartz, Todd A.

    2013-01-01

    I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

  3. A Demonstration of the Analysis of Variance Using Physical Movement and Space

    Science.gov (United States)

    Owen, William J.; Siakaluk, Paul D.

    2011-01-01

    Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…

  4. Smoothing spline analysis of variance approach for global sensitivity analysis of computer codes

    International Nuclear Information System (INIS)

    The paper investigates a nonparametric regression method based on the smoothing spline analysis of variance (ANOVA) approach to address the problem of global sensitivity analysis (GSA) of complex and computationally demanding computer codes. The two-step algorithm of this method involves an estimation procedure and a variable selection. The latter can become computationally demanding when dealing with high-dimensional problems. Thus, we proposed a new algorithm based on Landweber iterations. Using the fact that the considered regression method is based on ANOVA decomposition, we introduced a new direct method for computing sensitivity indices. Numerical tests performed on several analytical examples and on an application from petroleum reservoir engineering showed that the method gives competitive results compared to a more standard Gaussian process approach

  5. Application of the analysis of variance for the determination of reinforcement structure homogeneity in MMC

    OpenAIRE

    K. Gawdzińska; S. Berczyński; M. Pelczar; J. Grabian

    2010-01-01

    These authors propose a new definition of homogeneity verified by variance analysis. The analysis aimed at quantitative variables describing the homogeneity of reinforcement structure, i.e. surface areas, reinforcement phase diameter and percentage of reinforcement area contained in a circle within a given region. The examined composite material consisting of silicon carbide reinforcement particles in AlSi11 alloy matrix was made by mechanical mixing.

  6. Variance of volume estimators

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří

    Jena: Friedrich-Schiller-Universität, 2007, p. 23. [14th Workshop on Stochastic Geometry, Stereology and Image Analysis, 23.09.2007-28.09.2007, Neudietendorf] R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords: spr2 * stereology * volume * variance Subject RIV: EA - Cell Biology

  7. Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)

    Science.gov (United States)

    Steyn, H. S., Jr.; Ellis, S. M.

    2009-01-01

    When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…

  8. A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance

    Science.gov (United States)

    Liu, Xiaofeng Steven

    2010-01-01

    The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidian space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus "F"…

  9. A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists

    Science.gov (United States)

    Warne, Russell T.

    2014-01-01

    Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not…

  10. Publishing Nutrition Research: A Review of Multivariate Techniques Part 2: Analysis of Variance.

    OpenAIRE

    Harris, Jeffrey E.; Sheean, Patricia M.; Philip M. Gleason; Barbara Bruemmer; Carol Boushey

    2012-01-01

    This article is the eighth in a series exploring the importance of research design, statistical analysis, and epidemiology in nutrition and dietetics research. It is the second article in that series focused on multivariate statistical analytical techniques. This review examines the statistical technique of analysis of variance (ANOVA), from its simplest form to multivariate applications. It addresses all these applications and includes hypothetical and real examples from the field of dietetics.

  11. Toward an objective evaluation of teacher performance: The use of variance partitioning analysis, VPA.

    Directory of Open Access Journals (Sweden)

    Eduardo R. Alicias

    2005-05-01

    Evaluation of teacher performance is usually done with ratings made by students, peers, and principals or supervisors, and at times with self-ratings made by the teachers themselves. The trouble with this practice is that it is obviously subjective and vulnerable to what Glass and Martinez call the "politics of teacher evaluation," as well as to professional incapacities of the raters. The value-added analysis (VAA) model is one attempt to make evaluation objective and evidence-based. However, the VAA model, especially that of the Tennessee Value Added Assessment System (TVAAS) developed by William Sanders, appears flawed essentially because it posits the untenable assumption that the gain score of students (value added) is attributable solely to the teacher(s), ignoring other significant explanators of student achievement such as IQ and socio-economic status. Further, the use of the gain score (value added) as a dependent variable appears hobbled by the validity threat called "statistical regression," as well as by the problem of isolating the conflated effects of two or more teachers. The proposed variance partitioning analysis (VPA) model seeks to partition the total variance of the dependent variable (post-test student achievement) into various portions representing: first, the effects attributable to the set of teacher factors; second, the effects attributable to the set of control variables, the most important of which are the student's IQ, his pretest score on that particular dependent variable, and some measures of his socio-economic status; and third, the unexplained effects/variance. It is not difficult to see that when the second and third quanta of variance are partitioned out of the total variance of the dependent variable, what remains is that attributable to the teacher. Two measures of teacher effect are hereby proposed: the proportional teacher effect and the direct teacher effect.
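The partitioning idea in this record, a teacher effect measured as the variance remaining after the control variables are accounted for, can be illustrated as an incremental R-squared computation. The data below are fabricated for illustration and the VPA model itself is more elaborate; this sketch only shows the "partial out the controls, attribute the increment" logic:

```python
def r_squared(y, X):
    """R^2 of OLS of y on the columns of X (intercept added); normal
    equations solved by Gaussian elimination. Educational sketch only."""
    n = len(y)
    Xa = [[1.0] + list(row) for row in X]
    p = len(Xa[0])
    # Normal equations: (X'X) beta = X'y.
    A = [[sum(Xa[i][a] * Xa[i][b] for i in range(n)) for b in range(p)]
         for a in range(p)]
    b = [sum(Xa[i][a] * y[i] for i in range(n)) for a in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, p))) / A[r][r]
    ybar = sum(y) / n
    yhat = [sum(be * xv for be, xv in zip(beta, Xa[i])) for i in range(n)]
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Fabricated data: post-test score = pretest + 2, plus 3 points under teacher 1.
pretest = [50, 55, 60, 65, 70, 75, 80, 85]
teacher = [0, 1, 0, 1, 0, 1, 0, 1]           # two teachers, dummy-coded
post = [52, 60, 62, 70, 72, 80, 82, 90]
r2_controls = r_squared(post, [[x] for x in pretest])
r2_full = r_squared(post, [[x, t] for x, t in zip(pretest, teacher)])
teacher_effect = r2_full - r2_controls        # variance uniquely added by teacher
```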

  12. Analysis of variance of quantitative parameters bidders offers for public procurement in the chosen sector

    OpenAIRE

    Gavorníková, Katarína

    2012-01-01

    The goal of this work was to find out which determinants influence the variance of the price bids offered by bidders for public procurement, and in what direction, as well as the bidders' behavior during the selection process. This work focused on public procurement for construction works declared by a municipal procurement authority. Regression analysis confirmed the estimated price and the ratio of the final to the estimated price of the public procurement as the strongest influences. Increasing estimated price raises ...

  13. Application of Rejection Sampling based methodology to variance based parametric sensitivity analysis

    International Nuclear Information System (INIS)

    For estimating the effect of an uncertain distribution parameter on the variance of the failure probability function (FPF), the map from distribution parameters to the FPF is built and a highly efficient approximation form is extended to solve the parametric variance-based sensitivity index. The parametric variance-based sensitivity index can first be expressed as moments of the FPF, and the FPF is approximated by a product of univariate functions of the distribution parameters, from which the moments of the FPF can be easily evaluated by Gaussian integration using the values of the FPF at the Gaussian nodes. Thus the primary task of evaluating the parametric variance-based sensitivity is transformed into calculating the FPF at the Gaussian nodes of the univariate functions, for which Monte Carlo (MC), Extended Monte Carlo (EMC) and Rejection Sampling (RS) are employed and compared here. Only one set of samples of the inputs is needed in either EMC or RS. Several numerical and engineering examples are presented to verify the accuracy and efficiency of the proposed approximate methods. Additionally, the results also reveal the virtue of RS, which can be more accurate and less restricted than EMC. - Highlights: • An efficient approximate form is applied for parametric sensitivity analysis. • Gaussian integration techniques are used to compute the moments of the FPF. • Extended Monte Carlo is used to compute the FPF with only one set of samples. • Rejection Sampling is applied to estimate the FPF by reusing the original samples

  14. Toward a more robust variance-based global sensitivity analysis of model outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
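The first-order Sobol' indices referred to in this record can be estimated with a pick-and-freeze Monte Carlo scheme. A compact sketch using plain random sampling rather than the replicated Latin hypercube of McKay's method:

```python
import random

def sobol_first_order(f, dim, n, rng):
    """Monte Carlo estimate of Sobol' first-order indices S_i using a
    pick-and-freeze scheme with independent uniform(0,1) inputs.
    Illustrative sketch; production GSA would use variance-reduced or
    quasi-random sampling."""
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    f0 = sum(fA) / n
    var = sum(v * v for v in fA) / n - f0 * f0   # total output variance
    S = []
    for i in range(dim):
        # AB: rows of A with column i taken from B ("freeze" all but x_i).
        AB = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fAB = [f(x) for x in AB]
        Vi = sum(fb * (fab - fa)
                 for fa, fb, fab in zip(fA, fB, fAB)) / n
        S.append(Vi / var)
    return S

# Additive test function: f = x1 + 2*x2 has analytic indices 1/5 and 4/5.
rng = random.Random(0)
s1, s2 = sobol_first_order(lambda x: x[0] + 2 * x[1], 2, 20000, rng)
```

For this additive function the two first-order indices should sum to roughly one; deviations reflect Monte Carlo error, which is exactly the accuracy-assessment issue the paper addresses.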

  15. Structure analysis of interstellar clouds: I. Improving the Delta-variance method

    CERN Document Server

    Ossenkopf, V; Stutzki, J

    2008-01-01

    The Delta-variance analysis has proven to be an efficient and accurate method of characterising the power spectrum of interstellar turbulence. The implementation presently in use, however, has several shortcomings. We propose and test an improved Delta-variance algorithm for two-dimensional data sets, which is applicable to maps with variable error bars and which can be quickly computed in Fourier space. We calibrate the spatial resolution of the Delta-variance spectra. The new Delta-variance algorithm is based on an appropriate filtering of the data in Fourier space. It allows us to distinguish the influence of variable noise from the actual small-scale structure in the maps and it helps for dealing with the boundary problem in non-periodic and/or irregularly bounded maps. We try several wavelets and test their spatial sensitivity using artificial maps with well known structure sizes. It turns out that different wavelets show different strengths with respect to detecting characteristic structures and spectr...
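To convey the idea behind the Delta-variance, here is a one-dimensional toy version: convolve the data with a zero-mean "French hat" filter of scale L and take the mean square of the result. The paper's actual algorithm works on 2D maps in Fourier space with calibrated wavelets; this sketch only illustrates the principle:

```python
def delta_variance_1d(data, L):
    """One-dimensional analogue of the Delta-variance: filter with a
    zero-mean kernel (positive core of width L flanked by negative wings)
    and return the mean square of the filtered signal. Didactic sketch,
    not the paper's Fourier-space 2D algorithm."""
    core = [1.0 / L] * L           # integrates to +1
    wing = [-1.0 / (2 * L)] * L    # each wing integrates to -1/2
    kernel = wing + core + wing    # zero-sum by construction
    m = len(kernel)
    filtered = [sum(k * data[i + j] for j, k in enumerate(kernel))
                for i in range(len(data) - m + 1)]
    return sum(v * v for v in filtered) / len(filtered)

# A constant map has structure on no scale: the filter output is zero.
assert delta_variance_1d([3.0] * 50, 4) == 0.0
# A single bump of width ~4 shows more power at L=4 than at a mismatched scale.
bump = [0.0] * 20 + [1.0] * 4 + [0.0] * 20
assert delta_variance_1d(bump, 4) > delta_variance_1d(bump, 1)
```

Scanning L and looking for peaks or power-law slopes in the resulting spectrum is what identifies characteristic structure sizes and scaling behaviour.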

  16. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    Science.gov (United States)

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of the best linear unbiased estimators of linear parameter combinations. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of the split panel design, given a budget, and transform it to a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm’s efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  17. The Efficiency of Split Panel Designs in an Analysis of Variance Model.

    Science.gov (United States)

    Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of the best linear unbiased estimators of linear parameter combinations. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of the split panel design, given a budget, and transform it to a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  18. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
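For reference, the Allan variance discussed in this record is computed from phase data by the standard second-difference estimator; a brief sketch with illustrative inputs:

```python
def allan_variance(phase, tau0, m):
    """Overlapping Allan variance from phase data x (seconds) sampled at
    interval tau0, for averaging factor m:
        sigma_y^2(m*tau0) = <(x[i+2m] - 2*x[i+m] + x[i])^2> / (2*(m*tau0)^2)
    Standard textbook estimator; sketch for illustration."""
    tau = m * tau0
    d2 = [phase[i + 2 * m] - 2 * phase[i + m] + phase[i]
          for i in range(len(phase) - 2 * m)]
    return sum(d * d for d in d2) / (2 * tau * tau * len(d2))

# A perfect oscillator with a constant frequency offset has linear phase,
# so all second differences -- and the Allan variance -- vanish (up to
# floating-point rounding).
linear_phase = [1e-9 * k for k in range(100)]
assert allan_variance(linear_phase, tau0=1.0, m=4) < 1e-30
```

Because it is built on second differences, the Allan variance is insensitive to constant frequency offsets, which is part of why (as the record shows) it does not pin down the spectrum the way the first-difference variance does.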

  19. Contrasting regional architectures of schizophrenia and other complex diseases using fast variance components analysis

    DEFF Research Database (Denmark)

    Loh, Po-Ru; Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; Grove, Jakob; O’Donovan, Michael C; Neale, Benjamin M; Patterson, Nick; Price, Alkes L

    2015-01-01

    Heritability analyses of genome-wide association study (GWAS) cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here we analyze the genetic architectures of schizophrenia in 49,806 samples from the PGC a...... liabilities. To accomplish these analyses, we developed a fast algorithm for multicomponent, multi-trait variance-components analysis that overcomes prior computational barriers that made such analyses intractable at this scale....

  20. Structure analysis of simulated molecular clouds with the Δ-variance

    Science.gov (United States)

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    2015-07-01

    We employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm-3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth-size relation ranging from 0.4 to 0.7 for the total and H2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H2 density by a factor of 1.5-3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of ˜100 cm-3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  1. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis

    Science.gov (United States)

    Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of ...

  2. Comparison of performance between rescaled range analysis and rescaled variance analysis in detecting abrupt dynamic change

    Institute of Scientific and Technical Information of China (English)

    何文平; 刘群群; 姜允迪; 卢莹

    2015-01-01

    In the present paper, a comparison of the performance between moving cutting data-rescaled range analysis (MC-R/S) and moving cutting data-rescaled variance analysis (MC-V/S) is made. The results clearly indicate that the operating efficiency of the MC-R/S algorithm is higher than that of the MC-V/S algorithm. In our numerical test, the computer time consumed by MC-V/S is approximately 25 times that consumed by MC-R/S for an identical window size in artificial data. Apart from this difference in operating efficiency, there are no significant differences in performance between MC-R/S and MC-V/S for abrupt dynamic change detection. MC-R/S and MC-V/S both display some degree of anti-noise ability. However, it is important to consider the influence of strong noise on the detection results of MC-R/S and MC-V/S in practical applications.
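    The classical rescaled-range (R/S) statistic underlying both algorithms can be sketched as follows. This is a generic illustration of plain R/S on a single window, not the moving-cutting MC-R/S procedure from the record:

    ```python
    import numpy as np

    def rescaled_range(series):
        """Classical R/S statistic of a 1-D series over a single window."""
        x = np.asarray(series, dtype=float)
        y = np.cumsum(x - x.mean())      # cumulative deviation profile
        r = y.max() - y.min()            # range of the profile
        s = x.std(ddof=0)                # standard deviation of the series
        return r / s

    rng = np.random.default_rng(0)
    noise = rng.normal(size=1024)
    rs = rescaled_range(noise)
    print(rs)
    ```

    In Hurst analysis one evaluates this statistic over windows of increasing length n and estimates the Hurst exponent from the slope of log(R/S) against log(n).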

  3. Applying the Generalized Waring model for investigating sources of variance in motor vehicle crash analysis.

    Science.gov (United States)

    Peng, Yichuan; Lord, Dominique; Zou, Yajie

    2014-12-01

    As one of the major analysis methods, statistical models play an important role in traffic safety analysis. They can be used for a wide variety of purposes, including establishing relationships between variables and understanding the characteristics of a system. The purpose of this paper is to document a new type of model that can help with the latter. This model is based on the Generalized Waring (GW) distribution. The GW model yields more information about the sources of the variance observed in datasets than other traditional models, such as the negative binomial (NB) model. In this regard, the GW model can separate the observed variability into three parts: (1) the randomness, which explains the model's uncertainty; (2) the proneness, which refers to the internal differences between entities or observations; and (3) the liability, which is defined as the variance caused by other external factors that are difficult to identify and have not been included as explanatory variables in the model. The analyses were performed on two observed datasets to explore potential sources of variation. The results show that the GW model can provide meaningful information about sources of variance in crash data and also performs better than the NB model. PMID:25173723

  4. THE EFFECTS OF DISAGGREGATED SAVINGS ON ECONOMIC GROWTH IN MALAYSIA - GENERALISED VARIANCE DECOMPOSITION ANALYSIS

    OpenAIRE

    Chor Foon Tang; Hooi Hooi Lean

    2009-01-01

    This study examines how much of the variance in economic growth can be explained by various categories of domestic and foreign savings in Malaysia. The bounds testing approach to cointegration and the generalised forecast error variance decomposition technique were used to achieve the objective of this study. The cointegration test results demonstrate that the relationship between economic growth and savings in Malaysia is stable and coalescing in the long run. The variance decomposition find...

  5. Methods and applications of linear models regression and the analysis of variance

    CERN Document Server

    Hocking, Ronald R

    2013-01-01

    Praise for the Second Edition"An essential desktop reference book . . . it should definitely be on your bookshelf." -Technometrics A thoroughly updated book, Methods and Applications of Linear Models: Regression and the Analysis of Variance, Third Edition features innovative approaches to understanding and working with models and theory of linear regression. The Third Edition provides readers with the necessary theoretical concepts, which are presented using intuitive ideas rather than complicated proofs, to describe the inference that is appropriate for the methods being discussed. The book

  6. On Mean-Variance Analysis

    OpenAIRE

    Yang Li; Pirvu, Traian A

    2011-01-01

    This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.
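    For context, the classical linearly-constrained mean-variance quadratic program (before the delta-gamma extension the paper studies) has a closed-form solution. A minimal sketch with illustrative two-asset numbers:

    ```python
    import numpy as np

    def min_variance_weights(cov, mu, target):
        """Minimum-variance fully-invested portfolio achieving a target mean return.
        Closed-form Lagrangian solution of: min w'Σw  s.t.  w'1 = 1, w'μ = target.
        Illustrative only; the record's delta-gamma program adds derivative terms.
        """
        n = len(mu)
        ones = np.ones(n)
        inv = np.linalg.inv(cov)
        a = ones @ inv @ ones
        b = ones @ inv @ mu
        c = mu @ inv @ mu
        d = a * c - b * b
        lam = (c - b * target) / d       # Lagrange multiplier for the budget constraint
        gam = (a * target - b) / d       # Lagrange multiplier for the return constraint
        return inv @ (lam * ones + gam * mu)

    cov = np.array([[0.04, 0.01], [0.01, 0.09]])   # hypothetical covariance matrix
    mu = np.array([0.08, 0.12])                    # hypothetical expected returns
    w = min_variance_weights(cov, mu, 0.10)
    print(w, w.sum(), w @ mu)
    ```

    Both constraints can be verified directly: the weights sum to one and deliver exactly the target mean return.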

  7. Structure analysis of simulated molecular clouds with the Delta-variance

    CERN Document Server

    Bertram, Erik; Glover, Simon C O

    2015-01-01

    We employ the Delta-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n_0 = 30, 100 and 300 cm^{-3} that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Delta-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 -> 0) lines. The spectral slopes of the Delta-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth-size relation ranging from 0.4 to 0....

  8. Mean-Variance-Instability Portfolio Analysis: A Case of Taiwan's Stock Market

    OpenAIRE

    Shawin Lee; Kuo-Ping Chang

    1995-01-01

    This paper applies Talpaz, Harpaz, and Penson's (THP) (Talpaz, H., A. Harpaz, J. B. Penson, Jr. 1983. Risk and spectral instability in portfolio analysis. Eur. J. Oper. Res. 14 262--269.) mean-variance-instability portfolio selection model to eight selected Taiwan stocks during 1980--89 to demonstrate how instability preference affects the traditional mean-variance frontier. In contrast to THP's finding, the empirical results show that Taiwan's high-frequency stocks have high, not low, varian...

  9. ELLIPTICAL SYMMETRY, EXPECTED UTILITY, AND MEAN-VARIANCE ANALYSIS

    OpenAIRE

    Carl H. NELSON; Ndjeunga, Jupiter

    1997-01-01

    Mean-variance analysis in the form of risk programming has a long, productive history in agricultural economics research. And risk programming continues to be used despite well known theoretical results that choices based on mean-variance analysis are not consistent with choices based on expected utility maximization. This paper demonstrates that the multivariate distribution of returns used in risk programming must be elliptically symmetric in order for mean-variance analysis to be consisten...

  10. Analysis of variance on thickness and electrical conductivity measurements of carbon nanotube thin films

    Science.gov (United States)

    Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard

    2016-09-01

    One of the major challenges towards controlling the transfer of electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify the variations in bulk properties when the nanotubes are used as the reinforcement material. In this study, we conducted one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing data collected from both experienced and inexperienced operators, we identified operational details that users might overlook and that introduce variation, since conductivity measurements of CNT thin films are very sensitive to the thickness measurements. In addition, we demonstrated how issues in the measurements damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity results. Based on this study, we propose a faster, more reliable approach to measuring the thickness of CNT thin films that operators can follow to make these measurement processes less dependent on operator skill.
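    The one-way ANOVA used here partitions total variability into between-group and within-group sums of squares and compares their mean squares. A self-contained sketch with made-up measurement data (not the paper's data):

    ```python
    import numpy as np

    def one_way_anova_F(*groups):
        """One-way ANOVA F statistic computed from first principles."""
        all_data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
        grand = all_data.mean()
        k = len(groups)                      # number of groups
        n = len(all_data)                    # total sample size
        ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
        ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
        ms_between = ss_between / (k - 1)    # between-group mean square
        ms_within = ss_within / (n - k)      # within-group mean square
        return ms_between / ms_within

    # Hypothetical thickness readings from three operators
    a = [10.1, 10.3, 9.9, 10.2]
    b = [10.0, 10.4, 10.1, 10.3]
    c = [12.0, 12.2, 11.9, 12.1]
    F = one_way_anova_F(a, b, c)
    print(F)
    ```

    A large F relative to the F(k-1, n-k) reference distribution indicates that at least two group means differ; `scipy.stats.f_oneway` returns the same statistic along with its p-value.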

  11. Visualization Method for Finding Critical Care Factors in Variance Analysis

    OpenAIRE

    YUI, Shuntaro; BITO, Yoshitaka; OBARA, Kiyohiro; KAMIYAMA, Takuya; SETO, Kumiko; Ban, Hideyuki; HASHIZUME, Akihide; HAGA, Masashi; Oka, Yuji

    2006-01-01

    We present a novel visualization method for finding care factors in variance analysis. The analysis has two stages: the first enables users to extract a significant variance, and the second enables them to find the critical care factors of that variance. The analysis has been validated using synthetically created inpatient care processes. The method was found to be effective in improving clinical pathways.

  12. Modeling the Variance of Variance Through a Constant Elasticity of Variance Generalized Autoregressive Conditional Heteroskedasticity Model

    OpenAIRE

    Saedi, Mehdi; Wolk, Jared

    2012-01-01

    This paper compares a standard GARCH model with a Constant Elasticity of Variance GARCH model across three major currency pairs and the S&P 500 index. We discuss the advantages and disadvantages of using a more sophisticated model designed to estimate the variance of variance instead of assuming it to be a linear function of the conditional variance. The current stochastic volatility and GARCH analogues rest upon this linear assumption. We are able to confirm through empirical estimation ...

  13. [Discussion of errors and measuring strategies in morphometry using analysis of variance].

    Science.gov (United States)

    Rother, P; Jahn, W; Fitzl, G; Wallmann, T; Walter, U

    1986-01-01

    Statistical techniques known as the analysis of variance make it possible for the morphologist to plan work so as to obtain quantitative data with the greatest possible economy of effort. This paper explains how to decide how many measurements to make per micrograph, how many micrographs per tissue block or organ, and how many organs or individuals are necessary to obtain results of sufficient precision. The examples are taken from measurements of the volume density of mitochondria in heart muscle cells and from cell counting in lymph nodes. Finally, we show how to determine sample sizes when the aim is to demonstrate significant differences between mean values. PMID:3569811

  14. Analysis of open-loop conical scan pointing error and variance estimators

    Science.gov (United States)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.

  15. New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces

    Science.gov (United States)

    Sidick, Erkin

    2010-01-01

    Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).

  16. Mean Variance Optimization of Non-Linear Systems and Worst-case Analysis

    OpenAIRE

    Parpas, Panos; Rustem, Berc; Wieland, Volker; Zakovic, Stan

    2006-01-01

    In this paper, we consider expected value, variance and worst-case optimization of nonlinear models. We present algorithms for computing optimal expected values, and variance, based on iterative Taylor expansions. We establish convergence and consider the relative merits of policies based on expected value optimization and worst-case robustness. The latter is a minimax strategy and ensures optimal cover in view of the worst-case scenario(s) while the former is optimal expected performance in...

  17. On spectral methods for variance based sensitivity analysis

    OpenAIRE

    Alexanderian, Alen

    2013-01-01

    Consider a mathematical model with a finite number of random parameters. Variance based sensitivity analysis provides a framework to characterize the contribution of the individual parameters to the total variance of the model response. We consider the spectral methods for variance based sensitivity analysis which utilize representations of square integrable random variables in a generalized polynomial chaos basis. Taking a measure theoretic point of view, we provide a rigorous and at the sam...
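    Variance-based sensitivity analysis quantifies each parameter's first-order Sobol' index, the fraction of output variance attributable to that parameter alone. The record discusses spectral (polynomial chaos) computation of these indices; the sketch below instead uses the standard pick-freeze Monte-Carlo estimator on an additive toy model with known analytic indices:

    ```python
    import numpy as np

    def sobol_first_order(f, n, d, seed=0):
        """Pick-freeze Monte-Carlo estimate of first-order Sobol' indices
        (Saltelli-style estimator; illustrative, not the spectral approach)."""
        rng = np.random.default_rng(seed)
        A = rng.uniform(size=(n, d))
        B = rng.uniform(size=(n, d))
        fA, fB = f(A), f(B)
        var = fA.var()
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                 # freeze column i from B, keep the rest from A
            S[i] = np.mean(fB * (f(ABi) - fA)) / var
        return S

    # Additive toy model Y = X1 + 2*X2 with Xi ~ U(0,1):
    # Var = 1/12 + 4/12, so the analytic first-order indices are 0.2 and 0.8
    model = lambda X: X[:, 0] + 2.0 * X[:, 1]
    S = sobol_first_order(model, 100_000, 2)
    print(S)
    ```

    For an additive model the first-order indices sum to one; interactions between parameters would show up as a shortfall in that sum.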

  18. NASTRAN variance analysis and plotting of HBDY elements. [analysis of uncertainties of the computer results as a function of uncertainties in the input data

    Science.gov (United States)

    Harder, R. L.

    1974-01-01

    The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties in the computed results as a function of uncertainties in the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus another solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives, and then can determine the expected deviations of the output quantities.
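    The scheme the record describes (one solution at the expected input values, plus one perturbed solution per uncertain input, differenced to form derivatives) amounts to first-order variance propagation, Var[T] ≈ Σᵢ (∂T/∂pᵢ)² Var[pᵢ]. A generic sketch with a hypothetical thermal model, not NASTRAN itself:

    ```python
    def output_variance(model, nominal, input_vars, h=1e-6):
        """First-order variance propagation via forward finite differences:
        one base evaluation plus one perturbed evaluation per uncertain input."""
        base = model(nominal)
        var = 0.0
        for i, v in enumerate(input_vars):
            p = list(nominal)
            p[i] += h
            deriv = (model(p) - base) / h    # finite-difference sensitivity dT/dp_i
            var += deriv ** 2 * v
        return var

    # Hypothetical steady-state temperature model T = Q / (k * A)
    model = lambda p: p[0] / (p[1] * p[2])
    nominal = [100.0, 5.0, 2.0]      # heat load, conductance, area (illustrative values)
    variances = [4.0, 0.01, 0.0]     # variances of the three inputs
    v = output_variance(model, nominal, variances)
    print(v)
    ```

    Here the two nonzero contributions are (0.1)²·4 and (-2)²·0.01, so the propagated variance is about 0.08; the approximation is first-order and assumes independent inputs.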

  19. Minimum variance imaging based on correlation analysis of Lamb wave signals.

    Science.gov (United States)

    Hua, Jiadong; Lin, Jing; Zeng, Liang; Luo, Zhi

    2016-08-01

    In Lamb wave imaging, MVDR (minimum variance distortionless response) is a promising approach for the detection and monitoring of large areas with sparse transducer network. Previous studies in MVDR use signal amplitude as the input damage feature, and the imaging performance is closely related to the evaluation accuracy of the scattering characteristic. However, scattering characteristic is highly dependent on damage parameters (e.g. type, orientation and size), which are unknown beforehand. The evaluation error can degrade imaging performance severely. In this study, a more reliable damage feature, LSCC (local signal correlation coefficient), is established to replace signal amplitude. In comparison with signal amplitude, one attractive feature of LSCC is its independence of damage parameters. Therefore, LSCC model in the transducer network could be accurately evaluated, the imaging performance is improved subsequently. Both theoretical analysis and experimental investigation are given to validate the effectiveness of the LSCC-based MVDR algorithm in improving imaging performance. PMID:27155349

  20. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review

    CERN Document Server

    Malkin, Zinovy

    2016-01-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. Besides, some physically connected scalar time series naturally form series of multi-dimensional vectors. For example, three station coordinates time series $X$, $Y$, and $Z$ can be combined to analyze 3D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multi-dimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multi-dimensional AVAR (MAVAR), and weighted multi-dimensional AVAR (WMAVAR), were introduced to overcome these ...

  1. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.

  2. Performance of selected imputation techniques for missing variances in meta-analysis

    International Nuclear Information System (INIS)

    A common method of handling the problem of missing variances in meta-analysis of continuous responses is through imputation. However, the performance of imputation techniques may be influenced by the type of model utilised. In this article, we examine through a simulation study the effects of the technique used to impute the missing SDs and of the type of model on the overall meta-analysis estimates. The results suggest that imputation should be adopted to estimate the overall effect size, irrespective of the model used. However, the accuracy of the estimates of the corresponding standard error (SE) is influenced by the imputation technique. For estimates based on the fixed-effects model, mean imputation provides better estimates than multiple imputation, while estimates based on the random-effects model respond more robustly to the choice of imputation technique. The results also showed that although imputation is good at reducing bias in point estimates, it is more likely to produce coverage probabilities higher than the nominal value.
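    The mean-imputation approach the record evaluates can be sketched in a few lines: missing standard errors are replaced by the mean of the observed ones before inverse-variance pooling. The numbers below are made up for illustration:

    ```python
    import numpy as np

    def impute_missing_ses(ses):
        """Mean imputation: replace missing SEs (NaN) with the mean of observed SEs."""
        ses = np.asarray(ses, dtype=float)
        ses[np.isnan(ses)] = np.nanmean(ses)
        return ses

    def fixed_effect_estimate(effects, ses):
        """Inverse-variance weighted fixed-effect pooled estimate and its SE."""
        w = 1.0 / np.square(ses)
        est = np.sum(w * effects) / np.sum(w)
        return est, np.sqrt(1.0 / np.sum(w))

    # Hypothetical study effects; the third study did not report a standard error
    effects = np.array([0.30, 0.45, 0.25, 0.50])
    ses = impute_missing_ses([0.10, 0.12, np.nan, 0.15])
    est, se = fixed_effect_estimate(effects, ses)
    print(est, se)
    ```

    As the record cautions, the pooled point estimate is fairly insensitive to the imputation, but the pooled SE (and hence coverage of confidence intervals) inherits the imputation error.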

  3. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

    Energy Technology Data Exchange (ETDEWEB)

    Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

    2012-01-01

    Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.

  4. FORTRAN IV Program for One-Way Analysis of Variance with A Priori or A Posteriori Mean Comparisons

    Science.gov (United States)

    Fordyce, Michael W.

    1977-01-01

    A flexible Fortran program for computing one way analysis of variance is described. Requiring minimal core space, the program provides a variety of useful group statistics, all summary statistics for the analysis, and all mean comparisons for a priori or a posteriori testing. (Author/JKS)

  5. Analysis of Quantitative Traits in Two Long-Term Randomly Mated Soybean Populations I. Genetic Variances

    Science.gov (United States)

    The genetic effects of long term random mating and natural selection aided by genetic male sterility were evaluated in two soybean [Glycine max (L.) Merr.] populations: RSII and RSIII. Population means, variances, and heritabilities were estimated to determine the effects of 26 generations of random...

  6. The term structure of variance swap rates and optimal variance swap investments

    OpenAIRE

    Egloff, Daniel; Leippold, Markus; Liuren WU

    2010-01-01

    This paper performs specification analysis on the term structure of variance swap rates on the S&P 500 index and studies the optimal investment decision on the variance swaps and the stock index. The analysis identifies two stochastic variance risk factors, which govern the short and long end of the variance swap term structure variation, respectively. The highly negative estimate for the market price of variance risk makes it optimal for an investor to take short positions in a short-term va...

  7. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  8. Adjusting stream-sediment geochemical maps in the Austrian Bohemian Massif by analysis of variance

    Science.gov (United States)

    Davis, J.C.; Hausberger, G.; Schermann, O.; Bohling, G.

    1995-01-01

    The Austrian portion of the Bohemian Massif is a Precambrian terrane composed mostly of highly metamorphosed rocks intruded by a series of granitoids that are petrographically similar. Rocks are exposed poorly and the subtle variations in rock type are difficult to map in the field. A detailed geochemical survey of stream sediments in this region has been conducted and included as part of the Geochemischer Atlas der Republik Österreich, and the variations in stream sediment composition may help refine the geological interpretation. In an earlier study, multivariate analysis of variance (MANOVA) was applied to the stream-sediment data in order to minimize unwanted sampling variation and emphasize relationships between stream sediments and rock types in sample catchment areas. The estimated coefficients were used successfully to correct for the sampling effects throughout most of the region, but also introduced an overcorrection in some areas that seems to result from consistent but subtle differences in composition of specific rock types. By expanding the model to include an additional factor reflecting the presence of a major tectonic unit, the Rohrbach block, the overcorrection is removed. This iterative process simultaneously refines both the geochemical map by removing extraneous variation and the geological map by suggesting a more detailed classification of rock types. © 1995 International Association for Mathematical Geology.

  9. Odor measurements according to EN 13725: A statistical analysis of variance components

    Science.gov (United States)

    Klarenbeek, Johannes V.; Ogink, Nico W. M.; van der Voet, Hilko

    2014-04-01

    In Europe, dynamic olfactometry, as described by the European standard EN 13725, has become the preferred method for evaluating odor emissions emanating from industrial and agricultural sources. Key elements of this standard are the quality criteria for trueness and precision (repeatability). Both are linked to standard values of n-butanol in nitrogen. It is assumed in this standard that whenever a laboratory complies with the overall sensory quality criteria for n-butanol, the quality level is transferable to other, environmental, odors. Although olfactometry is well established, little has been done to investigate inter laboratory variance (reproducibility). Therefore, the objective of this study was to estimate the reproducibility of odor laboratories complying with EN 13725 as well as to investigate the transferability of n-butanol quality criteria to other odorants. Based upon the statistical analysis of 412 odor measurements on 33 sources, distributed in 10 proficiency tests, it was established that laboratory, panel and panel session are components of variance that significantly differ between n-butanol and other odorants (α = 0.05). This finding does not support the transferability of the quality criteria, as determined on n-butanol, to other odorants and as such is a cause for reconsideration of the present single reference odorant as laid down in EN 13725. In case of non-butanol odorants, repeatability standard deviation (sr) and reproducibility standard deviation (sR) were calculated to be 0.108 and 0.282 respectively (log base-10). The latter implies that the difference between two consecutive single measurements, performed on the same testing material by two or more laboratories under reproducibility conditions, will not be larger than a factor 6.3 in 95% of cases. As far as n-butanol odorants are concerned, it was found that the present repeatability standard deviation (sr = 0.108) compares favorably to that of EN 13725 (sr = 0.172). It is therefore

  10. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    Science.gov (United States)

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, three station coordinates time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series. PMID:26540681
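    The classical (unweighted, scalar) Allan variance underlying all these modifications is simply half the mean squared difference of consecutive block averages. A minimal sketch, not the weighted or multidimensional variants the review introduces:

    ```python
    import numpy as np

    def allan_variance(y, m=1):
        """Non-overlapping Allan variance of a time series y at averaging factor m."""
        y = np.asarray(y, dtype=float)
        n = (len(y) // m) * m                        # trim to a whole number of blocks
        means = y[:n].reshape(-1, m).mean(axis=1)    # block averages of length m
        return 0.5 * np.mean(np.diff(means) ** 2)

    rng = np.random.default_rng(1)
    white = rng.normal(size=4096)
    a1, a16 = allan_variance(white, 1), allan_variance(white, 16)
    print(a1, a16)
    ```

    Plotting AVAR against m on a log-log scale identifies the noise type: for white noise it falls off as 1/m, while flicker noise produces a flat floor.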

  11. Analysis and application of minimum variance discrete time system identification. [for adaptive control system design

    Science.gov (United States)

    Kotob, S.; Kaufman, H.

    1976-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
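    An on-line least-squares parameter identifier of this general kind can be sketched with the standard recursive least-squares (RLS) update. This is a generic RLS sketch under additive noise only; the record's minimum-variance filter additionally models multiplicative noise and tracks the parameter covariance itself:

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=1.0):
        """One recursive least-squares step for y = phi' theta + noise.
        Returns the updated parameter estimate and error covariance."""
        phi = phi.reshape(-1, 1)
        k = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
        theta = theta + (k * (y - phi.T @ theta)).ravel()
        P = (P - k @ phi.T @ P) / lam                # covariance update
        return theta, P

    # Identify y = 2*u1 - 1*u2 from noisy samples (illustrative system)
    rng = np.random.default_rng(2)
    theta = np.zeros(2)
    P = np.eye(2) * 100.0
    for _ in range(500):
        phi = rng.normal(size=2)
        y = 2.0 * phi[0] - 1.0 * phi[1] + 0.01 * rng.normal()
        theta, P = rls_update(theta, P, phi, y)
    print(theta)
    ```

    With persistent excitation the estimate converges to the true parameters, which is the consistency property the record proves for its minimum-variance scheme.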

  12. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  13. APRIORI: A FORTRAN IV Computer Program to Select the Most Powerful A Priori Comparison Method in an Analysis of Variance.

    Science.gov (United States)

    Conard, Elizabeth H.; Lutz, J. Gary

    1979-01-01

    A program is described which selects the most powerful among four methods for conducting a priori comparisons in an analysis of variance: orthogonal contrasts, Scheffe's method, Dunn's method, and Dunnett's test. The program supplies the critical t ratio and the per-comparison Type I error risk for each of the relevant methods. (Author/JKS)

  14. Separation of base allele and sampling term effects gives new insights in variance component QTL analysis

    OpenAIRE

    Carlborg Örjan; Rönnegård Lars

    2007-01-01

    Abstract Background Variance component (VC) models are commonly used for Quantitative Trait Loci (QTL) mapping in outbred populations. Here, the QTL effect is given as a random effect and a critical part of the model is the relationship between the phenotypic values and the random effect. In the traditional VC model, each individual has a unique QTL effect and the relationship between these random effects is given as a covariance structure (known as the identity-by-descent (IBD) matrix). Resu...

  15. Flood damage maps: ranking sources of uncertainty with variance-based sensitivity analysis

    Science.gov (United States)

    Saint-Geours, N.; Grelot, F.; Bailly, J.-S.; Lavergne, C.

    2012-04-01

    In order to increase the reliability of flood damage assessment, we need to question the uncertainty associated with the whole flood risk modeling chain. Using a case study on the basin of the Orb River, France, we demonstrate how variance-based sensitivity analysis can be used to quantify uncertainty in flood damage maps at different spatial scales and to identify the sources of uncertainty which should be reduced first. Flood risk mapping is recognized as an effective tool in flood risk management and the elaboration of flood risk maps is now required for all major river basins in the European Union (European directive 2007/60/EC). Flood risk maps can be based on the computation of the Mean Annual Damages indicator (MAD). In this approach, potential damages due to different flood events are estimated for each individual stake over the study area, then averaged over time - using the return period of each flood event - and finally mapped. The issue of uncertainty associated with these flood damage maps should be carefully scrutinized, as they are used to inform the relevant stakeholders or to design flood mitigation measures. Maps of the MAD indicator are based on the combination of hydrological, hydraulic, geographic and economic modeling efforts: as a result, numerous sources of uncertainty arise in their elaboration. Many recent studies describe these various sources of uncertainty (Koivumäki 2010, Bales 2009). Some authors propagate these uncertainties through the flood risk modeling chain and estimate confidence bounds around the resulting flood damage estimates (de Moel 2010). It would now be of great interest to go a step further and to identify which sources of uncertainty account for most of the variability in Mean Annual Damages estimates. We demonstrate the use of variance-based sensitivity analysis to rank sources of uncertainty in flood damage mapping and to quantify their influence on the accuracy of flood damage estimates. We use a quasi

  16. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA)

    OpenAIRE

    Rakesh,; Satish Kumar

    2015-01-01

    The aim of the present study is to determine the most significant input parameters, such as welding current, arc voltage and root gap, during Metal Inert Gas (MIG) welding of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of the weld specimens are investigated in this study. The selected three input parameters were varied at three levels. On this basis, nine experiments were performed based on the L9 orthogonal array of Taguchi’s methodology, which c...

  17. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    Science.gov (United States)

    Alston, D. W.

    1981-01-01

    The considered research had the objective to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the evolved true statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to delete the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
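    The curve-fit-then-test procedure described above can be sketched with an extra-sum-of-squares F statistic; the data below are synthetic, not the wind tunnel measurements:

```python
import numpy as np

def rss(x, y, order):
    """Residual sum of squares of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, order)
    return float(np.sum((y - np.polyval(coeffs, x)) ** 2))

def extra_ss_f(x, y, low, high):
    """Extra-sum-of-squares F statistic for the terms added by the
    higher-order fit (a large F favors keeping the higher order)."""
    rss_low, rss_high = rss(x, y, low), rss(x, y, high)
    df_extra = high - low
    df_resid = len(x) - (high + 1)
    return ((rss_low - rss_high) / df_extra) / (rss_high / df_resid)

# synthetic "force" data with a genuine quadratic component
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 30)
y = 2.0 + 1.5 * x + 0.8 * x ** 2 + rng.normal(0.0, 0.05, x.size)
F = extra_ss_f(x, y, 1, 2)   # linear vs quadratic fit
```

    Increasing the order until the added term is no longer significant mirrors the abstract's strategy of raising the curve-fit order to remove the quadratic effect from the residuals.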

  18. CAIXA: a catalogue of AGN in the XMM-Newton archive. III. Excess variance analysis

    OpenAIRE

    Ponti, G.; Papadakis, I.; Bianchi, S.; Guainazzi, M.; Matt, G.; P. Uttley(Astronomical Institute Anton Pannekoek, University of Amsterdam, The Netherlands); Bonilla, N.F.

    2012-01-01

    Context. We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio quiet, X-ray un-obscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray variability on time scales less than a day. Aims. Recently it has been suggested that the same engine might be at work in the core of every black hole (BH) accreting object. In this hypothe...

  19. Algebraic analysis approach for multibody problems 2. Variance of velocity changes

    International Nuclear Information System (INIS)

    The algebraic model (ALG) proposed by the authors has sufficiently high accuracy in calculating the motion of a test particle with all the field particles at rest. When all the field particles are moving, however, the ALG has poor prediction ability for the motion of a test particle initially at rest. Nonetheless, the ALG approximation gives good results for the statistical quantities, such as the variance of velocity changes or the scattering cross section, for a sufficiently large number of Monte Carlo trials. (author)

  20. Empirical Analysis of Affine vs. Nonaffine Variance Specifications in Jump-Diffusion Models for Equity Indices

    OpenAIRE

    Seeger, N.J.; Rodrigues, P.J.M.; Ignatieva, K.

    2015-01-01

    This paper investigates several crucial issues that arise when modeling equity returns with stochastic variance. (i) Does the model need to include jumps even when using a nonaffine variance specification? We find that jump models clearly outperform pure stochastic volatility models. (ii) How do affine variance specifications perform when compared to nonaffine models in a jump diffusion setup? We find that nonaffine specifications outperform affine models, even after including jumps.

  1. Variance Risk Premiums and Predictive Power of Alternative Forward Variances in the Corn Market

    OpenAIRE

    Zhiguang Wang; Scott W. Fausti; Qasmi, Bashir A.

    2010-01-01

    We propose a fear index for corn using the variance swap rate synthesized from out-of-the-money call and put options as a measure of implied variance. Previous studies estimate implied variance based on Black (1976) model or forecast variance using the GARCH models. Our implied variance approach, based on variance swap rate, is model independent. We compute the daily 60-day variance risk premiums based on the difference between the realized variance and implied variance for the period from 19...
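    A minimal sketch of the model-free implied variance referred to here: a VIX-style discretization of the variance swap rate from OTM option prices, omitting the small forward/strike convexity correction. The quotes are invented, not the corn option data of the paper:

```python
import math

def variance_swap_rate(strikes, otm_prices, r, T):
    """Discrete, model-free implied variance:
    sigma^2 = (2 e^{rT} / T) * sum_i dK_i / K_i^2 * Q(K_i),
    with Q(K) the OTM option price at strike K."""
    total = 0.0
    for i, (K, q) in enumerate(zip(strikes, otm_prices)):
        if i == 0:
            dK = strikes[1] - strikes[0]
        elif i == len(strikes) - 1:
            dK = strikes[-1] - strikes[-2]
        else:
            dK = (strikes[i + 1] - strikes[i - 1]) / 2.0
        total += dK / K ** 2 * q
    return 2.0 * math.exp(r * T) / T * total

# invented 60-day quotes: OTM puts below the forward, OTM calls above
strikes = [90.0, 95.0, 100.0, 105.0, 110.0]
quotes = [0.8, 2.1, 4.0, 1.9, 0.7]
implied_var = variance_swap_rate(strikes, quotes, r=0.01, T=60 / 365)
```

    The variance risk premium in the abstract is then the realized variance over the same horizon minus this swap rate.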

  2. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

    Directory of Open Access Journals (Sweden)

    Renard Jean-Paul

    2005-06-01

    Full Text Available Abstract Background Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility to evaluate multiple sources of variation in an experiment. Results AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, (3) to propose two models for a gene's variance estimation and to perform a correction for multiple comparisons. Conclusion AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.

  3. Longitudinal Analysis of Residual Feed Intake in Mink using Random Regression with Heterogeneous Residual Variance

    DEFF Research Database (Denmark)

    Shirali, Mahmoud; Nielsen, Vivi Hunnicke; Møller, Steen Henrik;

    Heritability of residual feed intake (RFI) increased from low to high over the growing period in male and female mink. The lowest heritability for RFI (male: 0.04 ± 0.01 standard deviation (SD); female: 0.05 ± 0.01 SD) was in early and the highest heritability (male: 0.33 ± 0.02; female: 0.34 ± 0.02 SD) was achieved at the late growth stages. The genetic correlation between different growth stages for RFI showed a high association (0.91 to 0.98) between early and late growing periods. However, phenotypic correlations were lower, from 0.29 to 0.50. The residual variances were substantially higher at the end compared to the early growing period, suggesting that heterogeneous residual variance should be considered for analyzing feed efficiency data in mink. This study suggests random regression methods are suitable for analyzing feed efficiency and that genetic selection for RFI in mink is...

  4. Explaining the Variance of Price Dividend Ratios

    OpenAIRE

    Cochrane, John H.

    1989-01-01

    This paper presents a bound on the variance of the price-dividend ratio and a decomposition of the variance of the price-dividend ratio into components that reflect variation in expected future discount rates and variation in expected future dividend growth. Unobserved discount rates needed to make the variance bound and variance decomposition hold are characterized, and the variance bound and variance decomposition are tested for several discount rate models, including the consumption based ...

  5. FMRI group analysis combining effect estimates and their variances

    OpenAIRE

    Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Michael S Beauchamp; Cox, Robert W.

    2011-01-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an a...

  6. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

    International Nuclear Information System (INIS)

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)

  7. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    OpenAIRE

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general...

  8. On variance estimate for covariate adjustment by propensity score analysis.

    Science.gov (United States)

    Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

    2016-09-10

    Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
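    The empirical bootstrap resampling scheme mentioned here is generic; a minimal sketch (plain resampling of a user-supplied statistic, not the authors' R function) looks like this:

```python
import random
import statistics

def bootstrap_variance(rows, statistic, n_boot=2000, seed=42):
    """Empirical bootstrap: resample rows with replacement, recompute the
    statistic each time, and return the sample variance of the replicates."""
    rng = random.Random(seed)
    n = len(rows)
    reps = [statistic([rows[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return statistics.variance(reps)

# toy check: the bootstrap variance of the sample mean of 50 unit-variance
# draws should sit near the theoretical Var(mean) = 1/50 = 0.02
rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(50)]
var_hat = bootstrap_variance(data, statistics.fmean)
```

    In the paper's setting, `statistic` would refit the PS model and the outcome model on each resample so that the PS estimation step is reflected in the variance.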

  9. The benefit of regional diversification of cogeneration investments in Europe: A mean-variance portfolio analysis

    International Nuclear Information System (INIS)

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. - Research highlights: →Preconditions for CHP investments differ significantly between the EU member states. →Regional diversification of CHP investments can reduce the total portfolio risk. →Risk reduction depends on the chosen CHP technology.
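    The diversification effect described above can be sketched numerically. The figures below are hypothetical, not taken from the paper; the point is only that with imperfect correlation the portfolio volatility falls below the weighted average of the individual volatilities:

```python
import math

def portfolio_stats(w, mu, cov):
    """Portfolio expected return w'mu and volatility sqrt(w' Sigma w)."""
    ret = sum(wi * m for wi, m in zip(w, mu))
    var = sum(w[i] * w[j] * cov[i][j]
              for i in range(len(w)) for j in range(len(w)))
    return ret, math.sqrt(var)

# hypothetical engine-CHP returns in two countries (figures invented)
mu = [0.08, 0.06]
sd = [0.20, 0.12]
rho = 0.3   # low correlation between country/regulatory risks
cov = [[sd[0] ** 2, rho * sd[0] * sd[1]],
       [rho * sd[0] * sd[1], sd[1] ** 2]]

ret, vol = portfolio_stats([0.5, 0.5], mu, cov)
# vol lands below 0.16, the weighted average of the two volatilities
```

    Tracing this over a grid of weights yields the return-risk profiles the authors derive for each CHP technology.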

  10. Positive estimation of the between-group variance component in one-way anova and meta-analysis

    OpenAIRE

    Hartung, Joachim; Makambi, Kepher H.

    2000-01-01

    Positive estimators of the between-group (between-study) variance are proposed. Explicit variance formulae for the estimators are given and approximate confidence intervals for the between-group variance are constructed, as our proposal to a long outstanding problem. By Monte Carlo simulation, the bias and standard deviation of the proposed estimators are compared with the truncated versions of the maxi- mum likelihood (ML) estimator, restricted maximum likelihood (REML) estimator and a (late...
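    For context, the conventional truncated moment estimator that such positive estimators aim to improve can be sketched for a balanced one-way layout:

```python
import statistics

def between_group_variance(groups):
    """Truncated method-of-moments estimator of the between-group variance
    component in a balanced one-way ANOVA: max(0, (MSB - MSW) / n)."""
    k = len(groups)
    n = len(groups[0])                       # balanced design assumed
    means = [statistics.fmean(g) for g in groups]
    grand = statistics.fmean(means)
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return max(0.0, (msb - msw) / n)

groups = [[4.1, 3.9, 4.3], [5.0, 5.2, 4.8], [6.1, 5.9, 6.0]]
sigma2_between = between_group_variance(groups)
```

    The truncation at zero is exactly the feature the paper's always-positive estimators are designed to avoid.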

  11. Comments on the statistical analysis of excess variance in the COBE differential microwave radiometer maps

    Science.gov (United States)

    Wright, E. L.; Smoot, G. F.; Kogut, A.; Hinshaw, G.; Tenorio, L.; Lineweaver, C.; Bennett, C. L.; Lubin, P. M.

    1994-01-01

    Cosmic anisotropy produces an excess variance σ²_sky in the ΔT maps produced by the Differential Microwave Radiometer (DMR) on the Cosmic Background Explorer (COBE) that is over and above the instrument noise. After smoothing to an effective resolution of 10 deg, this excess, σ_sky(10°), provides an estimate for the amplitude of the primordial density perturbation power spectrum with a cosmic uncertainty of only 12%. We employ detailed Monte Carlo techniques to express the amplitude derived from this statistic in terms of the universal root mean square (rms) quadrupole amplitude, (Q²_RMS)^(1/2). The effects of monopole and dipole subtraction and the non-Gaussian shape of the DMR beam cause the derived (Q²_RMS)^(1/2) to be 5%-10% larger than would be derived using simplified analytic approximations. We also investigate the properties of two other map statistics: the actual quadrupole and the Boughn-Cottingham statistic. Both the σ_sky(10°) statistic and the Boughn-Cottingham statistic are consistent with the (Q²_RMS)^(1/2) = 17 ± 5 μK reported by Smoot et al. (1992) and Wright et al. (1992).

  12. Spectral and chromatographic fingerprinting with analysis of variance-principal component analysis (ANOVA-PCA): a useful tool for differentiating botanicals and characterizing sources of variance

    Science.gov (United States)

    Objectives: Spectral fingerprints, acquired by direct injection (no separation) mass spectrometry (DI-MS) or liquid chromatography with UV detection (HPLC), in combination with ANOVA-PCA, were used to differentiate 15 powders of botanical materials. Materials and Methods: Powders of 15 botanical mat...

  13. The Variance of Language in Different Contexts

    Institute of Scientific and Technical Information of China (English)

    申一宁

    2012-01-01

    Language can be quite different (here referring to the meaning) in different contexts, and there are three categories of context: the culture, the situation and the co-text. In this article, we analyze the variance of language in each of these three aspects. This article is written for the purpose of helping people better understand the meaning of a language in a specific context.

  14. SU-E-T-41: Analysis of GI Dose Variability Due to Intrafraction Setup Variance

    International Nuclear Information System (INIS)

    Purpose: Proton SBRT (stereotactic body radiation therapy) can be an effective modality for treatment of gastrointestinal tumors, but is limited in practice due to sensitivity with respect to variation in the RPL (radiological path length). Small, intrafractional shifts in patient anatomy can lead to significant changes in the dose distribution. This study describes a tool designed to visualize uncertainties in radiological depth in patient CTs and aid in treatment plan design. Methods: This project utilizes the Shadie toolkit, a GPU-based framework that allows for real-time interactive calculations for volume visualization. Current SBRT simulation practice consists of a serial CT acquisition for the assessment of inter- and intra-fractional motion utilizing patient-specific immobilization systems. Shadie was used to visualize potential uncertainties, including RPL variance and changes in gastric content. Input for this procedure consisted of two patient CT sets, contours of the desired organ, and a pre-calculated dose. In this study, we performed rigid registrations between sets of 4DCTs obtained from a patient with varying setup conditions. Custom visualizations are written by the user in Shadie, permitting one to create color-coded displays derived from a calculation along each ray. Results: Serial CT data acquired on subsequent days was analyzed for variation in RPL and gastric content. Specific shaders were created to visualize clinically relevant features, including RPL integrated up to organs of interest. Using pre-calculated dose distributions and utilizing segmentation masks as additional input allowed us to further refine the display output from Shadie and create tools suitable for clinical usage. Conclusion: We have demonstrated a method to visualize potential uncertainty for intrafractional proton radiotherapy. We believe this software could prove a useful tool to guide those looking to design treatment plans least

  15. MNEs and Industrial Structure in Host Countries:A Mean Variance Analysis of Ireland’s Manufacturing Sector

    OpenAIRE

    Colm Kearney; Frank Barry

    2005-01-01

    We use mean-variance analysis to demonstrate the importance of a hitherto neglected benefit of enticing MNEs to locate in small and medium-sized countries. During the 25 years from 1974 to 1999, over 1000 foreign MNEs have located in Ireland, and they have raised their share of all manufacturing jobs in the country from one-third to one-half. The foreign MNEs tend to operate in high-technology sectors, and they grow faster with greater volatility than the traditional low-technology indigenous...

  16. An analysis of the factors generating the variance between the budgeted and actual operating results of the Naval Aviation Depot at North Island, California

    OpenAIRE

    Curran, Thomas; Schimpff, Joshua J.

    2008-01-01

    For six of the past eight years, naval aviation depot-level maintenance activities have encountered operating losses that were not anticipated in the Navy Working Capital Fund (NWCF) budgets. These unanticipated losses resulted in increases or surcharges to the stabilized rates as an offset. This project conducts a variance analysis to uncover possible causes of the unanticipated losses. The variance analysis between budgeted (projected) and actual financial results was performed on fina...

  17. Regression Computer Programs for Setwise Regression and Three Related Analysis of Variance Techniques.

    Science.gov (United States)

    Williams, John D.; Lindem, Alfred C.

    Four computer programs using the general purpose multiple linear regression program have been developed. Setwise regression analysis is a stepwise procedure for sets of variables; there will be as many steps as there are sets. Covarmlt allows a solution to the analysis of covariance design with multiple covariates. A third program has three…

  18. Application of SPSS Software in Multivariate Analysis of Variance

    Institute of Scientific and Technical Information of China (English)

    龚江; 石培春; 李春燕

    2012-01-01

    An example of a two-factor completely randomized design with replication is used to elaborate the detailed process of performing analysis of variance in SPSS software, including data input, analysis of the sources of variation, the ANOVA results and the test of significance. Finally, precautions for performing analysis of variance with SPSS are discussed, providing a reference for scientific research workers using SPSS for analysis of variance.
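    The two-factor ANOVA decomposition that SPSS performs in this workflow can be sketched from first principles; this is a minimal, balanced-design sketch with invented data, not SPSS output:

```python
import statistics

def two_way_anova_ss(data):
    """Sums of squares for a balanced two-factor design with replication;
    data[i][j] holds the replicates for level i of factor A, level j of B."""
    a, b, n = len(data), len(data[0]), len(data[0][0])
    grand = statistics.fmean(x for row in data for cell in row for x in cell)
    cell = [[statistics.fmean(c) for c in row] for row in data]
    a_mean = [statistics.fmean(row) for row in cell]
    b_mean = [statistics.fmean(cell[i][j] for i in range(a)) for j in range(b)]
    ss_a = b * n * sum((m - grand) ** 2 for m in a_mean)
    ss_b = a * n * sum((m - grand) ** 2 for m in b_mean)
    ss_ab = n * sum((cell[i][j] - a_mean[i] - b_mean[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    ss_err = sum((x - cell[i][j]) ** 2
                 for i in range(a) for j in range(b) for x in data[i][j])
    return ss_a, ss_b, ss_ab, ss_err

# invented 2x2 design with two replicates per cell
data = [[[3.0, 3.2], [4.1, 3.9]],
        [[5.0, 5.2], [6.9, 7.1]]]
ss_a, ss_b, ss_ab, ss_err = two_way_anova_ss(data)
```

    Dividing each sum of squares by its degrees of freedom and forming F ratios against the error mean square reproduces the significance tests the SPSS output reports.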

  19. MEASURING DRIVERS’ EFFECT IN A COST MODEL BY MEANS OF ANALYSIS OF VARIANCE

    Directory of Open Access Journals (Sweden)

    Maria Elena Nenni

    2013-01-01

    Full Text Available In this study the author analyzes a cost model developed for Integrated Logistic Support (ILS) activities. By means of ANOVA, the impact of and interactions among cost drivers are evaluated. The predominant importance of organizational factors compared to technical ones is clearly demonstrated. Moreover, the paper provides researchers and practitioners with useful information to improve the cost model as well as for budgeting and financial planning of ILS activities.

  20. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA

    Directory of Open Access Journals (Sweden)

    Rakesh

    2015-11-01

    Full Text Available The aim of the present study is to determine the most significant input parameters, such as welding current, arc voltage and root gap, during Metal Inert Gas (MIG) welding of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of the weld specimens are investigated in this study. The selected three input parameters were varied at three levels. On this basis, nine experiments were performed based on the L9 orthogonal array of Taguchi’s methodology. Root gap has the greatest effect on tensile strength, followed by welding current and arc voltage. Arc voltage has the greatest effect on hardness, followed by root gap and welding current. The weld metal consists of fine grains of ferrite and pearlite.
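    The L9 design and the main-effect (level-mean) analysis used in studies like this can be sketched as follows; the array is the standard L9, but the response values are invented for illustration:

```python
import statistics

# Standard L9 orthogonal array for up to four 3-level factors (levels 0-2);
# only the first three columns are used here (current, voltage, root gap).
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]

def main_effect_means(results, n_factors=3):
    """Mean response at each level of each factor (Taguchi main effects);
    the factor whose level means spread the most is the most significant."""
    return [[statistics.fmean(y for run, y in zip(L9, results)
                              if run[f] == lvl)
             for lvl in range(3)]
            for f in range(n_factors)]

# invented tensile-strength responses (MPa) for the nine runs
results = [410, 425, 440, 430, 455, 420, 450, 415, 445]
effects = main_effect_means(results)
```

    ANOVA then partitions the total variation among the three factors to formally rank their significance, as the abstract reports for root gap, current and voltage.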

  1. Cyclostationary analysis with logarithmic variance stabilisation

    Science.gov (United States)

    Borghesani, Pietro; Shahriar, Md Rifat

    2016-03-01

    Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
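    The square envelope spectrum at the center of this discussion can be sketched with an FFT-based Hilbert transform; the signal below is a synthetic CS2 example, not the paper's logarithmically stabilized indicator:

```python
import numpy as np

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum via an FFT-based Hilbert transform:
    CS2 components show up as peaks at their cyclic frequencies."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    env2 = np.abs(np.fft.ifft(X * h)) ** 2       # squared envelope
    ses = np.abs(np.fft.rfft(env2 - env2.mean())) / n
    return np.fft.rfftfreq(n, 1.0 / fs), ses

# synthetic CS2 signal: 10 Hz amplitude modulation of a 200 Hz carrier
fs = 2000
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = (1.0 + 0.8 * np.sin(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 200 * t)
x += 0.1 * rng.standard_normal(t.size)
freqs, ses = squared_envelope_spectrum(x, fs)
peak_freq = freqs[1:][np.argmax(ses[1:])]        # skip the DC bin
```

    The statistical question the paper addresses is then whether such a peak exceeds the normal variability of the SES bins in a healthy machine.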

  2. Variance associated with subject velocity and trial repetition during force platform gait analysis in a heterogeneous population of clinically normal dogs.

    Science.gov (United States)

    Hans, Eric C; Zwarthoed, Berdien; Seliski, Joseph; Nemke, Brett; Muir, Peter

    2014-12-01

    Factors that contribute to variance in ground reaction forces (GRF) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population of clinically normal dogs, it was hypothesized that the dog subject effect would account for the majority of variance in peak vertical force (PVF) and vertical impulse (VI) at a trotting gait, and that narrow velocity ranges would be associated with less variance. Data from 20 normal dogs were obtained. Each dog was trotted across a force platform at its habitual velocity, with controlled acceleration (±0.5 m/s(2)). Variance effects from 12 trotting velocity ranges were examined using repeated-measures analysis-of-covariance. Significance was set at P < 0.05. […] trotting dogs. This concept is important for clinical trial design. PMID:25457264

  3. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data

    DEFF Research Database (Denmark)

    Greve, Douglas N; Svarer, Claus; Fisher, Patrick M;

    2014-01-01

    Exploratory (i.e., voxelwise) spatial methods are commonly used in neuroimaging to identify areas that show an effect when a region-of-interest (ROI) analysis cannot be performed because no strong a priori anatomical hypothesis exists. However, noise at a single voxel is much higher than noise in a ROI, making noise management critical to successful exploratory analysis. This work explores how preprocessing choices affect the bias and variability of voxelwise kinetic modeling analysis of brain positron emission tomography (PET) data. These choices include the use of volume- or cortical surface-based smoothing, level of smoothing, use of voxelwise partial volume correction (PVC), and PVC masking threshold. PVC was implemented using the Muller-Gartner method with the masking out of voxels with low gray matter (GM) partial volume fraction. Dynamic PET scans of an antagonist serotonin-4 receptor

  4. Analysis of NDVI variance across landscapes and seasons allows assessment of degradation and resilience to shocks in Mediterranean dry ecosystems

    Science.gov (United States)

    liniger, hanspeter; jucker riva, matteo; schwilch, gudrun

    2016-04-01

    Mapping and assessment of desertification is a primary basis for effective management of dryland ecosystems. Vegetation cover and biomass density are key elements for the ecological functioning of dry ecosystems, and at the same time an effective indicator of desertification, land degradation and sustainable land management. The Normalized Difference Vegetation Index (NDVI) is widely used to estimate vegetation density and cover. However, the reflectance of vegetation, and thus the NDVI values, are influenced by several factors such as type of canopy, type of land use and seasonality. For example, low NDVI values could be associated with a degraded forest, a healthy forest under dry climatic conditions, an area used as pasture, or an area managed to reduce the fuel load. We propose a simple method to analyse the variance of the NDVI signal considering the main factors that shape the vegetation. This variance analysis enables us to detect and categorize degradation in a much more precise way than simple NDVI analysis. The methodology comprises identifying homogeneous landscape areas in terms of aspect, slope, land use and disturbance regime (if relevant). Secondly, the NDVI is calculated from Landsat multispectral images and the vegetation potential for each landscape is determined based on the percentile (highest 10% value). Thirdly, the difference between the NDVI value of each pixel and the potential is used to establish degradation categories. Through this methodology, we are able to identify realistic objectives for restoration, allowing a targeted choice of management options for degraded areas. For example, afforestation would only be done in areas that show potential for forest growth. Moreover, we can measure the effectiveness of management practices in terms of vegetation growth across different landscapes and conditions.
    Additionally, the same methodology can be applied to a time series of multispectral images, allowing detection and quantification of
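    The percentile-based potential and degradation binning described in this record can be sketched as follows; the unit name, thresholds and NDVI values are invented for illustration:

```python
import statistics

def degradation_categories(ndvi_by_unit, thresholds=(0.05, 0.15, 0.25)):
    """Per homogeneous landscape unit: take the ~90th-percentile NDVI as the
    unit's vegetation potential, then bin each pixel by its shortfall
    (0 = near potential ... 3 = severely degraded)."""
    out = {}
    for unit, pixels in ndvi_by_unit.items():
        potential = statistics.quantiles(pixels, n=10)[-1]
        cats = [sum(potential - v > t for t in thresholds) for v in pixels]
        out[unit] = (potential, cats)
    return out

# invented NDVI pixel values for one landscape unit
ndvi = {"south_slope_shrubland": [0.22, 0.35, 0.41, 0.38, 0.12,
                                  0.40, 0.39, 0.37, 0.30, 0.41]}
result = degradation_categories(ndvi)
```

    Because the potential is computed per landscape unit, a low NDVI on a dry south-facing slope is not misread as degradation, which is the key point of the method.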

  5. VARIANCE ANALYSIS OF WOOL WOVEN FABRICS TENSILE STRENGTH USING ANCOVA MODEL

    OpenAIRE

    VÎLCU Adrian; HRISTIAN Liliana; BORDEIANU Demetra Lăcrămioara

    2014-01-01

    The paper presents a study on the variation of tensile strength for four woven fabrics made from wool-type yarns, depending on fiber composition, warp and weft yarn tensile strength, and technological density, using the ANCOVA regression model. In instances where surveyed groups may have a known history of responding to questions differently, rather than using the traditional sharing method to address those differences, analysis of covariance (ANCOVA) can be employed. ANCOVA shows the corre...

  6. Models with Time-varying Mean and Variance: A Robust Analysis of U.S. Industrial Production

    OpenAIRE

    Charles S. Bos; Koopman, Siem Jan

    2010-01-01

    Many seasonal macroeconomic time series are subject to changes in their means and variances over a long time horizon. In this paper we propose a general treatment for the modelling of time-varying features in economic time series. We show that time series models with mean and variance functions depending on dynamic stochastic processes can be sufficiently robust against changes in their dynamic properties. We further show that the implementation of the treatment is relatively straightforward....

  7. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  8. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    Science.gov (United States)

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. PMID:27114219

  9. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

    2008-06-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  11. VARIANCE ANALYSIS OF WOOL WOVEN FABRICS TENSILE STRENGTH USING ANCOVA MODEL

    Directory of Open Access Journals (Sweden)

    VÎLCU Adrian

    2014-05-01

    The paper has conducted a study on the variation of tensile strength for four woven fabrics made from wool type yarns depending on fiber composition, warp and weft yarns tensile strength and technological density using the ANCOVA regression model. In instances where surveyed groups may have a known history of responding to questions differently, rather than using the traditional sharing method to address those differences, analysis of covariance (ANCOVA) can be employed. ANCOVA shows the correlation between a dependent variable and the covariate independent variables and removes the variability from the dependent variable that can be accounted for by the covariates. The independent and dependent variable structures for Multiple Regression, factorial ANOVA and ANCOVA tests are similar. ANCOVA is differentiated from the other two in that it is used when the researcher wants to neutralize the effect of a continuous independent variable in the experiment. The researcher may simply not be interested in the effect of a given independent variable when performing a study. Another situation where ANCOVA should be applied is when an independent variable has a strong correlation with the dependent variable, but does not interact with other independent variables in predicting the dependent variable's value. ANCOVA is used to neutralize the effect of the more powerful, non-interacting variable. Without this intervention measure, the effects of interacting independent variables can be clouded.
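
As the abstract explains, ANCOVA estimates the group effect after removing the variability attributable to a continuous covariate. A minimal sketch of that idea, fitting the ANCOVA as a linear model by least squares — the two-group data, the covariate name, and all coefficient values below are invented for illustration, not taken from the fabric study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical data: two fabric groups, one covariate (warp yarn strength)
group = np.repeat([0, 1], n)                 # 0/1 group indicator
covariate = rng.normal(50, 5, 2 * n)         # e.g. warp tensile strength
# Simulated truth: strength depends on the covariate AND on the group
y = 10 + 0.8 * covariate + 3.0 * group + rng.normal(0, 2, 2 * n)

# ANCOVA as a linear model: y = b0 + b1*group + b2*covariate
X = np.column_stack([np.ones(2 * n), group, covariate])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# b1 is the covariate-adjusted group effect; b2 is the covariate slope
print(round(b1, 2), round(b2, 2))
```

The covariate-adjusted group difference b1 recovers the simulated group effect of 3.0; comparing raw group means instead would mix in whatever covariate imbalance exists between the groups.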

  12. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    International Nuclear Information System (INIS)

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of
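
The variance-based SA procedure this abstract describes — sample the uncertain inputs from assigned distributions, evaluate the model many times, and rank each input by its variance share S in [0, 1] — can be sketched with a toy Monte Carlo implementation. The linear-quadratic effect model, the parameter distributions, and the quantile-binning estimator of S_i = Var(E[Y|X_i])/Var(Y) below are illustrative assumptions, not the cited work's code:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative uncertain biological inputs (assumed distributions):
alpha = rng.normal(0.2, 0.04, N)    # ~20% relative uncertainty
beta = rng.normal(0.02, 0.008, N)   # ~40% relative uncertainty
dose = 2.0                          # fixed dose per fraction (Gy)

# Linear-quadratic effect, evaluated once per random draw
effect = alpha * dose + beta * dose**2

def first_order_S(x, y, bins=50):
    """Variance-based first-order sensitivity: Var(E[Y|X]) / Var(Y)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.sum(counts * (cond_means - y.mean())**2) / counts.sum() / y.var()

S_alpha = first_order_S(alpha, effect)
S_beta = first_order_S(beta, effect)
print(S_alpha, S_beta)  # each in [0, 1]; larger S = more influential input
```

For an additive model like this one the first-order indices sum to roughly 1; in models with interactions the shortfall from 1 is itself informative.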

  13. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight.

    Science.gov (United States)

    Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David

    2016-01-01

    Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss. PMID:27491895
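
Meta-analysis of variance of the kind described here works on an effect size for variability rather than for means; a common choice is the log variance ratio (lnVR). A minimal sketch, assuming the standard lnVR formula with its small-sample bias correction — the standard deviations and sample sizes below are invented for illustration, not the paper's data:

```python
import math

def ln_vr(sd_t, n_t, sd_c, n_c):
    """Log variance ratio (lnVR) effect size and its sampling variance,
    comparing variability in a treatment vs. a control arm."""
    effect = (math.log(sd_t / sd_c)
              + 1.0 / (2 * (n_t - 1)) - 1.0 / (2 * (n_c - 1)))  # bias correction
    var = 1.0 / (2 * (n_t - 1)) + 1.0 / (2 * (n_c - 1))
    return effect, var

# Hypothetical trial: weight-change SD of 5.2 kg (LC ad libitum diet, n=40)
# vs. 3.8 kg (calorie-restricted diet, n=45)
effect, var = ln_vr(5.2, 40, 3.8, 45)
print(round(effect, 3), round(var, 4))
```

A positive lnVR (here about 0.32) indicates a more variable outcome in the first arm; the sampling variance lets such estimates be pooled across studies with the usual meta-analytic machinery.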

  14. Budget Variance Analysis of a Departmentwide Implementation of a PACS at a Major Academic Medical Center

    OpenAIRE

    Reddy, Arra Suresh; Loh, Shaun; Kane, Robert A.

    2006-01-01

    In this study, the costs and cost savings associated with departmentwide implementation of a picture archiving and communication system (PACS) as compared to the projected budget at the time of inception were evaluated. An average of $214,460 was saved each year with a total savings of $1,072,300 from 1999 to 2003, which is significantly less than the $2,943,750 projected savings. This discrepancy can be attributed to four different factors: (1) overexpenditures, (2) insufficient cost savings...

  15. The Benefit of Regional Diversification of Cogeneration Investments in Europe: A Mean-Variance Portfolio Analysis

    OpenAIRE

    Westner, Günther; Madlener, Reinhard

    2009-01-01

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standar...

  16. Budget variance analysis of a departmentwide implementation of a PACS at a major academic medical center.

    Science.gov (United States)

    Reddy, Arra Suresh; Loh, Shaun; Kane, Robert A

    2006-01-01

    In this study, the costs and cost savings associated with departmentwide implementation of a picture archiving and communication system (PACS) as compared to the projected budget at the time of inception were evaluated. An average of $214,460 was saved each year with a total savings of $1,072,300 from 1999 to 2003, which is significantly less than the $2,943,750 projected savings. This discrepancy can be attributed to four different factors: (1) overexpenditures, (2) insufficient cost savings, (3) unanticipated costs, and (4) project management issues. Although the implementation of PACS leads to cost savings, actual savings will be much lower than expected unless extraordinary care is taken when devising the budget. PMID:16946989

  17. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of model output when the distribution range of one input is reduced and to measure the contribution of different distribution ranges of each input to the variance of model output. In this paper, the revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that compared with the classical variance ratio function, the revised one is more suitable to the evaluation of model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a set of samples for implementing it, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones by using the Ishigami function. Finally, they are applied to a planar 10-bar structure
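
The core idea in this abstract — reduce one input's range and ask how the output mean and variance respond, reusing a single Monte Carlo sample — can be sketched on the Ishigami function the abstract itself uses as a test case. The restriction interval chosen below is an arbitrary illustration, and this simple sample-filtering estimator is only the spirit of the method, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
a, b = 7.0, 0.1

# One set of samples, inputs uniform on [-pi, pi] (Ishigami test function)
x = rng.uniform(-np.pi, np.pi, (N, 3))
y = np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def mean_var_ratios(x_i, y, lo, hi):
    """Ratios of output mean/variance when input i is restricted to [lo, hi],
    computed by filtering (reusing) the single Monte Carlo sample."""
    mask = (x_i >= lo) & (x_i <= hi)
    return y[mask].mean() / y.mean(), y[mask].var() / y.var()

# Restrict x3 to the middle half of its range
m_ratio, v_ratio = mean_var_ratios(x[:, 2], y, -np.pi / 2, np.pi / 2)
print(round(m_ratio, 2), round(v_ratio, 2))
```

Restricting x3 leaves the mean ratio near 1 but roughly halves the output variance, since x3 enters the Ishigami function only through the strongly nonlinear x3^4 term.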

  18. Two-dimensional finite-element temperature variance analysis

    Science.gov (United States)

    Heuser, J. S.

    1972-01-01

    The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specifications of these temperatures reduce errors in thermal calculations.

  19. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  20. Variance associated with the use of relative velocity for force platform gait analysis in a heterogeneous population of clinically normal dogs.

    Science.gov (United States)

    Volstad, Nicola; Nemke, Brett; Muir, Peter

    2016-01-01

    Factors that contribute to variance in ground reaction forces (GRFs) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population, it may be preferable to minimize data variance and efficiently perform force platform gait analysis by evaluation of each individual dog at its preferred velocity, such that dogs are studied at a similar relative velocity (V*). Data from 27 normal dogs were obtained including withers and shoulder height. Each dog was trotted across a force platform at its preferred velocity, with controlled acceleration (±0.5 m/s²). V* ranges were created for withers and shoulder height. Variance effects from 12 trotting velocity ranges and associated V* ranges were examined using repeated-measures analysis-of-covariance. Mean bodyweight was 24.4 ± 7.4 kg. Individual dog, velocity, and V* significantly influenced GRF (P < 0.001). Trial number significantly influenced thoracic limb peak vertical force (PVF) (P < 0.001). Limb effects were not significant. The magnitude of variance effects was greatest for the dog effect. Withers height V* was associated with small GRF variance. Narrow velocity ranges typically captured a smaller percentage of trials and were not consistently associated with lower variance. The withers height V* range of 0.6-1.05 captured the largest proportion of trials (95.9 ± 5.9%) with no significant effects on PVF and vertical impulse. The use of individual velocity ranges derived from a withers height V* range of 0.6-1.05 will account for population heterogeneity while minimizing exacerbation of lameness in clinical trials studying lame dogs by efficient capture of valid trials. PMID:26631945

  1. Common, Specific, and Error Variance Components of Factor Models

    OpenAIRE

    Raffalovich, Lawrence E.; George W. Bohrnstedt

    1987-01-01

    In the classic factor-analysis model, the total variance of an item is decomposed into common, specific, and random error components. Since with cross-sectional data it is not possible to estimate the specific variance component, specific and random error variance are summed to the item's uniqueness. This procedure imposes a downward bias to item reliability estimates, however, and results in correlated item uniqueness in longitudinal models. In this article, we describe a method for estimati...

  2. Aspects of First Year Statistics Students' Reasoning When Performing Intuitive Analysis of Variance: Effects of Within- and Between-Group Variability

    Science.gov (United States)

    Trumpower, David L.

    2015-01-01

    Making inferences about population differences based on samples of data, that is, performing intuitive analysis of variance (IANOVA), is common in everyday life. However, the intuitive reasoning of individuals when making such inferences (even following statistics instruction), often differs from the normative logic of formal statistics. The…

  3. Analysis of Variance in Vocabulary Learning Strategies Theory and Practice: A Case Study in Libya

    Directory of Open Access Journals (Sweden)

    Salma H M Khalifa

    2016-06-01

    The present study is an outcome of a concern for the teaching of English as a foreign language (EFL) in Libyan schools. Learning of a foreign language is invariably linked to learners building a good repertoire of vocabulary of the target language, which takes us to the theory and practice of imparting training in vocabulary learning strategies (VLSs) to learners. The researcher observed that there exists a divergence between theoretical knowledge of VLSs and the practical training of learners in using these strategies in EFL classes in Libyan schools. To empirically examine the situation, a survey was conducted with secondary school English teachers. The study discusses the results of the survey. The results show that teachers of English in secondary school in Libya are either not aware of various vocabulary learning strategies, or if they are, they do not impart training in all VLSs as they do not realize that to achieve good results in language learning, a judicious use of all VLSs is required. Though the study was conducted on a small scale, the results are highly encouraging. Keywords: vocabulary learning strategies, vocabulary learning theory, teaching of vocabulary learning strategies

  4. A variance analysis of the capacity displaced by wind energy in Europe

    DEFF Research Database (Denmark)

    Giebel, Gregor

    2007-01-01

    Wind energy generation distributed all over Europe is less variable than generation from a single region. To analyse the benefits of distributed generation, the whole electrical generation system of Europe has been modelled including varying penetrations of wind power. The model is chronologically simulating the scheduling of the European power plants to cover the demand at every hour of the year. The wind power generation was modelled using wind speed measurements from 60 meteorological stations, for 1 year. The distributed wind power also displaces fossil-fuelled capacity. However, every assessment ... detail into a longer-term context. The results are that wind energy can contribute more than 20% of the European demand without significant changes in the system and can replace conventional capacity worth about 10% of the installed wind power capacity. The long-term reference shows that the analysed ...

  5. Model selection and analysis tools in response surface modeling of the process mean and variance

    OpenAIRE

    Griffiths, Kristi L.

    1995-01-01

    Product improvement is a serious issue facing industry today. And while response surface methods have been developed which address the process mean involved in improving the product there has been little research done on the process variability. Lack of quality in a product can be attributed to its inconsistency in performance thereby highlighting the need for a methodology which addresses process variability. The key to working with the process variability comes in the hand...

  6. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained genetic variance. However, in Holstein cattle, a group of genes that explained close to none of the genetic variance could also have a high likelihood ratio. This is still a good separation of signal and noise, but instead of capturing the genetic signal in the marker set being tested, we are capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups.

  7. FMRI group analysis combining effect estimates and their variances.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W

    2012-03-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach
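
The MEMA idea of carrying each subject's effect estimate together with its within-subject variance up to the group level can be illustrated, in its simplest fixed-effects form, by precision (inverse-variance) weighting. This is only the core weighting step, not MEMA's full mixed-effects machinery, and the per-subject numbers below are hypothetical:

```python
import numpy as np

# Hypothetical per-subject effect estimates (beta) and their within-subject
# variances from first-level analyses at one voxel
beta = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.3])
var_within = np.array([0.10, 0.30, 0.15, 0.25, 0.12, 0.20])

# Inverse-variance (precision) weighted group effect: subjects measured
# more precisely count for more, unlike in the ordinary mean
w = 1.0 / var_within
group_effect = np.sum(w * beta) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
z = group_effect / se

print(round(group_effect, 3), round(z, 2))
```

A full mixed-effects treatment would add a cross-subject variance term to each weight's denominator, which is what keeps a single ultra-precise subject from dominating the group estimate.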

  8. A two-dimensional analysis of variance for the simultaneous quantitative determination of estradiol and its 4-14C isotopically labelled analogue with selected ion monitoring

    International Nuclear Information System (INIS)

    Estradiol-17β and [4-14C]estradiol-17β in total mass amounts of 0, 10 and 20 pg per analysis in the presence of 1000 pg [2H8]estradiol as an internal standard were simultaneously quantitatively determined by gas chromatography-mass spectrometry using the selected ion monitoring technique. The experiment was designed to investigate the two-dimensional error structure which facilitates the study and comparison of the variances and showed that in this concentration range the variance due to sample preparation is smaller than that due to the device. The deviations due to the device were shown to have a two-dimensional normal distribution. (author)

  9. Simultaneous optimal estimates of fixed effects and variance components in the mixed model

    Institute of Scientific and Technical Information of China (English)

    WU; Mixia; WANG; Songgui

    2004-01-01

    For a general linear mixed model with two variance components, a set of simple conditions is obtained, under which, (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking negative value is obtained.
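
Point (iii) of this abstract — that ANOVA estimates of variance components can take negative values — is easy to reproduce with the balanced one-way random-effects estimator sigma_a^2 = (MSB - MSW)/n. The two tiny datasets below are contrived illustrations of the two regimes:

```python
import numpy as np

def anova_components(y):
    """ANOVA (method-of-moments) variance-component estimates for a
    balanced one-way random-effects layout y of shape (groups, reps)."""
    k, n = y.shape
    gm = y.mean(axis=1)
    msb = n * np.sum((gm - y.mean())**2) / (k - 1)        # between-group MS
    msw = np.sum((y - gm[:, None])**2) / (k * (n - 1))    # within-group MS
    return (msb - msw) / n, msw   # (sigma_a^2 hat, sigma_e^2 hat)

# Groups whose means differ: positive between-group variance estimate
y1 = np.array([[1., 2., 3., 4.], [2., 3., 4., 5.], [3., 4., 5., 6.]])
# Groups with identical means but internal spread: the ANOVA estimate
# of sigma_a^2 goes negative -- the pathology the abstract refers to
y2 = np.array([[0., 4., 1., 3.], [4., 0., 3., 1.], [1., 3., 0., 4.]])

print(anova_components(y1))  # sigma_a^2 hat > 0
print(anova_components(y2))  # sigma_a^2 hat < 0
```

Since a variance cannot be negative, a negative ANOVA estimate is usually truncated to zero or avoided altogether with REML or Bayesian estimation, which is part of why the exact probability of a negative estimate (the abstract's point iii) is of interest.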

  10. Power generation mixes evaluation applying the mean-variance theory. Analysis of the choices for Japanese energy policy

    International Nuclear Information System (INIS)

    Optimal Japanese power generation mixes in 2030, for both economic efficiency and energy security (less cost variance risk), are evaluated by applying the mean-variance portfolio theory. Technical assumptions, including remaining generation capacity out of the present generation mix, future load duration curve, and Research and Development risks for some renewable energy technologies in 2030, are taken into consideration as either the constraints or parameters for the evaluation. Efficiency frontiers, which consist of the optimal generation mixes for several future scenarios, are identified, taking not only power balance but also capacity balance into account, and are compared with three power generation mixes submitted by the Japanese government as 'the choices for energy and environment'. (author)
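
The mean-variance portfolio machinery applied to generation mixes in this abstract has a simple core: for a given covariance matrix C of generation costs, the global minimum-variance mix is w = C^{-1}1 / (1'C^{-1}1), and the efficiency frontier traces cost-variance trade-offs around it. A sketch with an assumed three-technology cost covariance matrix — the numbers are illustrative, not the study's data, and real evaluations add capacity and load constraints:

```python
import numpy as np

# Illustrative (assumed) generation-cost covariance for three technologies,
# e.g. nuclear, gas, wind -- not figures from the cited study
cov = np.array([[0.010, 0.002, 0.001],
                [0.002, 0.040, 0.004],
                [0.001, 0.004, 0.025]])

# Global minimum-variance mix: w = C^{-1} 1 / (1' C^{-1} 1)
ones = np.ones(3)
w = np.linalg.solve(cov, ones)
w /= w.sum()

port_var = w @ cov @ w  # equals 1 / (1' C^{-1} 1)
print(np.round(w, 3), round(float(port_var), 4))
```

Note that the resulting mix variance is lower than that of any single technology, which is the diversification argument the portfolio approach brings to energy-policy analysis.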

  11. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

    Science.gov (United States)

    Menard, Richard; Chang, Lang-Ping

    1998-01-01

    A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the

  12. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A; Kyvik, Kirsten O

    2007-01-01

    OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent, twin studies have been used in bivariate or multivariate analysis to elucidate genetic factors common to two or more traits. METHODS AND RESULTS: In the present study, the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect...

  13. Electrocardiogram signal variance analysis in the diagnosis of coronary artery disease--a comparison with exercise stress test in an angiographically documented high prevalence population.

    Science.gov (United States)

    Nowak, J; Hagerman, I; Ylén, M; Nyquist, O; Sylvén, C

    1993-09-01

    Variance electrocardiography (variance ECG) is a new resting procedure for detection of coronary artery disease (CAD). The method measures variability in the electrical expression of the depolarization phase induced by this disease. The time-domain analysis is performed on 220 cardiac cycles using high-fidelity ECG signals from 24 leads, and the phase-locked temporal electrical heterogeneity is expressed as a nondimensional CAD index (CAD-I) with values of 0-150. This study compares the diagnostic efficiency of variance ECG and the exercise stress test in a high prevalence population. A total of 199 symptomatic patients evaluated with coronary angiography were subjected to variance ECG and an exercise test on a bicycle ergometer as a continuous ramp. The discriminant accuracy of the two methods was assessed employing receiver operating characteristic curves constructed by successive consideration of several CAD-I cutpoint values and various threshold criteria based on ST-segment depression exclusively or in combination with exertional chest pain. Of these patients, 175 with CAD (≥50% luminal stenosis in one or more major epicardial arteries) presented a mean CAD-I of 88 ± 22, compared with 70 ± 21 in 24 nonaffected patients. At a CAD-I cutpoint ≥70, compared with ST-segment depression ≥1 mm combined with exertional chest pain, the overall sensitivity of variance ECG was significantly higher (p < 0.01) than that of the exercise test (79 vs. 48%). When combined, the two methods identified 93% of coronary angiography positive cases. Variance ECG is an efficient diagnostic method which compares favorably with the exercise test for detection of CAD in a high prevalence population. PMID:8242912

  14. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    International Nuclear Information System (INIS)

    We address the problem of estimating steady-state quantities associated with systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods

  15. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    Science.gov (United States)

    Milias-Argeitis, Andreas; Lygeros, John; Khammash, Mustafa

    2014-07-01

    We address the problem of estimating steady-state quantities associated with systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.
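
    The shadow-function construction itself is problem-specific, but the underlying idea of adding a zero-mean correction term to reduce estimator variance can be illustrated with a plain control variate, a simpler relative of the method. Everything below is an illustrative toy (estimating E[exp(U)] for U ~ Uniform(0,1)), not the authors' estimator:

```python
import numpy as np

# Toy control-variate estimator for E[exp(U)], U ~ Uniform(0,1).
# The correction term c*(u - 0.5) has known mean zero, so the controlled
# estimator stays unbiased while its variance shrinks sharply.
rng = np.random.default_rng(5)
u = rng.uniform(size=100_000)
f = np.exp(u)                       # plain Monte Carlo samples
c = np.cov(f, u)[0, 1] / u.var()    # near-optimal control coefficient
f_cv = f - c * (u - 0.5)            # controlled samples, same expectation

print(f.mean(), f_cv.mean())        # both estimate e - 1
print(f.var(), f_cv.var())          # the controlled samples have far smaller variance
```

    The shadow-function approach plays an analogous role for stationary averages of Markov chains, where a good zero-mean correction must be built from (approximate) solutions of a Poisson equation.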

  16. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean-variance approach

    DEFF Research Database (Denmark)

    Kitzing, Lena

    2014-01-01

    Different support instruments for renewable energy expose investors differently to market risks. This has implications on the attractiveness of investment. We use mean-variance portfolio analysis to identify the risk implications of two support instruments: feed-in tariffs and feed-in premiums. Using cash flow analysis, Monte Carlo simulations and mean-variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same attractiveness for investment, because they expose investors to less market risk. These risk implications should be considered when designing policy schemes.

  17. The variance of the adjusted Rand index.

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J; Hubert, Lawrence

    2016-06-01

    For 30 years, the adjusted Rand index has been the preferred method for comparing 2 partitions (e.g., clusterings) of a set of observations. Although the index is widely used, little is known about its variability. Herein, the variance of the adjusted Rand index (Hubert & Arabie, 1985) is provided and its properties are explored. It is shown that a normal approximation is appropriate across a wide range of sample sizes and varying numbers of clusters. Further, it is shown that confidence intervals based on the normal distribution have desirable levels of coverage and accuracy. Finally, the first power analysis evaluating the ability to detect differences between 2 different adjusted Rand indices is provided. (PsycINFO Database Record) PMID:26881693
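
    The adjusted Rand index itself is straightforward to compute from the contingency table of the two partitions. A generic implementation of Hubert & Arabie's formula (this sketches only the index, not the paper's variance derivation):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index (Hubert & Arabie, 1985) from the contingency table."""
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))   # joint cell counts n_ij
    a = Counter(labels_a)                            # row sums
    b = Counter(labels_b)                            # column sums
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)            # chance-adjustment term
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)

x = [0, 0, 1, 1, 2, 2]
y = [0, 0, 1, 2, 2, 2]
print(adjusted_rand_index(x, x))   # identical partitions -> 1.0
print(adjusted_rand_index(x, y))
```

    The index equals 1 for identical partitions and has expectation 0 under random labelling, which is the baseline the paper's variance and power results build on.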

  18. An effective approximation for variance-based global sensitivity analysis

    International Nuclear Information System (INIS)

    The paper presents a fairly efficient approximation for the computation of variance-based sensitivity measures associated with a general, n-dimensional function of random variables. The proposed approach is based on a multiplicative version of the dimensional reduction method (M-DRM), in which a given complex function is approximated by a product of low dimensional functions. Together with the Gaussian quadrature, the use of M-DRM significantly reduces the computation effort associated with global sensitivity analysis. An important and practical benefit of the M-DRM is the algebraic simplicity and closed-form nature of sensitivity coefficient formulas. Several examples are presented to show that the M-DRM method is as accurate as results obtained from simulations and other approximations reported in the literature

  19. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section..., and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from Time Requirements § 456.522 Content of request for variance. The agency's request for a variance must include—...

  20. Inhomogeneity-induced variance of cosmological parameters

    CERN Document Server

    Wiegand, Alexander

    2011-01-01

    Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. So, how can local measurements (at the 100 Mpc scale) be used to determine global cosmological parameters (defined at the 10 Gpc scale)? We use Buchert's averaging formalism and determine a set of locally averaged cosmological parameters in the context of the flat Lambda cold dark matter model. We calculate their ensemble means (i.e. their global values) and variances (i.e. their cosmic variances). We apply our results to typical survey geometries and focus on the study of the effects of local fluctuations of the curvature parameter. By this means we show that in the linear regime cosmological backreaction and averaging can be reformulated as the issue of cosmic variance. The cosmic variance is found to be largest for the curvature parameter, and we discuss some of its consequences. We further propose to use the observed variance of cosmological parameters t...

  1. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean–variance approach

    International Nuclear Information System (INIS)

    Different support instruments for renewable energy expose investors differently to market risks. This has implications on the attractiveness of investment. We use mean–variance portfolio analysis to identify the risk implications of two support instruments: feed-in tariffs and feed-in premiums. Using cash flow analysis, Monte Carlo simulations and mean–variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same attractiveness for investment, because they expose investors to less market risk. These risk implications should be considered when designing policy schemes. - Highlights: • Mean–variance portfolio approach to analyse risk implications of policy instruments. • We show that feed-in tariffs require lower support levels than feed-in premiums. • This systematic effect stems from the lower exposure of investors to market risk. • We created a stochastic model for an exemplary offshore wind park in West Denmark. • We quantify risk-return, Sharpe Ratios and differences in required support levels
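
    The Monte Carlo mean-variance comparison described above can be sketched in a few lines. All numbers below (support levels, price and production distributions) are made-up illustrative values, not the paper's Danish case data; the point is only the mechanism: a fixed tariff removes price risk, while a premium leaves investors exposed to it at the same expected revenue:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, years = 10_000, 20
production = rng.normal(1.0, 0.1, (n_sims, years)).clip(min=0.0)  # normalized annual output
price = rng.normal(40.0, 10.0, (n_sims, years)).clip(min=0.0)     # market price, EUR/MWh

tariff, premium = 60.0, 20.0   # hypothetical support levels, EUR/MWh
rev_fit = (production * tariff).mean(axis=1)             # fixed tariff: no price exposure
rev_fip = (production * (price + premium)).mean(axis=1)  # premium on top of market price

for name, r in (("feed-in tariff", rev_fit), ("feed-in premium", rev_fip)):
    print(name, round(r.mean(), 2), round(r.std(), 2))   # same mean, different risk
```

    With equal expected revenue, the tariff's lower standard deviation translates into a higher Sharpe ratio, which is the sense in which it can "require lower direct support levels" for the same investment attractiveness.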

  2. A Monte Carlo Study of Seven Homogeneity of Variance Tests

    Directory of Open Access Journals (Sweden)

    Howard B. Lee

    2010-01-01

    Problem statement: The decision by SPSS (now PASW) to use the unmodified Levene test to test homogeneity of variance was questioned. It was compared to six other tests. In total, seven homogeneity of variance tests used in Analysis Of Variance (ANOVA) were compared on robustness and power using Monte Carlo studies. The homogeneity of variance tests were (1) Levene, (2) modified Levene, (3) Z-variance, (4) Overall-Woodward Modified Z-variance, (5) O'Brien, (6) Samiuddin Cube Root and (7) F-Max. Approach: Each test was subjected to Monte Carlo analysis through different shaped distributions: (1) normal, (2) platykurtic, (3) leptokurtic, (4) moderately skewed and (5) highly skewed. The Levene Test is the one used in all of the latest versions of SPSS. Results: The results from these studies showed that the Levene Test is neither the best nor the worst in terms of robustness and power. However, the modified Levene Test showed very good robustness when compared to the other tests but lower power than other tests. The Samiuddin test is at its best in terms of robustness and power when the distribution is normal. The results of this study showed the strengths and weaknesses of the seven tests. Conclusion/Recommendations: No single test outperformed the others in terms of robustness and power. The authors recommend that kurtosis and skewness indices be presented in statistical computer program packages such as SPSS to guide the data analyst in choosing which test would provide the highest robustness and power.
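
    Levene-type tests can be expressed as a one-way ANOVA on absolute deviations from a group center, which makes a small Monte Carlo robustness check easy to sketch. The sample sizes, replication count and distribution below are illustrative choices, not the study's design:

```python
import numpy as np
from scipy.stats import f_oneway

def levene_p(groups, center):
    """Levene-type test as a one-way ANOVA on absolute deviations from a group
    center: center=np.mean gives the original Levene test, center=np.median
    the modified (Brown-Forsythe) version."""
    z = [np.abs(g - center(g)) for g in groups]
    return f_oneway(*z).pvalue

rng = np.random.default_rng(0)
reps, alpha = 2000, 0.05
rej_mean = rej_median = 0
for _ in range(reps):
    # null hypothesis true (equal variances) but a highly skewed parent distribution
    groups = [rng.exponential(1.0, 20) for _ in range(3)]
    rej_mean += levene_p(groups, np.mean) < alpha
    rej_median += levene_p(groups, np.median) < alpha
print(rej_mean / reps, rej_median / reps)   # empirical type-I error rates
```

    Under skewed data the mean-centered version typically rejects a true null too often, while the median-centered (modified) version stays near the nominal 5%, matching the robustness ranking reported in the abstract.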

  3. Genomic prediction of breeding values using previously estimated SNP variances

    NARCIS (Netherlands)

    Calus, M.P.L.; Schrooten, C.; Veerkamp, R.F.

    2014-01-01

    Background Genomic prediction requires estimation of variances of effects of single nucleotide polymorphisms (SNPs), which is computationally demanding, and uses these variances for prediction. We have developed models with separate estimation of SNP variances, which can be applied infrequently, and

  4. Inhomogeneity-induced variance of cosmological parameters

    Science.gov (United States)

    Wiegand, A.; Schwarz, D. J.

    2012-02-01

    Context. Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. This raises the question of how local measurements (at the ~10² Mpc scale) can be used to determine the global cosmological parameters (defined at the ~10⁴ Mpc scale). Aims: We connect the questions of cosmological backreaction, cosmic averaging and the estimation of cosmological parameters and show how they relate to the problem of cosmic variance. Methods: We used Buchert's averaging formalism and determined a set of locally averaged cosmological parameters in the context of the flat Λ cold dark matter model. We calculated their ensemble means (i.e. their global value) and variances (i.e. their cosmic variance). We applied our results to typical survey geometries and focused on the study of the effects of local fluctuations of the curvature parameter. Results: We show that in the context of standard cosmology at large scales (larger than the homogeneity scale and in the linear regime), the question of cosmological backreaction and averaging can be reformulated as the question of cosmic variance. The cosmic variance is found to be highest in the curvature parameter. We propose to use the observed variance of cosmological parameters to measure the growth factor. Conclusions: Cosmological backreaction and averaging are real effects that have been measured already for a long time, e.g. by the fluctuations of the matter density contrast averaged over spheres of a certain radius. Backreaction and averaging effects from scales in the linear regime, as considered in this work, are shown to be important for the precise measurement of cosmological parameters.
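
    The averaging issue has a simple statistical analogue (a toy illustration only, not a cosmological calculation): averages of a positively correlated field over finite windows scatter far more than the naive sigma²/N of independent samples would suggest, which is the generic mechanism behind cosmic variance in finite surveys:

```python
import numpy as np

# Toy analogue: variance of "local" averages of a spatially correlated field.
rng = np.random.default_rng(2)
n_win, win = 4000, 50
white = rng.normal(size=(n_win, win + 10))
kernel = np.ones(11) / np.sqrt(11.0)   # moving average; keeps the point variance at 1
field = np.array([np.convolve(row, kernel, mode="valid") for row in white])

sigma2 = field.var()                    # point variance of the correlated field
var_of_means = field.mean(axis=1).var() # scatter of the local (window) averages
print(var_of_means, sigma2 / win)       # correlated field: var of means >> sigma^2 / N
```

    The excess factor is set by the correlation length relative to the window, which is why the variance of locally averaged parameters depends on survey geometry.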

  5. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  6. The relation of the Allan- and Delta-variance to the continuous wavelet transform

    OpenAIRE

    Zielinsky, M.; Stutzki, J.

    1999-01-01

    This paper is understood as a supplement to the paper by [Stutzki et al, 1998], where we have shown the usefulness of the Allan-variance and its higher dimensional generalization, the Delta-variance, for the characterization of molecular cloud structures. In this study we present the connection between the Allan- and Delta-variance and a more popular structure analysis tool: the wavelet transform. We show that the Allan- and Delta-variances are the variances of wavelet transform coefficients.
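
    A minimal non-overlapping Allan-variance estimator makes the connection concrete (a generic sketch, not the authors' code). For white noise the Allan variance falls roughly as 1/m with the averaging factor m:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance at averaging factor m: half the mean
    squared difference of consecutive block averages of the series y."""
    n = len(y) // m
    block_means = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)

rng = np.random.default_rng(1)
y = rng.normal(size=100_000)   # white (uncorrelated) noise
print(allan_variance(y, 1), allan_variance(y, 10))   # ~1.0 and ~0.1: falls as 1/m
```

    Each difference of consecutive block means is, up to normalization, a Haar wavelet coefficient of y at scale m, so the Allan variance is the variance of those wavelet coefficients — the relation the abstract describes.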

  7. Evaluation of the oscillatory interference model of grid cell firing through analysis and measured period variance of some biological oscillators.

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2009-11-01

    Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators are expected to maintain a stable grid for approximately t = 5μ³/(4πσ²) seconds, where μ is the mean period of an oscillator in seconds and σ² its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex, and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit-level effects which might reduce oscillator variability. Further implications for grid cell models are discussed.
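
    The stability criterion quoted in the abstract, t = 5μ³/(4πσ²), is easy to evaluate. The oscillator numbers below are hypothetical, chosen only to show the scale of the result for a theta-band oscillator with modest period jitter:

```python
import math

def grid_stability_time(mu, sigma2):
    """Expected time (s) a pair of oscillators maintains a stable grid:
    t = 5*mu**3 / (4*pi*sigma2), with mean period mu (s) and period variance sigma2 (s^2)."""
    return 5.0 * mu ** 3 / (4.0 * math.pi * sigma2)

# Hypothetical theta-band oscillator: 8 Hz mean (mu = 0.125 s), 10% period jitter
mu = 0.125
sigma2 = (0.1 * mu) ** 2
print(grid_stability_time(mu, sigma2))   # ~5 s, far below behavioural time scales
```

    Note the strong sensitivity to jitter: halving σ² doubles the expected stability time, which is why small reductions in oscillator variability matter so much for these models.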

  8. ROBUST ESTIMATION OF VARIANCE COMPONENTS MODEL

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Classical least squares estimation consists of minimizing the sum of the squared residuals of observation. Many authors have produced more robust versions of this estimation by replacing the square by something else, such as the absolute value. These approaches have been generalized, and their robust estimations and influence functions of variance components have been presented. The results may have wide practical and theoretical value.

  9. LOCAL MEDIAN ESTIMATION OF VARIANCE FUNCTION

    Institute of Scientific and Technical Information of China (English)

    杨瑛

    2004-01-01

    This paper considers local median estimation in fixed design regression problems. The proposed method is employed to estimate the median function and the variance function of a heteroscedastic regression model. Strong convergence rates of the proposed estimators are obtained. Simulation results are given to show the performance of the proposed methods.

  10. Lorenz Dominance and the Variance of Logarithms.

    OpenAIRE

    Ok, Efe A.; Foster, James

    1997-01-01

    The variance of logarithms is a widely used inequality measure which is well known to disagree with the Lorenz criterion. Up to now, the extent and likelihood of this inconsistency were thought to be vanishingly small. We find that this view is mistaken: the extent of the disagreement can be extremely large; the likelihood is far from negligible.

  11. Linear transformations of variance/covariance matrices

    NARCIS (Netherlands)

    Parois, P.J.A.; Lutz, M.

    2011-01-01

    Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance
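
    The standard propagation rule behind such transformations is cov′ = T·cov·Tᵀ, exact for linear maps. A minimal numeric sketch (matrix values are arbitrary examples, not crystallographic data):

```python
import numpy as np

# Linear transformation x' = T x of parameters with covariance matrix `cov`:
# the transformed covariance is cov' = T cov T^T.
T = np.array([[1.0,  1.0],
              [1.0, -1.0]])        # e.g. sum and difference of two parameters
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])     # variances 0.04, 0.09; covariance 0.01
cov_t = T @ cov @ T.T

print(cov_t)                       # [[0.15, -0.05], [-0.05, 0.11]]
print(np.sqrt(np.diag(cov_t)))     # standard uncertainties of the new parameters
```

    Transforming only the diagonal (the standard uncertainties) and ignoring the off-diagonal covariance term would give the wrong uncertainties here, which is the pitfall the abstract warns about.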

  12. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

  13. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with...

  14. A multi-variance analysis in the time domain

    Science.gov (United States)

    Walter, Todd

    1993-01-01

    Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.

  15. Longitudinal analysis of residual feed intake and BW in mink using random regression with heterogeneous residual variance.

    Science.gov (United States)

    Shirali, M; Nielsen, V H; Møller, S H; Jensen, J

    2015-10-01

    The aim of this study was to determine the genetic background of longitudinal residual feed intake (RFI) and BW gain in farmed mink using random regression methods considering heterogeneous residual variances. The individual BW was measured every 3 weeks from 63 to 210 days of age for 2139 male-female pairs of juvenile mink during the growing-furring period. Cumulative feed intake was calculated six times at 3-week intervals based on daily feed consumption between weighings from 105 to 210 days of age. Genetic parameters for RFI and BW gain in males and females were obtained using univariate random regression with Legendre polynomials containing an animal genetic effect and a permanent environmental effect of litter, along with heterogeneous residual variances. Heritability estimates for RFI increased with age from 0.18 (0.03, posterior standard deviation (PSD)) at 105 days of age to 0.49 (0.03, PSD) and 0.46 (0.03, PSD) at 210 days of age in male and female mink, respectively. The heritability estimates for BW gain increased with age and ranged from moderate to high for males (0.33 (0.02, PSD) to 0.84 (0.02, PSD)) and females (0.35 (0.03, PSD) to 0.85 (0.02, PSD)). RFI estimates during the growing period (105 to 126 days of age) showed high positive genetic correlations with the pelting RFI (210 days of age) in males (0.86 to 0.97) and females (0.92 to 0.98). However, phenotypic correlations were lower, from 0.47 to 0.76 in males and 0.61 to 0.75 in females. Furthermore, BW records in the growing period (63 to 126 days of age) had moderate (males: 0.39, females: 0.53) to high (males: 0.87, females: 0.94) genetic correlations with pelting BW (210 days of age). The results of the current study showed that RFI and BW in mink are highly heritable, especially in the late furring period, suggesting potential for large genetic gains for these traits. The genetic correlations suggested that substantial genetic gain can be obtained by considering only the RFI estimate and BW at pelting.

  16. Deriving dispersional and scaled windowed variance analyses using the correlation function of discrete fractional Gaussian noise

    OpenAIRE

    RAYMOND, GARY M.; Bassingthwaighte, James B.

    1999-01-01

    Methods for estimating the fractal dimension, D, or the related Hurst coefficient, H, for a one-dimensional fractal series include Hurst’s method of rescaled range analysis, spectral analysis, dispersional analysis, and scaled windowed variance analysis (which is related to detrended fluctuation analysis). Dispersional analysis estimates H by using the variance of the grouped means of discrete fractional Gaussian noise series (DfGn). Scaled windowed variance analysis estimates H using the mea...

  17. The Theory of Variances in Equilibrium Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-14

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  18. The Theory of Variances in Equilibrium Reconstruction

    International Nuclear Information System (INIS)

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature

  19. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  20. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    Science.gov (United States)

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them. PMID:25311906

  1. 42 CFR 456.525 - Request for renewal of variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Request for renewal of variance. 456.525 Section..., and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from Time Requirements § 456.525 Request for renewal of variance. (a) The agency must submit a request for renewal of...

  2. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    OpenAIRE

    Shanshan Gu; Jianye Liu; Qinghua Zeng; Shaojun Feng; Pin Lv

    2015-01-01

    To solve the problem that the dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirement of the fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with a time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller with the inputs of the first and second derivatives of the FOG signal is designed to estimate the window length of the DAVAR. Then th...

  3. Use of statistical methods of variance analysis in the qualification of the suppliers of nuclear fuel components

    International Nuclear Information System (INIS)

    The purpose of quality assurance of the suppliers of materials used in the fabrication of nuclear fuel is to determine, as accurately as possible, the capacity of a supplier to provide a product meeting certain pre-specified requirements. The outcome of an assessment obviously depends on the extent to which the requirements are met. If the matter rests there, however, part of the information available remains unused, for the relative influence of the various factors capable of affecting fabrication quality is not considered. This kind of problem can be dealt with effectively by statistical experiment planning. After briefly recapitulating the definitions and basic principles governing the statistical use of experiment plans, the author gives two examples taken from the fabrication of PWR fuels: (1) A new UO2 pellet fabrication operation involves certain peculiarities whose influence on the final quality of the product needs to be known. The problems raised by the use of shaft-type batch-furnaces are examined by organizing experiment plan tests. The conclusions drawn from the statistical analysis reveal certain sensitive points in the fabrication operation (especially furnace homogeneity). Special precautions can be taken on the basis of these conclusions. (2) Fuel assembly frames are made by an automatic seaming machine. The assessment of this equipment involves a systematic study of expansion dimensions by another type of experiment plan. The results of the statistical analysis show that it is the seaming tools which affect the dimensions most. The most critical tools are identified. Special inspection measures can be taken to check their wear. The author shows the importance of this type of study, which, going beyond mere checking for conformity, provides a better knowledge of and enables one to specify the critical fabrication factors. It is obvious that, with such better knowledge, one can guarantee the final quality of the product more effectively. (author)

  4. A univariate analysis of variance design for multiple-choice feeding-preference experiments: A hypothetical example with fruit-eating birds

    Science.gov (United States)

    Larrinaga, Asier R.

    2010-01-01

    I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design, for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays each containing a known weight of artificial fruits (red, blue, black, or green) are introduced into the cage, while four equivalent trays are left outside the cage, to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.
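
    The core of such a design is an ordinary univariate (one-way) ANOVA on a suitable response, such as the mass of fruit removed per tray. A minimal sketch in pure Python, with invented consumption data for the four fruit colours, computes the F statistic by hand:

    ```python
    from statistics import mean

    # Hypothetical fruit mass removed (g) per tray across four fictitious trials;
    # all numbers are invented for illustration.
    data = {
        "red":   [5.1, 4.8, 5.6, 5.3],
        "blue":  [3.2, 2.9, 3.5, 3.1],
        "black": [4.0, 4.4, 3.8, 4.1],
        "green": [2.1, 2.5, 1.9, 2.2],
    }

    grand = mean(x for xs in data.values() for x in xs)
    k = len(data)                              # number of groups (colours)
    n = sum(len(xs) for xs in data.values())   # total observations

    # Between-group and within-group sums of squares
    ss_between = sum(len(xs) * (mean(xs) - grand) ** 2 for xs in data.values())
    ss_within = sum((x - mean(xs)) ** 2 for xs in data.values() for x in xs)

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    F = ms_between / ms_within
    print(f"F({k-1}, {n-k}) = {F:.2f}")
    ```

    A large F relative to the F(k−1, n−k) reference distribution indicates that at least two colour means differ; the paper's full design additionally models the control trays.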

  5. Variance analysis and linear contracts in agencies with distorted performance measures

    OpenAIRE

    Budde, Jörg

    2007-01-01

    This paper investigates the role of variance analysis procedures in aligning objectives under the condition of distorted performance measurement. A risk-neutral agency with linear contracts is analyzed, in which the agent receives post-contract, pre-decision information on his productivity. If the performance measure is informative with respect to the agent’s marginal product concerning the principal’s objective, variance investigation can alleviate effort misallocation. These results carry ...

  6. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and to clarify how they affect the output variance, this work presents a derivative based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes derivative based main and total sensitivity indices. By transforming the derivatives of the variance contributions of various orders into expectations via a kernel function, the proposed main and total sensitivity indices can be seen as a “by-product” of Sobol′s variance based sensitivity analysis, without any additional output evaluations. Since Sobol′s variance based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method
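
    For context, the Sobol′ variance-based indices on which the proposed derivative-based indices build can be estimated by a pick-freeze Monte Carlo scheme. The sketch below is not the paper's method; the toy model f(X1, X2) = X1 + 2·X2 with uniform inputs and the sample size are invented, chosen so that the first-order index of X1 is analytically 0.2:

    ```python
    import random
    random.seed(0)

    N = 100_000
    def f(x1, x2):
        # Toy model: Var(Y) = (1 + 4)/12, Var(E[Y|X1]) = 1/12, so S1 = 0.2
        return x1 + 2.0 * x2

    # Two independent input samples A and B on [0,1]^2
    A = [(random.random(), random.random()) for _ in range(N)]
    B = [(random.random(), random.random()) for _ in range(N)]
    yA = [f(*a) for a in A]
    # "Pick-freeze" sample: keep x1 from A, resample x2 from B
    yC1 = [f(a[0], b[1]) for a, b in zip(A, B)]

    mA = sum(yA) / N
    varA = sum((y - mA) ** 2 for y in yA) / N
    # Pick-freeze estimator of the first-order index S1 = Var(E[Y|X1]) / Var(Y)
    S1 = (sum(ya * yc for ya, yc in zip(yA, yC1)) / N - mA * mA) / varA
    print(f"S1 = {S1:.3f} (analytic value 0.2)")
    ```

    The derivative-based indices of the paper are designed to fall out of this same computation without extra model evaluations.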

  7. Using a variance-based sensitivity analysis for analyzing the relation between measurements and unknown parameters of a physical model

    Science.gov (United States)

    Zhao, J.; Tiede, C.

    2011-05-01

    An implementation of uncertainty analysis (UA) and quantitative global sensitivity analysis (SA) is applied to the non-linear inversion of gravity changes and three-dimensional displacement data which were measured in an active volcanic area. A didactic example is included to illustrate the computational procedure. The main emphasis is placed on the extended Fourier amplitude sensitivity test (E-FAST). This method produces the total sensitivity indices (TSIs), so that all interactions between the unknown input parameters are taken into account. The possible correlations between the output and the input parameters can be evaluated by uncertainty analysis. Uncertainty analysis results indicate the general fit between the physical model and the measurements. Results of the sensitivity analysis show quite different sensitivities for the measured changes as they relate to the unknown parameters of a physical model for an elastic-gravitational source. Assuming a fixed number of executions, thirty different seeds are observed to determine the stability of this method.

  8. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing the perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk

  9. Variance reduction methods for simulation of densities on Wiener space

    OpenAIRE

    Kohatsu, Arturo; Pettersson, Roger

    2002-01-01

    We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.

  10. Age and Gender Differences Associated with Family Communication and Materialism among Young Urban Adult Consumers in Malaysia: A One-Way Analysis of Variance (ANOVA)

    Directory of Open Access Journals (Sweden)

    Eric V. Bindah

    2012-11-01

    Full Text Available The main purpose of this study is to examine age and gender differences in the types of family communication patterns that take place at home among young adult consumers. It is also an attempt to examine whether there are age and gender differences in the development of materialistic values in Malaysia. This paper briefly conceptualizes the family communication processes based on existing literature to illustrate the association between family communication patterns and materialism. This study takes place in Malaysia, a country in Southeast Asia with a multi-ethnic and multi-cultural society. Preliminary statistical procedures were employed to examine possible significant group differences in family communication and materialism based on age group and gender among Malaysian consumers. A one-way analysis of variance was utilised to determine significant differences in terms of age and gender with respect to responses on the various measures. Where there were significant differences, post hoc tests (Scheffé) were used to determine the particular groups which differed significantly within a significant overall one-way analysis of variance. The implications, significance and limitations of the study are discussed as concluding remarks.

  11. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  12. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    The Monte Carlo (MC) power method based on a fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and the fission reaction rate estimates. Because of the biases, the apparent variances of keff and the fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 × 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of keff and the fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods, instead of the apparent variances, provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of individual pins of the system. (authors)
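
    The batch method can be illustrated outside of neutronics: for any positively correlated sample, the naive (apparent) variance of the mean is biased low, while means of sufficiently long batches are nearly independent and give an approximately unbiased estimate. A sketch with an AR(1) chain as a stand-in for inter-cycle correlation (all parameters invented):

    ```python
    import random
    random.seed(1)

    # AR(1) chain with strong correlation, mimicking inter-generational
    # correlation of the fission source between MC cycles.
    rho, n = 0.9, 20_000
    x, xs = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0.0, 1.0)
        xs.append(x)

    m = sum(xs) / n
    # Apparent variance of the mean: pretends the samples are independent.
    apparent = sum((v - m) ** 2 for v in xs) / (n - 1) / n

    # Batch means: averages over long batches are nearly uncorrelated.
    b = 100
    means = [sum(xs[i:i + b]) / b for i in range(0, n, b)]
    mb = sum(means) / len(means)
    batch = sum((v - mb) ** 2 for v in means) / (len(means) - 1) / len(means)

    print(f"apparent {apparent:.2e}  batch {batch:.2e}  ratio {batch/apparent:.1f}")
    ```

    For this chain the true variance of the mean exceeds the apparent one by roughly the factor (1 + rho)/(1 − rho) ≈ 19, which the batch estimate recovers.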

  13. Assessment of heterogeneity of residual variances using changepoint techniques

    Directory of Open Access Journals (Sweden)

    Toro Miguel A

    2000-07-01

    Full Text Available Abstract Several studies using test-day models show clear heterogeneity of residual variance along lactation. A changepoint technique to account for this heterogeneity is proposed. The data set included 100 744 test-day records of 10 869 Holstein-Friesian cows from northern Spain. A three-stage hierarchical model using the Wood lactation function was employed. Two unknown changepoints at times T1 and T2 (0 < T1 < T2 < tmax), with continuity of residual variance at these points, were assumed. Also, a nonlinear relationship between the residual variance and the number of days of milking t was postulated. The residual variance at time t in lactation phase i was modeled as a function of t with a phase-specific parameter λi (i = 1, 2, 3). A Bayesian analysis using Gibbs sampling and the Metropolis-Hastings algorithm for marginalization was implemented. After a burn-in of 20 000 iterations, 40 000 samples were drawn to estimate posterior features. The posterior modes of T1 and T2 were 53.2 and 248.2 days; the posterior modes of the remaining parameters were 0.575, -0.406, 0.797 and 0.702, and 34.63 and 0.0455 kg², respectively. The residual variances predicted using these point estimates were 2.64, 6.88, 3.59 and 4.35 kg² at days of milking 10, 53, 248 and 305, respectively. This technique requires less restrictive assumptions, and the model has fewer parameters than other methods proposed to account for the heterogeneity of residual variance during lactation.

  14. Further results on variances of local stereological estimators

    DEFF Research Database (Denmark)

    Pawlas, Zbynek; Jensen, Eva B. Vedel

    2006-01-01

    In the present paper the statistical properties of local stereological estimators of particle volume are studied. It is shown that the variance of the estimators can be decomposed into the variance due to the local stereological estimation procedure and the variance due to the variability in the particle population. It turns out that these two variance components can be estimated separately, from sectional data. We present further results on the variances that can be used to determine the variance by numerical integration for particular choices of particle shapes.
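
    The decomposition used here is an instance of the law of total variance: the total variance of the estimator splits into the mean within-particle (estimation-procedure) variance plus the between-particle (population) variance, and the two components can be estimated separately. A numerical illustration with synthetic particles (all distributions and numbers invented):

    ```python
    import random
    random.seed(2)

    # Each "particle" has a true volume; the local estimator returns a noisy
    # measurement of it. Law of total variance:
    #   Var(est) = E[Var(est | particle)] + Var(E[est | particle])
    M, R = 2_000, 50          # particles, repeated estimates per particle
    totals, within, means = [], [], []
    for _ in range(M):
        v = random.gauss(10.0, 2.0)                            # population spread
        est = [v + random.gauss(0.0, 1.0) for _ in range(R)]   # estimation noise
        mi = sum(est) / R
        means.append(mi)
        within.append(sum((e - mi) ** 2 for e in est) / (R - 1))
        totals.extend(est)

    mt = sum(totals) / len(totals)
    var_total = sum((e - mt) ** 2 for e in totals) / (len(totals) - 1)
    comp_est = sum(within) / M                         # estimation component, ~1.0
    mm = sum(means) / M
    comp_pop = sum((mi - mm) ** 2 for mi in means) / (M - 1)  # population, ~4.0
    print(var_total, comp_est + comp_pop)
    ```

    The two printed values agree closely, confirming that the components add up to the total variance.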

  15. Identification of Analytical Factors Affecting Complex Proteomics Profiles Acquired in a Factorial Design Study with Analysis of Variance: Simultaneous Component Analysis.

    Science.gov (United States)

    Mitra, Vikram; Govorukhina, Natalia; Zwanenburg, Gooitzen; Hoefsloot, Huub; Westra, Inge; Smilde, Age; Reijmers, Theo; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer; Horvatovich, Péter

    2016-04-19

    Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances which in turn influence the outcome of the associated statistical analysis and validation. It is therefore important to determine which factors influence the abundance of peptides in a complex proteomics experiment and to identify those peptides that are most influenced by these factors. In the current study we analyzed depleted human serum samples to evaluate experimental factors that may influence the resulting peptide profile such as the residence time in the autosampler at 4 °C, stopping or not stopping the trypsin digestion with acid, the type of blood collection tube, different hemolysis levels, differences in clotting times, the number of freeze-thaw cycles, and different trypsin/protein ratios. To this end we used a two-level fractional factorial design of resolution IV (2_IV^(7-3)). The design required analysis of 16 samples in which the main effects were not confounded by two-factor interactions. Data preprocessing using the Threshold Avoiding Proteomics Pipeline (Suits, F.; Hoekman, B.; Rosenling, T.; Bischoff, R.; Horvatovich, P. Anal. Chem. 2011, 83, 7786-7794, ref 1) produced a data matrix containing quantitative information on 2,559 peaks. The intensity of the peaks was log-transformed, and peaks with low t-test significance (p-value > 0.05) and a low absolute fold ratio (<2) between the two levels of each factor were removed. The remaining peaks were subjected to analysis of variance (ANOVA)-simultaneous component analysis (ASCA). Permutation tests were used to identify which of the preanalytical factors influenced the abundance of the measured peptides most significantly. The most important preanalytical factors affecting peptide intensity were (1) the hemolysis level
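
    A two-level resolution-IV fractional factorial such as the 2^(7-3) design used here can be constructed from four base factors and three generators. The sketch below uses the standard generators E = ABC, F = BCD, G = ACD; the study's actual generator choice is not stated in the abstract and may differ:

    ```python
    from itertools import product

    # 16-run two-level fractional factorial 2^(7-3): four base factors A-D run
    # through the full 2^4 design; E, F, G are defined by the generators.
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        runs.append((a, b, c, d, a * b * c, b * c * d, a * c * d))

    # Sanity checks: every column is balanced, and all seven main-effect
    # columns are mutually orthogonal, so main effects are estimable
    # free of one another (resolution IV keeps them clear of two-factor
    # interactions as well).
    for i in range(7):
        assert sum(r[i] for r in runs) == 0
        for j in range(i + 1, 7):
            assert sum(r[i] * r[j] for r in runs) == 0
    print(len(runs), "runs")
    ```

    Each tuple is one experimental run, with −1/+1 encoding the low/high level of the seven factors.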

  16. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answers to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).

  17. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

    Energy Technology Data Exchange (ETDEWEB)

    Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); School of Advanced International Studies on Nuclear, Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: fisio2@fisiol.uniba.it; Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)

    2009-08-15

    It is known that R-R time series calculated from a recorded ECG are strongly correlated with the sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis still dominates the analysis of such data, even though the FFT is valid only under restrictions, notably linearity and stationarity, that are largely violated by R-R time series. To go beyond this approach, we introduce a new method, called CZF. It is based on variogram analysis. It stems from a profound link with Recurrence Quantification Analysis, which is a basic tool for the investigation of nonlinear and nonstationary time series. A relevant feature of the method is therefore that it may also be applied to nonlinear and nonstationary time series. In addition, the method enables one to analyze the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to direct experimental cases of normal subjects, patients with hypertension before and after therapy, and children under different conditions of experimentation.
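
    The variogram at the heart of such an analysis is simple to compute: half the mean squared increment of the series at each lag, with no stationarity of the mean required beyond the increments. A sketch on a synthetic R-R-like series (the full CZF method involves more than this; all data below are invented):

    ```python
    import math, random
    random.seed(3)

    # Hypothetical R-R interval series (seconds): slow respiratory-like
    # modulation plus beat-to-beat noise, a stand-in for real ECG-derived data.
    n = 1024
    rr = [0.8 + 0.05 * math.sin(2 * math.pi * i / 64) + random.gauss(0.0, 0.01)
          for i in range(n)]

    def variogram(x, lag):
        """Empirical variogram: half the mean squared increment at a given lag."""
        return 0.5 * sum((x[i + lag] - x[i]) ** 2
                         for i in range(len(x) - lag)) / (len(x) - lag)

    for lag in (1, 2, 4, 8, 16, 32):
        print(lag, variogram(rr, lag))
    ```

    The variogram grows with lag until the modulation's correlation scale is reached, which is the kind of structure the method quantifies.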

  19. Technical note: An improved estimate of uncertainty for source contribution from effective variance Chemical Mass Balance (EV-CMB) analysis

    Science.gov (United States)

    Shi, Guo-Liang; Zhou, Xiao-Yu; Feng, Yin-Chang; Tian, Ying-Ze; Liu, Gui-Rong; Zheng, Mei; Zhou, Yang; Zhang, Yuan-Hang

    2015-01-01

    The CMB (Chemical Mass Balance) 8.2 model released by the USEPA is a commonly used receptor model that can determine estimated source contributions and their uncertainties (called the default uncertainty). In this study, we propose an improved CMB uncertainty for the modeled contributions (called the EV-LS uncertainty) by adding the difference between the modeled and measured values of the ambient species concentrations to the default CMB uncertainty, based on the effective variance least squares (EV-LS) solution. This correction reconciles the uncertainty estimates for EV and OLS regression. To verify the formula for the EV-LS CMB uncertainty, the same ambient datasets were analyzed using the equation we developed for the EV-LS CMB uncertainty and a standard statistical package, SPSS 16.0. The same results were obtained both ways, indicating that the equation for the EV-LS CMB uncertainty proposed here is acceptable. In addition, four ambient datasets were studied with CMB 8.2, and the source contributions as well as the associated uncertainties were obtained accordingly.

  20. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with realized range-based variance - a statistic that replaces every squared return of realized variance with a normalized squared range. If the entire sample path of the process is available … variance. Our findings suggest that the empirical path of quadratic variation is also estimated better with the realized range-based variance.

  1. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901... Suspension or Termination of Enrollment § 901.40 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the evidence adduced in support of the pleading,...

  2. The use of analysis of variance to evaluate the influence of two factors - clay and radionuclide- in the sorption coefficients of Freundlich

    International Nuclear Information System (INIS)

    A large number of radioactive waste disposal operators use an engineered barrier system in near-surface and deep repositories for the protection of humans and the environment from the potential hazards associated with this kind of waste. Clays are often considered as buffer and backfill materials in the multi-barrier concept of both high-level and low/intermediate-level radioactive waste repositories. Several studies have shown that these materials present high sorption and cation exchange capacities, but it is important to evaluate whether the sorption coefficients are influenced by the kind of clay and/or the kind of radionuclide. The objective of this research was therefore to evaluate whether this influence exists, considering clay and radionuclide as two factors and the sorption coefficients, determined by the Freundlich model, as the response, through a statistical analysis known as analysis of variance for a two-factor model with one observation per cell. In the design of this experiment, four different clays (two bentonites, one kaolinite and one vermiculite) and two radionuclides (cesium and strontium) were analyzed. The statistical test for nonadditivity showed no evidence of interaction between the two factors, and only the kind of clay had a significant effect on the sorption coefficient. (author)
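
    The two-factor, one-observation-per-cell analysis described above can be sketched in a few lines: with no replication, the interaction sum of squares serves as the error term against which both main effects are tested. All Kd values below are invented for illustration; they are not the study's data:

    ```python
    from statistics import mean

    # Hypothetical Freundlich sorption coefficients (mL/g):
    # rows = clays, columns = radionuclides, one observation per cell.
    kd = {
        ("bentonite-1", "Cs"): 820, ("bentonite-1", "Sr"): 790,
        ("bentonite-2", "Cs"): 760, ("bentonite-2", "Sr"): 740,
        ("kaolinite",   "Cs"): 120, ("kaolinite",   "Sr"): 140,
        ("vermiculite", "Cs"): 450, ("vermiculite", "Sr"): 430,
    }
    clays = sorted({c for c, _ in kd})
    nucs = sorted({r for _, r in kd})
    grand = mean(kd.values())
    row = {c: mean(kd[c, r] for r in nucs) for c in clays}
    col = {r: mean(kd[c, r] for c in clays) for r in nucs}

    ss_clay = len(nucs) * sum((row[c] - grand) ** 2 for c in clays)
    ss_nuc = len(clays) * sum((col[r] - grand) ** 2 for r in nucs)
    # With one observation per cell, the interaction SS is the error term.
    ss_err = sum((kd[c, r] - row[c] - col[r] + grand) ** 2
                 for c in clays for r in nucs)

    df_clay, df_nuc = len(clays) - 1, len(nucs) - 1
    df_err = df_clay * df_nuc
    F_clay = (ss_clay / df_clay) / (ss_err / df_err)
    F_nuc = (ss_nuc / df_nuc) / (ss_err / df_err)
    print(f"F(clay) = {F_clay:.1f}, F(radionuclide) = {F_nuc:.2f}")
    ```

    With these invented numbers the clay factor dominates while the radionuclide factor is negligible, mirroring the qualitative conclusion of the study.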

  3. 40 CFR 142.43 - Disposition of a variance request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Disposition of a variance request. 142... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.43 Disposition of a variance request. (a) If...

  4. 40 CFR 142.42 - Consideration of a variance request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Consideration of a variance request... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.42 Consideration of a variance request. (a)...

  5. 31 CFR 8.59 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE BUREAU OF ALCOHOL, TOBACCO AND FIREARMS Disciplinary Proceedings § 8.59 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading, the...

  6. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time.

  7. 31 CFR 10.67 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE INTERNAL REVENUE SERVICE Rules Applicable to Disciplinary Proceedings § 10.67 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in pleadings and the...

  8. The Effect of Selection on the Phenotypic Variance

    OpenAIRE

    Shnol, E.E.; Kondrashov, A S

    1993-01-01

    We consider the within-generation changes of phenotypic variance caused by selection w(x) which acts on a quantitative trait x. If before selection the trait has Gaussian distribution, its variance decreases if the second derivative of the logarithm of w(x) is negative for all x, while if it is positive for all x, the variance increases.
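
    The stated criterion is easy to check numerically: reweighting a Gaussian trait by a selection function w(x) with (log w)'' < 0 everywhere shrinks the variance, while (log w)'' > 0 everywhere inflates it. A Monte Carlo sketch (the selection functions are chosen for tractability, not taken from the paper):

    ```python
    import math, random
    random.seed(4)

    # Trait x ~ N(0, 1) before selection.
    xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

    def selected_variance(w):
        """Variance of x after weighting the population by fitness w(x)."""
        ws = [w(x) for x in xs]
        tot = sum(ws)
        m = sum(wi * x for wi, x in zip(ws, xs)) / tot
        return sum(wi * (x - m) ** 2 for wi, x in zip(ws, xs)) / tot

    # Log-concave selection, (log w)'' = -1: post-selection density is
    # proportional to exp(-x^2), so the exact variance is 0.5.
    concave = selected_variance(lambda x: math.exp(-0.5 * x * x))
    # Log-convex selection, (log w)'' = +0.2: density proportional to
    # exp(-0.4 x^2), so the exact variance is 1.25.
    convex = selected_variance(lambda x: math.exp(0.1 * x * x))
    print(concave, convex)
    ```

    The estimates straddle the pre-selection variance of 1 from below and above, as the criterion predicts.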

  10. Semiparametric bounds of mean and variance for exotic options

    Institute of Scientific and Technical Information of China (English)

    LIU GuoQing; LI V.Wenbo

    2009-01-01

    Finding semiparametric bounds for option prices is a widely studied pricing technique. We obtain closed-form semiparametric bounds of the mean and variance for the pay-off of two exotic (Collar and Gap) call options given mean and variance information on the underlying asset price. Mathematically, we extend the domination technique by quadratic functions to bound means and variances.

  11. A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model

    Institute of Scientific and Technical Information of China (English)

    Hou Ying-li; Liu Guo-xin; Jiang Chun-lan

    2015-01-01

    In this paper, we focus on a constant elasticity of variance (CEV) model and want to find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and a change-of-variable method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.

  12. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  13. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCM as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRM, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in-situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, and also direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the RRM's temporal sensitivity to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by the unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water level and discharge are most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while

  14. Empirical Performance of the Constant Elasticity Variance Option Pricing Model

    OpenAIRE

    Ren-Raw Chen; Cheng-Few Lee; Han-Hsing Lee

    2009-01-01

    In this essay, we empirically test the Constant-Elasticity-of-Variance (CEV) option pricing model of Cox (1975, 1996) and Cox and Ross (1976), and compare the performance of the CEV model with that of alternative option pricing models, mainly the stochastic volatility model, in terms of European option pricing and a cost-accuracy based analysis of their numerical procedures. In European-style option pricing, we have tested the empirical pricing performance of the CEV model and compared the results with those ...

  15. Analysis of latent variance reduction methods in phase space Monte Carlo calculations for 6, 10 and 18 MV photons by using MCNP code

    International Nuclear Information System (INIS)

    In this study, the azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods are implemented in the MCNPX2.4 source code. First, the efficiency of these methods was compared for two tallying methods. The APRS method is more efficient than the APR method in track length estimator tallies; however, in the energy deposition tally, both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. The APRS relative efficiency contours were obtained; they reveal that with increasing photon energy, the contour depth and the surrounding areas increase. The relative efficiency contours indicate that the variance reduction factor is position and energy dependent. The relative efficiency contours of the out-of-field voxels show that latent variance reduction methods increase the Monte Carlo (MC) simulation efficiency in the out-of-field voxels. The APR and APRS average variance reduction factors differed by less than 0.6% for a splitting number of 1000. -- Highlights: ► The efficiency of the APR and APRS methods was compared for two tallying methods. ► The APRS is more efficient than the APR method in track length estimator tallies. ► In the energy deposition tally, both methods have nearly the same efficiency. ► The variance reduction factors of these methods are position and energy dependent.

  16. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    Science.gov (United States)

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
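    The inverse-variance weighting described in this record can be sketched numerically. Below is a minimal fixed-effect example with made-up effect sizes and sampling variances (all values hypothetical, chosen only to illustrate the formula):

```python
import numpy as np

# Hypothetical effect sizes and their sampling variances from 4 primary studies.
effects = np.array([0.30, 0.10, 0.45, 0.25])
variances = np.array([0.04, 0.01, 0.09, 0.02])

# Fixed-effect model: weight each effect size by the inverse of its variance.
w = 1.0 / variances
pooled = np.sum(w * effects) / np.sum(w)   # inverse-variance weighted average
pooled_var = 1.0 / np.sum(w)               # variance of the pooled estimate
```

In practice the variances themselves are estimated, so the weights carry sampling error, which is exactly the complication the abstract studies.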

  17. Are we underestimating the genetic variances of dimorphic traits?

    OpenAIRE

    Wolak, ME; Roff, DA; Fairbairn, DJ

    2015-01-01

    © 2014 The Authors. Populations often contain discrete classes or morphs (e.g., sexual dimorphisms, wing dimorphisms, trophic dimorphisms) characterized by distinct patterns of trait expression. In quantitative genetic analyses, the different morphs can be considered as different environments within which traits are expressed. Genetic variances and covariances can then be estimated independently for each morph or in a combined analysis. In the latter case, morphs can be considered as separate...

  18. Variance analysis of the Monte Carlo perturbation source method in inhomogeneous linear particle transport problems. Derivation of formulae

    International Nuclear Information System (INIS)

    The perturbation source method is used in the Monte Carlo method for calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between estimates to be subtracted, even in cases where other methods fail, such as geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered

  19. A randomization-based perspective of analysis of variance: a test statistic robust to treatment effect heterogeneity

    OpenAIRE

    Ding, Peng; Dasgupta, Tirthankar

    2016-01-01

    Fisher randomization tests for Neyman's null hypothesis of no average treatment effects are considered in a finite population setting associated with completely randomized experiments with more than two treatments. The consequences of using the F statistic to conduct such a test are examined both theoretically and computationally, and it is argued that under treatment effect heterogeneity, use of the F statistic can severely inflate the type I error of the Fisher randomization test. An altern...

  20. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However,…
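    Welch's heteroscedastic one-way test referenced above can be sketched as follows. This is an illustrative implementation of the standard Welch F* statistic for k groups, not the authors' sample-size procedure:

```python
import numpy as np

def welch_anova(groups):
    """Welch's heteroscedastic one-way ANOVA (illustrative sketch).

    Returns the Welch F* statistic and its two degrees of freedom.
    """
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                        # precision weights n_i / s_i^2
    mw = np.sum(w * m) / np.sum(w)   # weighted grand mean
    num = np.sum(w * (m - mw) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    df2 = (k ** 2 - 1) / (3 * tmp)   # approximate denominator df
    return num / den, k - 1, df2
```

Unlike the classical F test, no pooled variance appears anywhere, so unequal group variances are handled directly.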

  1. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1), 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)
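    The mean-variance machinery this record builds on can be illustrated with the closed-form global minimum-variance portfolio. The expected values and covariance matrix below are purely hypothetical stand-ins for three generation technologies, not the Indiana data:

```python
import numpy as np

# Hypothetical expected cost/return and covariance for three technologies.
mu = np.array([0.08, 0.05, 0.03])
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.05, 0.00],
                [0.01, 0.00, 0.02]])

# Global minimum-variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1).
ones = np.ones(len(mu))
inv = np.linalg.inv(cov)
w_min = inv @ ones / (ones @ inv @ ones)

port_var = w_min @ cov @ w_min     # variance of the resulting mix
port_mean = w_min @ mu
```

By construction no other fully invested mix, e.g. equal weights, can have lower variance, which is the risk-reduction half of the diversification argument.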

  2. Monte-Carlo analysis of rarefied-gas diffusion including variance reduction using the theory of Markov random walks

    Science.gov (United States)

    Perlmutter, M.

    1973-01-01

    Molecular diffusion through a rarefied gas is analyzed by using the theory of Markov random walks. The Markov walk is simulated on the computer by using random numbers to find the new states from the appropriate transition probabilities. As the sample molecule during its random walk passes a scoring position, which is a location at which the macroscopic diffusing flow variables such as molecular flux and molecular density are desired, an appropriate payoff is scored. The payoff is a function of the sample molecule velocity. For example, in obtaining the molecular flux across a scoring position, the random walk payoff is the net number of times the scoring position has been crossed in the positive direction. Similarly, when the molecular density is required, the payoff is the sum of the inverse velocity of the sample molecule passing the scoring position. The macroscopic diffusing flow variables are then found from the expected payoff of the random walks.

  3. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar)

    OpenAIRE

    Sonesson, Anna K.; Ødegård, Jørgen; Rönnegård, Lars

    2013-01-01

    BACKGROUND: Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. RESULTS: Analysis of bo...

  4. Uncertainty analysis for 3D geological modeling using the Kriging variance

    Science.gov (United States)

    Choi, Yosoon; Choi, Younjung; Park, Sebeom; Um, Jeong-Gi

    2014-05-01

    The credible estimation of geological properties is critical in many geoscience fields, including geotechnical engineering, environmental engineering, mining engineering and petroleum engineering. Many interpolation techniques have been developed to estimate geological properties from limited sampling data such as borehole logs. Kriging is an interpolation technique that gives the best linear unbiased prediction of intermediate values. It also provides the Kriging variance, which quantifies the uncertainty of the Kriging estimates. This study provides a new method to analyze the uncertainty in 3D geological modeling using the Kriging variance. Cut-off values determined from the Kriging variance were used to effectively visualize 3D geological models with different confidence levels. This presentation describes the method for uncertainty analysis and a case study that evaluates the amount of recoverable resources by considering the uncertainty.
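    A toy 1-D ordinary-kriging example shows how the Kriging variance arises alongside the estimate itself. The sample locations, values and the exponential covariance model below are all invented for illustration:

```python
import numpy as np

# Known sample locations/values (e.g. from borehole logs) and a covariance model.
xs = np.array([0.0, 1.0, 3.0])
zs = np.array([2.0, 2.5, 1.5])
sill, corr_len = 1.0, 2.0
cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)   # exponential model

x0 = 1.5                                   # prediction location
C = cov(xs[:, None] - xs[None, :])         # data-to-data covariances
c0 = cov(xs - x0)                          # data-to-target covariances

# Ordinary-kriging system with a Lagrange multiplier enforcing sum(weights) = 1.
A = np.block([[C, np.ones((3, 1))], [np.ones((1, 3)), np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.append(c0, 1.0))
lam, mu_lag = sol[:3], sol[3]

z_hat = lam @ zs                           # kriging estimate at x0
krig_var = sill - lam @ c0 - mu_lag        # kriging variance at x0
```

Thresholding `krig_var` over a grid of prediction points is the kind of cut-off operation the abstract uses to delimit confidence levels in the 3D model.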

  5. Application of an iterative methodology for cross-section and variance/covariance data adjustment to the analysis of fast spectrum systems accounting for non-linearity

    International Nuclear Information System (INIS)

    until convergence is reached for the analytical values and their uncertainties. An important result of the study is that the asymptotic analytical values of the integral parameters are closer to the experimental values as compared to the standard first adjustment results. Moreover, the asymptotic analytical values seem rather independent of the specific a priori variance/covariance data used in the analysis, namely COMMARA-2.0 or BOLNA, despite different a priori analytical values respectively obtained with JEFF-3.1 or ENDF/B-VI.8 data. The asymptotic uncertainties obtained on the basis of the two libraries are also similar

  6. Multivariate Analysis of Variance: Finding significant growth in mice with craniofacial dysmorphology caused by the Crouzon mutation

    DEFF Research Database (Denmark)

    Thorup, Signe Strann; Ólafsdóttir, Hildur; Darvann, Tron Andre;

    2010-01-01

    Crouzon syndrome is characterized by growth disturbances caused by premature fusion of the cranial growth zones. A mouse model with mutation Fgfr2C342Y, equivalent to the most common Crouzon syndrome mutation (henceforth called the Crouzon mouse model), has a phenotype showing many parallels to t...

  7. Variance component score test for time-course gene set analysis of longitudinal RNA-seq data

    OpenAIRE

    Agniel, Denis; Hejblum, Boris P.

    2016-01-01

    As gene expression measurement technology is shifting from microarrays to sequencing, the statistical tools available for analyzing corresponding data require substantial modifications since RNA-seq data are measured as counts. Recently, it has been proposed to tackle the count nature of these data by modeling log-count reads per million as continuous variables, using nonparametric regression to account for their inherent heteroscedasticity. Adopting such a framework, we propose tcgsaseq, a p...

  8. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D; Noguera, J L

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between the two types of additive genetic effects. In addition, we show that both the additive genetic effects on the mean and those on environmental variance have an important influence upon the future economic performance of selected individuals...

  9. Estimation of the Conditional Variance in Paired Experiments

    OpenAIRE

    Abadie, Alberto; Guido W. IMBENS

    2008-01-01

    In paired randomized experiments units are grouped in pairs, often based on covariate information, with random assignment within the pairs. Average treatment effects are then estimated by averaging the within-pair differences in outcomes. Typically the variance of the average treatment effect estimator is estimated using the sample variance of the within-pair differences. However, conditional on the covariates the variance of the average treatment effect estimator may be substantially smaller...

  10. Analysis of speech-related variance in rapid event-related fMRI using a time-aware acquisition system.

    Science.gov (United States)

    Mehta, S; Grabowski, T J; Razavi, M; Eaton, B; Bolinger, L

    2006-02-15

    Speech production introduces signal changes in fMRI data that can mimic or mask the task-induced BOLD response. Rapid event-related designs with variable ISIs address these concerns by minimizing the correlation of task and speech-related signal changes without sacrificing efficiency; however, the increase in residual variance due to speech still decreases statistical power and must be explicitly addressed primarily through post-processing techniques. We investigated the timing, magnitude, and location of speech-related variance in an overt picture naming fMRI study with a rapid event-related design, using a data acquisition system that time-stamped image acquisitions, speech, and a pneumatic belt signal on the same clock. Using a spectral subtraction algorithm to remove scanner gradient noise from recorded speech, we related the timing of speech, stimulus presentation, chest wall movement, and image acquisition. We explored the relationship of an extended speech event time course and respiration on signal variance by performing a series of voxelwise regression analyses. Our results demonstrate that these effects are spatially heterogeneous, but their anatomic locations converge across subjects. Affected locations included basal areas (orbitofrontal, mesial temporal, brainstem), areas adjacent to CSF spaces, and lateral frontal areas. If left unmodeled, speech-related variance can result in regional detection bias that affects some areas critically implicated in language function. The results establish the feasibility of detecting and mitigating speech-related variance in rapid event-related fMRI experiments with single word utterances. They further demonstrate the utility of precise timing information about speech and respiration for this purpose. PMID:16412665

  11. Fractal Fluctuations and Quantum-Like Chaos in the Brain by Analysis of Variability of Brain Waves: A New Method Based on a Fractal Variance Function and Random Matrix Theory

    CERN Document Server

    Conte, E; Federici, A; Zbilut, J P

    2007-01-01

    We have developed a new method for the analysis of fundamental brain waves as recorded by EEG. For this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by the use of Random Matrix Theory. Some examples are given.
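    The variogram underlying the proposed Fractal Variance Function is straightforward to compute for an evenly sampled signal. A minimal sketch, using white noise rather than real EEG purely for illustration:

```python
import numpy as np

def empirical_variogram(x, max_lag):
    """Empirical semivariogram: gamma(h) = 0.5 * mean((x[t+h] - x[t])^2)."""
    return np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
gamma = empirical_variogram(rng.standard_normal(100_000), 5)
```

For white noise of unit variance, gamma(h) is flat at 1 for every lag; structured signals such as EEG show a lag-dependent rise whose shape is what a fractal analysis exploits.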

  12. Variance bias analysis for the Gelbard's batch method

    International Nuclear Information System (INIS)

    In this paper, the variance and the bias are derived analytically for the case in which Gelbard's batch method is applied. The real variance estimated from this bias is then compared with the real variance calculated from replicas. The variance and the bias were derived analytically when the batch method was applied. If the batch method is applied to calculate the sample variance, covariance terms between tallies within the same batch are eliminated from the bias. With the 2-by-2 fission matrix problem, we could calculate the real variance regardless of whether the batch method was applied. However, as the batch size grew larger, the standard deviation of the real variance increased. When we perform a Monte Carlo estimation, we obtain a sample variance as its statistical uncertainty. However, this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised the method now called Gelbard's batch method. It has been demonstrated that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the MC field. However, so far, no one has given an analytical interpretation of it
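    The bias this record describes can be reproduced on synthetic data. The sketch below uses an AR(1) sequence as a generic stand-in for correlated Monte Carlo tallies (the process and all parameters are invented, not an MCNP output):

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) sequence mimicking positively correlated tallies.
phi, n = 0.9, 20000
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.standard_normal()

# Naive estimate of Var(mean): ignores correlation, so it is biased low.
naive = x.var(ddof=1) / n

# Batch method: group tallies into batches and use the variance of batch means.
b = 100                                    # batch size
means = x.reshape(-1, b).mean(axis=1)
batched = means.var(ddof=1) / len(means)
```

Because covariance terms inside each batch are absorbed into the batch means, the batched estimate sits much closer to the real variance of the mean than the naive one.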

  13. Modeling variance structure of body shape traits of Lipizzan horses.

    Science.gov (United States)

    Kaps, M; Curik, I; Baban, M

    2010-09-01

    Heterogeneity of variance of growth traits over age is a common issue in estimating genetic parameters and is addressed in this study by selecting appropriate variance structure models for additive genetic and environmental variances. Modeling and partitioning of those variances in connection with analyzing small data sets were demonstrated on Lipizzan horses. The following traits were analyzed: withers height, chest girth, and cannon bone circumference. The measurements were taken at birth, and at approximately 6, 12, 24, and 36 mo of age of 660 Lipizzan horses born in Croatia between 1948 and 2000. The corresponding pedigree file consisted of 1,458 horses. Sex, age of dam, and stud-year-season interaction were considered fixed effects; additive genetic and permanent environment effects were defined as random. Linear adjustments of age at measuring were done within measuring groups. Maternal effects were included only for measurements taken at birth and at 6 mo. Additive genetic variance structures were modeled by using uniform structures or structures based on polynomial random regression. Environmental variance structures were modeled by using one of the following models: unstructured, exponential, Gaussian, or combinations of identity or diagonal with structures based on polynomial random regression. The parameters were estimated by using REML. Comparison and fits of the models were assessed by using Akaike and Bayesian information criteria, and by graphically checking the adequacy of the shape of the overall (phenotypic) and component (additive genetic and environmental) variance functions. The best overall fit was obtained from models with unstructured error variance. Compared with the model with uniform additive genetic variance, models with structures based on random regression only slightly improved overall fit. Exponential and Gaussian models were generally not suitable because they do not adequately accommodate heterogeneity of variance. Using the unstructured...

  14. Mean-variance analysis with REITs in mixed asset portfolios: The return interval and the time period used for the estimation of inputs

    OpenAIRE

    Doug Waggle; Gisung Moon

    2006-01-01

    Purpose - Aims to determine whether the selection of the historical return interval (monthly, quarterly, semiannual, or annual) used for calculating real estate investment trust (REIT) returns has a significant effect on optimal portfolio allocations. Design/methodology/approach - Using a mean-variance utility function, optimal allocations to portfolios of stocks, bonds, bills, and REITs across different levels of assumed investor risk aversion are calculated. The average historica...

  15. Encoding of natural sounds by variance of the cortical local field potential.

    Science.gov (United States)

    Ding, Nai; Simon, Jonathan Z; Shamma, Shihab A; David, Stephen V

    2016-06-01

    Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex. PMID:26912594
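    The intertrial-variance measure used above is simple to state: with recordings arranged as trials × time, the evoked response is the mean across trials and the intertrial variance is the variance across trials at each time point. A synthetic sketch, in which a period of variance suppression is built in by construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "LFP": trials x time, with reduced trial-to-trial variability
# during a simulated stimulus window (samples 200-399).
trials, t = 200, 500
noise_scale = np.ones(t)
noise_scale[200:400] = 0.5
lfp = rng.standard_normal((trials, t)) * noise_scale

mean_resp = lfp.mean(axis=0)               # classic evoked (mean) response
intertrial_var = lfp.var(axis=0, ddof=1)   # variance across repeated trials
```

Here the mean response stays near zero while the intertrial variance drops during the stimulus window, mirroring the dissociation between the two measures that the study reports.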

  16. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the more steeply the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  17. Research on variance of subnets in network sampling

    Institute of Scientific and Technical Information of China (English)

    Qi Gao; Xiaoting Li; Feng Pan

    2014-01-01

    In recent research on network sampling, some sampling concepts have been misunderstood, and the variance of subnets has not been taken into account. We propose the correct definition of the sample and sampling rate in network sampling, as well as a formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, a random network and a small-world network to explore the variance in network sampling. As shown by the results, snowball sampling yields the greatest variance of subnets, but does well in capturing the network structure. The variances of networks sampled by the hub and random strategies are much smaller. The hub strategy performs well in reflecting the properties of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.

  18. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Because of the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors that maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

  19. Productive Failure in Learning the Concept of Variance

    Science.gov (United States)

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  20. Predicting the variance of a measurement with 1/f noise

    CERN Document Server

    Lenoir, Benjamin

    2013-01-01

    Measurement devices always add noise to the signal of interest, and it is necessary to evaluate the variance of the results. This article focuses on stationary random processes whose power spectral density is a power law of frequency. For flicker noise, which behaves as $1/f$ and is present in many different phenomena, the usual way of computing the variance leads to infinite values. This article proposes an alternative definition of the variance which takes into account the fact that measurement devices need to be calibrated. This new variance, which depends on the calibration duration, the measurement duration and the time elapsed between calibration and measurement, avoids the infinite values that otherwise arise when computing the variance of a measurement.

  1. Confidence Intervals of Variance Functions in Generalized Linear Model

    Institute of Scientific and Technical Information of China (English)

    Yong Zhou; Dao-ji Li

    2006-01-01

    In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs), when the design points are fixed or random, respectively. Bias-corrected confidence bands are proposed for the (conditional) variance using local linear smoothers. Nonparametric techniques are developed for deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and it is shown that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation study is performed in which the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.

  2. Cost-variance analysis by DRGs; a technique for clinical budget analysis.

    Science.gov (United States)

    Voss, G B; Limpens, P G; Brans-Brabant, L J; van Ooij, A

    1997-02-01

    In this article it is shown how a cost accounting system based on DRGs can be valuable for determining changes in clinical practice and explaining alterations in expenditure patterns from one period to another. A cost-variance analysis is performed using data from the orthopedic department for the fiscal years 1993 and 1994. Differences between predicted and observed costs for medical care, such as diagnostic procedures, therapeutic procedures and nursing care, are analyzed into different components: changes in patient volume, case-mix differences, changes in resource use and variations in cost per procedure. Using a DRG cost accounting system proved to be a useful technique for clinical budget analysis. The results may stimulate discussions between hospital managers and medical professionals that explain cost variations by integrating the medical and economic aspects of clinical health care. PMID:10165044
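    A two-factor version of such a cost-variance decomposition can be sketched as follows. The DRG labels, case counts and unit costs are entirely hypothetical, and the real analysis splits the variance into more components (volume, case mix, resource use, cost per procedure) than the two shown here:

```python
# (cases, cost per case) per DRG in a base and a comparison year (invented data).
base = {"DRG1": (100, 2000.0), "DRG2": (50, 5000.0)}
new = {"DRG1": (120, 2100.0), "DRG2": (40, 5500.0)}

total_base = sum(n * c for n, c in base.values())
total_new = sum(n * c for n, c in new.values())

# Volume effect: change in case counts, priced at base-year unit cost.
volume_effect = sum((new[d][0] - base[d][0]) * base[d][1] for d in base)

# Cost effect: change in unit cost, applied to comparison-year volumes.
cost_effect = sum(new[d][0] * (new[d][1] - base[d][1]) for d in base)
```

By construction the two effects sum exactly to the total cost variance between the years, so nothing is left unexplained, which is what makes the decomposition useful in budget discussions.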

  3. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    The definition of the uniform linear generator is given, and some of the most commonly used tests for evaluating the uniformity and independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a random variable function is then taken into account. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for constructing other estimators of W with reduced variance are introduced
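    One classical variance-reduction technique of the kind alluded to, antithetic variates, can be sketched on the toy moment E[exp(U)] with U uniform on (0, 1), whose true value is e - 1:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Plain Monte Carlo estimate of E[exp(U)].
u = rng.random(n)
plain = np.exp(u)

# Antithetic variates: pair each draw u with 1 - u. Because exp is monotone,
# the pair members are negatively correlated, so their average has low variance.
half = rng.random(n // 2)
anti = 0.5 * (np.exp(half) + np.exp(1 - half))

# Estimator variances at an equal budget of n uniform draws.
var_plain = plain.var(ddof=1) / n
var_anti = anti.var(ddof=1) / (n // 2)
```

Both estimators are unbiased for e - 1; the antithetic one simply reaches a given precision with far fewer random draws.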

  4. Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model

    OpenAIRE

    Leunglung Chan; Eckhard Platen

    2015-01-01

    This paper studies volatility derivatives such as variance swaps, volatility swaps, and options on variance in the modified constant elasticity of variance model using the benchmark approach. Analytical pricing formulas for variance swaps are presented. In addition, numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.

  5. Allan Variance Analysis as Useful Tool to Determine Noise in Various Single-Molecule Setups

    CERN Document Server

    Czerwinski, Fabian; Selhuber-Unkel, Christine; Oddershede, Lene B; 10.1117/12.827975

    2009-01-01

    One limitation on the performance of optical traps is the noise inherently present in every setup. Therefore, it is the desire of most experimentalists to minimize and, if possible, eliminate noise from their optical trapping experiments. A step in this direction is to quantify the actual noise in the system and to evaluate how much each particular component contributes to the overall noise. For this purpose we present Allan variance analysis as a straightforward method. In particular, it allows one to judge the impact of drift, which gives rise to low-frequency noise that is extremely difficult to pinpoint by other methods. We show how to determine the optimal sampling time for calibration and the optimal number of data points for a desired experiment, and we provide measurements of how much accuracy is gained by acquiring additional data points. Allan variances of both micrometer-sized spheres and asymmetric nanometer-sized rods are considered.
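    A minimal non-overlapping Allan-variance routine conveys the idea; this is a generic sketch, not the authors' calibration procedure:

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance with m samples per averaging bin.

    Averages the signal in consecutive bins of m samples, then returns
    half the mean squared difference of adjacent bin means.
    """
    nbins = len(x) // m
    means = x[: nbins * m].reshape(nbins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
av = allan_variance(rng.standard_normal(100_000), 10)
```

For white noise of variance sigma^2 the Allan variance falls off as sigma^2 / m with bin size, whereas drift makes it grow again at long averaging times; that turnover is what makes the method so effective at exposing low-frequency noise.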

  6. MULTILEVEL MODELING OF THE PERFORMANCE VARIANCE

    Directory of Open Access Journals (Sweden)

    Alexandre Teixeira Dias

    2012-12-01

    Focusing on the identification of the role played by industry in the relations between corporate strategic factors and performance, the hierarchical multilevel modeling method was adopted to measure and analyze the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective for the study of the proposed relations was verified, and the relative importance analysis points to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance when the latter is measured by return on assets, and shows that industry does not moderate the relations between corporate strategic factors and Tobin's Q. The main conclusions of the research are that an organization's choices in terms of corporate strategy exert a considerable influence on, and play a key role in, the determination of its performance level, but that industry should be considered when analyzing performance variation, regardless of whether it moderates the relations between corporate strategic factors and performance.

  7. A further analysis for the minimum-variance deconvolution filter performance

    Science.gov (United States)

    Chi, Chong-Yung

    1987-01-01

    Chi and Mendel (1984) analyzed the performance of minimum-variance deconvolution (MVD). In this correspondence, a further analysis of the performance of the MVD filter is presented. It is shown that the MVD filter performs like an inverse filter and a whitening filter as SNR goes to infinity, and like a matched filter as SNR goes to zero. The estimation error of the MVD filter is colored noise, but it becomes white when SNR goes to zero. This analysis also connects the error power-spectral density of the MVD filter with the spectrum of the causal-prediction error filter.
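The limiting behaviour described above can be checked numerically with a Wiener-style minimum-variance filter, used here as a stand-in for the MVD filter (the channel response H is made up for the sketch): conj(H)/(|H|² + 1/SNR) tends to the inverse filter 1/H as SNR → ∞ and to a scaled matched filter conj(H) as SNR → 0.

```python
import numpy as np

# Frequency response of a toy channel (source wavelet) at three frequencies.
H = np.array([1.0 + 0.5j, 0.3 - 0.2j, 0.05 + 0.1j])

def mv_filter(H, snr):
    # Wiener-style minimum-variance filter: conj(H) / (|H|^2 + 1/snr)
    return np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)

hi = mv_filter(H, 1e9)     # high SNR: approaches the inverse filter 1/H
lo = mv_filter(H, 1e-9)    # low SNR: approaches snr * conj(H), a matched filter
print(np.max(np.abs(hi - 1 / H)))
print(np.max(np.abs(lo / 1e-9 - np.conj(H))))
```

Both printed deviations are tiny, matching the two limits the abstract states.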

  8. Low variance at large scales of WMAP 9 year data

    International Nuclear Information System (INIS)

    We use an optimal estimator to study the variance of the WMAP 9 CMB field at low resolution, in both temperature and polarization. Employing realistic Monte Carlo simulations, we find statistically significant deviations from the ΛCDM model in several sky cuts for the temperature field. For the masks considered in this analysis, which cover at least 54% of the sky, the WMAP 9 CMB sky and ΛCDM are incompatible at ≥ 99.94% C.L. at large angles (> 5°). By contrast, we find no anomaly in polarization. As a byproduct of our analysis, we present new, optimal estimates of the CMB angular power spectra from the WMAP 9 year data at low resolution.

  9. Combining analysis of variance and three‐way factor analysis methods for studying additive and multiplicative effects in sensory panel data

    DEFF Research Database (Denmark)

    Romano, Rosaria; Næs, Tormod; Brockhoff, Per Bruun

    2015-01-01

    Data from descriptive sensory analysis are essentially three-way data with assessors, samples and attributes as the three ways in the data set. Because of this, there are several ways that the data can be analysed. The paper focuses on the analysis of sensory characteristics of products while taking into account the individual differences among assessors. In particular, we will be interested in considering the multiplicative assessor model, which explicitly models the different usage of scale. A multivariate generalization of the model will be proposed, which allows analysing the differences...

  10. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with realized range-based variance - a statistic that replaces every squared return of realized variance with a normalized squared range. If the entire sample path of the process is available, and under a set of weak conditions, our statistic is consistent and has a mixed Gaussian limit, whose precision is five times greater than that of realized variance. In practice, of course, inference is drawn from discrete data and true ranges are unobserved, leading to downward bias. We solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations are used to construct the high-low. The methodology is applied to TAQ data and compared with realized...
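A small simulation (invented here, not from the paper) illustrates the normalized squared range: for Brownian motion E[range²] = 4·log 2 · σ², so dividing the squared high-low by 4·log 2 estimates the variance with markedly less noise than the squared return, while discretization biases it slightly downward, as the abstract notes.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, steps, days = 0.04, 390, 2000     # daily variance, intraday grid
dW = rng.normal(0.0, np.sqrt(sigma2 / steps), size=(days, steps))
paths = np.hstack([np.zeros((days, 1)), np.cumsum(dW, axis=1)])

rv = paths[:, -1] ** 2                     # squared daily return
# Normalized squared range; 4*log(2) makes it unbiased for continuous paths.
rng_est = (paths.max(1) - paths.min(1)) ** 2 / (4 * np.log(2))

print(rv.mean(), rng_est.mean())   # both near sigma2 = 0.04; range a bit low
print(rng_est.std() < rv.std())    # the range-based estimator is tighter
```

The small downward bias of the range estimator comes from observing the path only on a discrete grid, which is exactly the bias the paper corrects.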

  11. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance - a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is available, and under a set of weak conditions, our statistic is consistent and has a mixed Gaussian limit, whose precision is five times greater than that of the realized variance. In practice, of course, inference is drawn from discrete data and true ranges are unobserved, leading to downward bias. We solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations are used to construct the high-low. The methodology is applied to TAQ data and compared...

  12. A Variance Decomposition of Index-Linked Bond Returns

    OpenAIRE

    Francis Breedon

    2012-01-01

    We undertake a variance decomposition of index-linked bond returns for the US, UK and Iceland. In all cases, news about future excess returns is the key driver though only for Icelandic bonds are returns independent of inflation.

  13. A new definition of nonlinear statistics mean and variance

    OpenAIRE

    Chen, W.,

    1999-01-01

    This note presents a new definition of nonlinear statistics mean and variance to simplify the nonlinear statistics computations. These concepts aim to provide a theoretical explanation of a novel nonlinear weighted residual methodology presented recently by the present author.

  14. Occupancy, spatial variance, and the abundance of species

    OpenAIRE

    He, F.; Gaston, K J

    2003-01-01

    A notable and consistent ecological observation known for a long time is that spatial variance in the abundance of a species increases with its mean abundance and that this relationship typically conforms well to a simple power law (Taylor 1961). Indeed, such models can be used at a spectrum of spatial scales to describe spatial variance in the abundance of a single species at different times or in different regions and of different species across the same set of areas (Tayl...

  15. Valuation of Variance Forecast with Simulated Option Markets

    OpenAIRE

    Engle, Robert F; Che-Hsiung Hong; Alex Kane

    1990-01-01

    An appropriate metric for the success of an algorithm to forecast the variance of the rate of return on a capital asset could be the incremental profit from substituting it for the next best alternative. We propose a framework to assess incremental profits for competing algorithms to forecast the variance of a prespecified asset. The test is based on the return history of the asset in question. A hypothetical insurance market is set up, where competing forecasting algorithms are used. One alg...

  16. Adaptive Estimation of Autoregressive Models with Time-Varying Variances

    OpenAIRE

    Ke-Li Xu; Phillips, Peter C. B.

    2006-01-01

    Stable autoregressive models of known finite order are considered with martingale differences errors scaled by an unknown nonparametric time-varying function generating heterogeneity. An important special case involves structural change in the error variance, but in most practical cases the pattern of variance change over time is unknown and may involve shifts at unknown discrete points in time, continuous evolution or combinations of the two. This paper develops kernel-based estimators of th...

  17. A characterization of Poisson-Gaussian families by generalized variance

    OpenAIRE

    Kokonendji, Célestin C.; Masmoudi, Afif

    2006-01-01

    We show that if the generalized variance of an infinitely divisible natural exponential family [math] in a [math] -dimensional linear space is of the form [math] , then there exists [math] in [math] such that [math] is a product of [math] univariate Poisson and ( [math] )-variate Gaussian families. In proving this fact, we use a suitable representation of the generalized variance as a Laplace transform and the result, due to Jörgens, Calabi and Pogorelov, that any strictly convex smooth funct...

  18. Testing hypothesis on stability of expected value and variance

    OpenAIRE

    Grzegorz Konczak; Janusz Wywial

    2006-01-01

    Simple random samples are independently taken from a normal distribution. Two functions of the sample means and sample variances are considered. The density functions of these two statistics have been derived. These statistics can be applied to verify the hypothesis of stability of the expected value and variance of a normal distribution, as considered, e.g., in statistical process control. The critical values for these statistics have been found using numerical integration. The tables with approxim...

  19. Analytic variance estimates of Swank and Fano factors

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov [US Food and Drug Administration, Silver Spring, Maryland 20993 (United States)

    2014-07-15

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.

  20. Data Warehouse Designs Achieving ROI with Market Basket Analysis and Time Variance

    CERN Document Server

    Silvers, Fon

    2011-01-01

    Market Basket Analysis (MBA) provides the ability to continually monitor the affinities of a business and can help an organization achieve a key competitive advantage. Time Variant data enables data warehouses to directly associate events in the past with the participants in each individual event. In the past however, the use of these powerful tools in tandem led to performance degradation and resulted in unactionable and even damaging information. Data Warehouse Designs: Achieving ROI with Market Basket Analysis and Time Variance presents an innovative, soup-to-nuts approach that successfully

  1. Wild bootstrap of the mean in the infinite variance case

    OpenAIRE

    Giuseppe Cavaliere; Iliyan Georgiev; A. M. Robert Taylor

    2011-01-01

    It is well known that the standard i.i.d. bootstrap of the mean is inconsistent in a location model with infinite variance (α-stable) innovations. This occurs because the bootstrap distribution of a normalised sum of infinite variance random variables tends to a random distribution. Consistent bootstrap algorithms based on subsampling methods have been proposed but have the drawback that they deliver much wider confidence sets than those generated by the i.i.d. bootstrap, owing to the fact ...

  2. Testing instantaneous causality in presence of non constant unconditional variance

    OpenAIRE

    Gianetto, Quentin Giai; Raissi, Hamdi

    2012-01-01

    The problem of testing instantaneous causality between variables with a time-varying unconditional variance is investigated. It is shown that the classical tests based on the assumption of stationary processes must be avoided in our non-standard framework. More precisely, we underline that the standard test does not control the type I errors, while the tests with White (1980) and Heteroscedastic Autocorrelation Consistent (HAC) corrections can suffer from a severe loss of power when the variance...

  3. CLTs and asymptotic variance of time-sampled Markov chains

    CERN Document Server

    Latuszynski, Krzysztof

    2011-01-01

    For a Markov transition kernel $P$ and a probability distribution $\mu$ on the nonnegative integers, a time-sampled Markov chain evolves according to the transition kernel $P_\mu = \sum_k \mu(k)P^k$. In this note we obtain CLT conditions for time-sampled Markov chains and derive a spectral formula for the asymptotic variance. Using these results we compare the efficiency of Barker's and Metropolis algorithms in terms of asymptotic variance.
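A quick numerical check of the construction (toy two-state kernel and sampling distribution of my own choosing): $P_\mu$ built this way is again a stochastic matrix and preserves the stationary distribution of $P$, since $\pi P^k = \pi$ for every $k$.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # toy Markov transition kernel
mu = {0: 0.25, 1: 0.5, 2: 0.25}       # sampling distribution on {0, 1, 2}

# Time-sampled kernel: P_mu = sum_k mu(k) P^k
P_mu = sum(w * np.linalg.matrix_power(P, k) for k, w in mu.items())

print(P_mu.sum(axis=1))   # rows sum to 1: a valid transition kernel
pi = np.array([2/3, 1/3]) # stationary distribution of P
print(pi @ P_mu)          # equals pi: stationarity is preserved
```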

  4. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

    Science.gov (United States)

    Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing

    2016-06-01

    The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported on the effect of JPEG compression on image local variance. In this paper, a theoretical analysis of the variation of local variance caused by JPEG compression is presented. First, the expectation of the intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both simulations and experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering. PMID:27093626
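The quantity the paper analyses — the intensity variance of the 8×8 non-overlapping blocks that JPEG operates on — is straightforward to compute. This sketch (function name and random test image invented here) returns one variance per block:

```python
import numpy as np

def block_variances(img, b=8):
    """Intensity variance of each b-by-b non-overlapping block."""
    h, w = (img.shape[0] // b) * b, (img.shape[1] // b) * b
    blocks = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
    return blocks.reshape(-1, b * b).var(axis=1)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in image
v = block_variances(img)
print(v.shape)     # 64 blocks for a 64x64 image
print(v.mean())    # near the uniform-intensity variance (256^2 - 1)/12
```

Comparing these per-block variances before and after compression is the empirical counterpart of the expectation the paper derives.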

  5. Assessment of the genetic variance of late-onset Alzheimer's disease.

    Science.gov (United States)

    Ridge, Perry G; Hoyt, Kaitlyn B; Boehme, Kevin; Mukherjee, Shubhabrata; Crane, Paul K; Haines, Jonathan L; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Schellenberg, Gerard D; Kauwe, John S K

    2016-05-01

    Alzheimer's disease (AD) is a complex genetic disorder with no effective treatments. More than 20 common markers have been identified, which are associated with AD. Recently, several rare variants have been identified in Amyloid Precursor Protein (APP), Triggering Receptor Expressed On Myeloid Cells 2 (TREM2) and Unc-5 Netrin Receptor C (UNC5C) that affect risk for AD. Despite the many successes, the genetic architecture of AD remains unsolved. We used Genome-wide Complex Trait Analysis to (1) estimate phenotypic variance explained by genetics; (2) calculate genetic variance explained by known AD single nucleotide polymorphisms (SNPs); and (3) identify the genomic locations of variation that explain the remaining unexplained genetic variance. In total, 53.24% of phenotypic variance is explained by genetics, but known AD SNPs only explain 30.62% of the genetic variance. Of the unexplained genetic variance, approximately 41% is explained by unknown SNPs in regions adjacent to known AD SNPs, with the remaining unexplained genetic variance lying outside these regions. PMID:27036079

  6. Experience with Monte Carlo variance reduction using adjoint solutions in HYPER neutronics analysis

    International Nuclear Information System (INIS)

    The variance reduction techniques using adjoint solutions are applied to the Monte Carlo calculation of the HYPER(HYbrid Power Extraction Reactor) core neutronics. The applied variance reduction techniques are the geometry splitting and the weight windows. The weight bounds and the cell importance needed for these techniques are generated from an adjoint discrete ordinate calculation by the two-dimensional TWODANT code. The flux distribution variances of the Monte Carlo calculations by these variance reduction techniques are compared with the results of the standard Monte Carlo calculations. It is shown that the variance reduction techniques using adjoint solutions to the HYPER core neutronics result in a decrease in the efficiency of the Monte Carlo calculation

  7. Simultaneous analysis of large INTEGRAL/SPI datasets: optimizing the computation of the solution and its variance using sparse matrix algorithms

    CERN Document Server

    Bouchet, L; Buttari, A; Rouet, F -H; Chauvin, M

    2013-01-01

    Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X-gamma-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amoun...

  8. Kalman filtering techniques for reducing variance of digital speckle displacement measurement noise

    Institute of Scientific and Technical Information of China (English)

    Donghui Li; Li Guo

    2006-01-01

    Target dynamics are assumed to be known when measuring digital speckle displacement. Use is made of a simple measurement equation, in which measurement noise represents the effect of disturbances introduced in the measurement process. From these assumptions, a Kalman filter can be designed to reduce the variance of the measurement noise. An optical analysis system was set up, with which object motion with constant displacement and with constant velocity was measured to verify the validity of Kalman filtering techniques for reducing the variance of measurement noise.
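The setup described — known target dynamics, a simple measurement equation, and a Kalman filter that shrinks the measurement-noise variance — can be sketched for a constant-velocity target. All parameters and the scalar position measurement model here are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, q, r = 1.0, 1e-6, 1.0            # step, process noise, measurement noise
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics (known)
Hm = np.array([[1.0, 0.0]])            # we measure position only
Q = q * np.eye(2)

true = np.array([0.0, 0.1])            # true position and velocity
x, P = np.zeros(2), np.eye(2) * 10.0   # filter state and covariance
err_raw, err_kf = [], []
for _ in range(500):
    true = F @ true                            # target moves deterministically
    z = true[0] + rng.normal(0, np.sqrt(r))    # noisy displacement measurement
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = Hm @ P @ Hm.T + r                      # innovation variance
    K = (P @ Hm.T) / S                         # Kalman gain
    x = x + (K * (z - Hm @ x)).ravel()         # update
    P = (np.eye(2) - K @ Hm) @ P
    err_raw.append(z - true[0])
    err_kf.append(x[0] - true[0])

print(np.var(err_raw), np.var(err_kf))  # filtered error variance is far smaller
```

Because the dynamics are known exactly, the filter can average information across the whole trajectory, which is why the residual variance drops well below the raw measurement-noise variance.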

  9. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    A genomic model commonly found in the literature, with marker effects affecting the mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance are discussed and an extension of the McMC algorithm is...

  10. Variance and efficiency of contribution Monte Carlo

    International Nuclear Information System (INIS)

    The game of contribution is compared with the game of splitting in radiation transport using numerical results obtained by solving the set of coupled integral equations for first and second moments around the score. The splitting game is found superior. (author)

  11. Probability of the residual wavefront variance of an adaptive optics system and its application.

    Science.gov (United States)

    Huang, Jian; Liu, Chao; Deng, Ke; Yao, Zhousi; Xian, Hao; Li, Xinyang

    2016-02-01

    For performance evaluation of an adaptive optics (AO) system, the probability of the system residual wavefront variance can provide more information than the average wavefront variance. By studying the Zernike coefficients of an AO system residual wavefront, we derived exact expressions for the probability density functions of the wavefront variance and the Strehl ratio, for instantaneous and long-term exposures, owing to the insufficient control-loop bandwidth of the AO system. Our calculations agree with the residual wavefront data of a closed-loop AO system. Using these functions, we investigated the relationship between the AO system bandwidth and the distribution of the residual wavefront variance. Additionally, we analyzed the availability of an AO system for evaluating AO performance. These results will assist in the design and probabilistic analysis of AO systems. PMID:26906850

  12. Estimating the Variance of Design Parameters

    Science.gov (United States)

    Hedberg, E. C.; Hedges, L. V.; Kuyper, A. M.

    2015-01-01

    Randomized experiments are generally considered to provide the strongest basis for inferences about cause and effect. Consequently, randomized field trials have been increasingly used to evaluate the effects of education interventions, products, and services. Populations of interest in education are often hierarchically structured (such as…

  13. Time Variance of the Suspension Nonlinearity

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.; Pedersen, Bo Rohde

    2008-01-01

    It is well known that the resonance frequency of a loudspeaker depends on how it is driven before and during the measurement. Measurements made right after exposing it to high levels of electrical power and/or excursion give lower values than what can be measured when the speaker is cold. This pa...

  14. Partitioning of genomic variance using biological pathways

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per;

    basis of SNP-data and trait phenotypes and can account for a much larger fraction of the heritable component. A disadvantage is that this “black-box” modelling approach conceals the biological mechanisms underlying the trait. We propose to open the “black-box” by building SNP-set genomic models that...

  15. Variance squeezing and entanglement of the XX central spin model

    International Nuclear Information System (INIS)

    In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.

  16. Estimation of variance components and genetic parameters for type traits and milk yield in Holstein cattle

    OpenAIRE

    DURU, Serdar; KUMLU, Salahattin; TUNCEL, Erdoğan

    2012-01-01

    This research was conducted to estimate variance components and genetic parameters for type traits and milk yield in Holstein-Friesian cattle. In this study, 597 daughters of 158 sires in 128 herds were classified. In the data analysis, type scores for 354 daughters bred within 70 herds sired by 46 sires that had at least 3 daughters, and 304 lactation records for 206 daughters within 56 herds sired by 37 sires, were considered. For estimation of variance components and correlations among the...

  17. Further Development of the Variance-Covariance Method

    OpenAIRE

    J. Chen; Breckow, Joachim; Roos, H; Kellerer, Albrecht M.

    1990-01-01

    Applications of the variance-covariance technique are presented that illustrate the potential of the method. The dose mean lineal energy, yD, can be determined in time-varying radiation fields where the fluctuations of the dose rate are substantially in excess of the stochastic fluctuations of the energy imparted. An added advantage is that yD is little influenced by noise that affects both detectors simultaneously. The variance-covariance method is thus stable with respect to dose rate fluc...

  18. Statistical test of reproducibility and operator variance in thin-section modal analysis of textures and phenocrysts in the Topopah Spring member, drill hole USW VH-2, Crater Flat, Nye County, Nevada

    International Nuclear Information System (INIS)

    A thin-section operator-variance test was given to the two junior authors, petrographers, by the senior author, a statistician, using 16 thin sections cut from core plugs drilled by the US Geological Survey from drill hole USW VH-2 standard (HCQ) drill core. The thin sections are samples of Topopah Spring devitrified rhyolite tuff from four textural zones, in ascending order: (1) lower nonlithophysal, (2) lower lithophysal, (3) middle nonlithophysal, and (4) upper lithophysal. Drill hole USW VH-2 is near the center of Crater Flat, about 6 miles WSW of Yucca Mountain in the Exploration Block. The original thin-section labels were opaqued out with removable enamel and renumbered with alphanumeric labels. The slides were then given to the petrographer operators for quantitative thin-section modal (point-count) analysis of cryptocrystalline, spherulitic, granophyric, and void textures, as well as phenocryst minerals. Between-operator variance was tested by giving the two petrographers the same slide, and within-operator variance was tested by giving the same operator the same slide to count in a second test set, administered at least three months after the first set. Both operators were unaware that they were receiving the same slide to recount. 14 figs., 6 tabs

  19. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  20. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

    In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1,...,n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7, 8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  1. An Efficient SDN Load Balancing Scheme Based on Variance Analysis for Massive Mobile Users

    Directory of Open Access Journals (Sweden)

    Hong Zhong

    2015-01-01

    Full Text Available In a traditional network, server load balancing is used to satisfy the demand for high data volumes. The technique requires large capital investment while offering poor scalability and flexibility, which makes it difficult to support highly dynamic workload demands from massive mobile users. To solve these problems, this paper analyses the principles of software-defined networking (SDN) and presents a new probabilistic method of load balancing based on variance analysis. The method can be used to dynamically manage traffic flows for supporting massive mobile users in SDN networks. The paper proposes a solution using the OpenFlow virtual switching technology instead of the traditional hardware switching technology. An SDN controller monitors the data traffic of each port by means of variance analysis and provides a probability-based selection algorithm to redirect traffic dynamically with the OpenFlow technology. Compared with the existing load balancing methods, which were designed to support traditional networks, this solution has lower cost, higher reliability, and greater scalability, which satisfy the needs of mobile users.
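A probability-based selection of the kind the abstract describes can be sketched as follows. This is a simplified stand-in, not the paper's exact rule: here each port is picked with probability proportional to its remaining capacity, so lightly loaded ports attract more redirected flows.

```python
import random

def selection_probabilities(loads, capacity=100.0):
    """Hypothetical policy: weight each port by its remaining capacity."""
    slack = [capacity - l for l in loads]
    total = sum(slack)
    return [s / total for s in slack]

loads = [80.0, 50.0, 20.0]           # current traffic on three ports
probs = selection_probabilities(loads)
print(probs)                          # → [0.133..., 0.333..., 0.533...]

random.seed(4)
picks = random.choices(range(3), weights=probs, k=10_000)
print(picks.count(2) > picks.count(0))   # least-loaded port chosen most often
```

In an OpenFlow deployment, the controller would recompute these weights from its per-port traffic statistics and install flow rules accordingly; the variance of the port loads tells it how often rebalancing is worthwhile.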

  2. Variance of surface area estimators using spatial grids of lines

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří; Kubínová, Lucie

    Vol. 2. Kraków : Polish Society for Stereology, 2005 - (Chrapoński, J.; Cwajna, J.; Wojnar, L.), s. 252-256 ISBN 83-917834-4-8. [European Congress on Stereology and Image Analysis /9./ and International Conference on Stereology and Image Analysis in Materials Science STERMAT /7./. Zakopane (PL), 10.05.2005-13.05.2005] R&D Projects: GA AV ČR(CZ) IAA100110502; GA AV ČR(CZ) IAA600110507; GA ČR(CZ) GA304/05/0153 Institutional research plan: CEZ:AV0Z50110509 Keywords : variance * stereology * surface area Subject RIV: BA - General Mathematics

  3. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas;

    2011-01-01

    The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these...

  4. Unbiased Estimates of Variance Components with Bootstrap Procedures

    Science.gov (United States)

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
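A minimal illustration of why bootstrap variance estimates need unbiasing (a one-sample toy, far simpler than the generalizability-theory designs treated in the article): the plug-in variance of a bootstrap resample is biased downward by the factor (n-1)/n relative to the plug-in variance of the original sample, and a simple multiplier removes that bias.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.0, 2.0, size=30)     # original sample
n, B = len(x), 20_000

# Plug-in variance of each bootstrap resample (ddof=0, i.e. divide by n).
boot = np.array([rng.choice(x, n).var() for _ in range(B)])
corrected = boot * n / (n - 1)        # simple unbiasing factor

print(boot.mean() / x.var())          # ≈ (n - 1)/n, the downward bias
print(corrected.mean() / x.var())     # ≈ 1.0 after correction
```

Brennan's procedures generalize this idea to arbitrary balanced random-model designs and bootstrap sampling plans; the toy above only shows the mechanism for a single facet.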

  5. Heterogeneity of variances for carcass traits by percentage Brahman inheritance.

    Science.gov (United States)

    Crews, D H; Franke, D E

    1998-07-01

    Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance considered as a source of heterogeneity of variance. Genetic covariances estimated from the model accounting for heterogeneous variances resulted in genetic
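The model comparison used in the study — the same model fitted with a common variance versus group-specific variances, judged by a likelihood ratio test — can be sketched for two groups (toy data; the article's REML animal models are far richer). Minus twice the log likelihood ratio is Σ nᵢ·log(σ̂²_pooled/σ̂²ᵢ), referred to a chi-square distribution:

```python
import numpy as np

rng = np.random.default_rng(6)
g1 = rng.normal(0.0, 1.0, 200)     # low-variance group
g2 = rng.normal(0.0, 3.0, 200)     # high-variance group

def lrt_equal_variance(a, b):
    """-2 log LR for H0: common variance vs H1: separate variances
    (normal likelihood with MLE variance estimates)."""
    na, nb = len(a), len(b)
    va, vb = a.var(), b.var()                    # MLE variances (ddof=0)
    vp = (na * va + nb * vb) / (na + nb)         # pooled MLE under H0
    return na * np.log(vp / va) + nb * np.log(vp / vb)

stat = lrt_equal_variance(g1, g2)
print(stat)   # far above the chi-square(1) 5% cutoff of 3.84: reject H0
```

A large statistic, as here, indicates that modelling heterogeneous variances fits the data substantially better, which is the conclusion the carcass-trait analysis reached for several traits.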

  6. Variance in the reproductive success of dominant male mountain gorillas.

    Science.gov (United States)

    Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M

    2014-10-01

    Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups, even though infanticide is less likely when those tenures end in multimale groups than in one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species. PMID:24818867

  7. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Variance-based sensitivity analysis has been widely studied and has established itself among practitioners. Monte Carlo simulation methods are well developed for calculating variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works described a scatter-plot partitioning method for estimating the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for estimating variance-based sensitivity indices and investigates its convergence and other aspects of its performance. Since the method depends heavily on the partition scheme, the influence of the partition scheme is discussed, and an optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher-order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
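A minimal version of the scatter-plot partition estimator can be sketched as follows, assuming equal-width bins on each input; this is a generic illustration of the idea, not the paper's optimal partition scheme. The first-order index S_i = Var(E[Y|X_i]) / Var(Y) is estimated from a single set of samples by the between-bin variance of the conditional means:

```python
import random
import statistics

def first_order_index(xs, ys, n_bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) from one bunch of samples
    by partitioning the range of X_i into equal-width bins."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for x, y in zip(xs, ys):
        idx = min(int((x - lo) / width), n_bins - 1)  # clamp x == hi into last bin
        bins[idx].append(y)
    total_var = statistics.pvariance(ys)
    mean_y = statistics.fmean(ys)
    # between-bin variance of the conditional means, weighted by occupancy
    between = sum(len(b) * (statistics.fmean(b) - mean_y) ** 2
                  for b in bins if b) / len(ys)
    return between / total_var

random.seed(0)
n = 20000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
ys = [a + 0.1 * b for a, b in zip(x1, x2)]  # Y dominated by X1
s1 = first_order_index(x1, ys)
s2 = first_order_index(x2, ys)
```

For this additive model the analytic indices are S1 = 1/1.01 ≈ 0.99 and S2 = 0.01/1.01 ≈ 0.01, so the same samples recover both indices at once, which is the economy the abstract describes.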

  8. A VLBI variance-covariance analysis interactive computer program. M.S. Thesis

    Science.gov (United States)

    Bock, Y.

    1980-01-01

    An interactive computer program (in FORTRAN) for the variance covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.

  9. Avoiding Aliasing in Allan Variance: an Application to Fiber Link Data Analysis

    CERN Document Server

    Calosso, Claudio E; Micalizio, Salvatore

    2015-01-01

    Optical fiber links are known as the best-performing tools for transferring ultrastable frequency reference signals. However, these signals are affected by phase noise up to bandwidths of several kilohertz, and a careful data processing strategy is required to properly estimate the uncertainty. This aspect is often overlooked, and a number of approaches have been proposed to deal with it implicitly. Here, we frame this issue in terms of aliasing and show how typical tools of signal analysis can be adapted to the evaluation of optical fiber link performance. In this way, it is possible to use the Allan variance as an estimator of stability, and there is no need to introduce other estimators. The general rules we derive can be extended to all optical links. As an example, we apply this method to the experimental data we obtained on a 1284 km coherent optical link for frequency dissemination, which we realized in Italy.

  10. Avoiding Aliasing in Allan Variance: An Application to Fiber Link Data Analysis.

    Science.gov (United States)

    Calosso, Claudio E; Clivati, Cecilia; Micalizio, Salvatore

    2016-04-01

    Optical fiber links are known as the best-performing tools for transferring ultrastable frequency reference signals. However, these signals are affected by phase noise up to bandwidths of several kilohertz, and a careful data processing strategy is required to properly estimate the uncertainty. This aspect is often overlooked, and a number of approaches have been proposed to deal with it implicitly. Here, we frame this issue in terms of aliasing and show how typical tools of signal analysis can be adapted to the evaluation of optical fiber link performance. In this way, it is possible to use the Allan variance (AVAR) as an estimator of stability, and there is no need to introduce other estimators. The general rules we derive can be extended to all optical links. As an example, we apply this method to the experimental data we obtained on a 1284-km coherent optical link for frequency dissemination, which we realized in Italy. PMID:26800534
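A minimal estimator of the non-overlapping Allan variance from fractional-frequency samples can be sketched as follows; this is the textbook definition only, and the aliasing-aware processing discussed in the paper is not reproduced:

```python
def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (tau = m * tau0):
    sigma_y^2(tau) = <(ybar_{k+1} - ybar_k)^2> / 2,
    where ybar_k are averages over consecutive blocks of m samples."""
    M = len(y) // m                                # number of block averages
    avgs = [sum(y[i * m:(i + 1) * m]) / m for i in range(M)]
    diffs = [(avgs[k + 1] - avgs[k]) ** 2 for k in range(M - 1)]
    return sum(diffs) / (2 * (M - 1))

# a perfectly stable source has zero Allan variance at every tau
stable = allan_variance([5.0] * 100)
# an alternating +1/-1 sequence: AVAR = 2 at m = 1, but 0 at m = 2,
# since averaging over two samples cancels the oscillation
fast = allan_variance([1.0, -1.0] * 8, m=1)
slow = allan_variance([1.0, -1.0] * 8, m=2)
```

The m=1 versus m=2 contrast in the toy example is the same mechanism the paper exploits: how the data are averaged before differencing determines which noise components survive into the stability estimate.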

  11. Estimation of model error variances during data assimilation

    Science.gov (United States)

    Dee, D.

    2003-04-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data

  12. 29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Variance of the bond/escrow and sale-contract requirements... CORPORATION WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Variance of the Statutory Requirements § 4204.11 Variance of the bond/escrow and sale-contract requirements. (a)...

  13. Stream sampling for variance-optimal estimation of subset sums

    OpenAIRE

    Cohen, Edith; Duffield, Nick; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

    From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present an efficient reservoir sampling scheme, $\\varoptk$, that dominates all previous schemes in terms of estimation quality. $\\varoptk$ provides {\\em variance optimal unbiased estimation of subset sum...

  14. 40 CFR 142.21 - State consideration of a variance or exemption request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State consideration of a variance or... State-Issued Variances and Exemptions § 142.21 State consideration of a variance or exemption request. A State with primary enforcement responsibility shall act on any variance or exemption request...

  15. Variance Risk Premia

    OpenAIRE

    Peter Carr; Liuren Wu

    2004-01-01

    We propose a direct and robust method for quantifying the variance risk premium on financial assets. We theoretically and numerically show that the risk-neutral expected value of the return variance, also known as the variance swap rate, is well approximated by the value of a particular portfolio of options. Ignoring the small approximation error, the difference between the realized variance and this synthetic variance swap rate quantifies the variance risk premium. Using a large options data...

  16. Explaining the Prevalence, Scaling and Variance of Urban Phenomena

    CERN Document Server

    Gomez-Lievano, Andres; Hausmann, Ricardo

    2016-01-01

    The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.

  17. Identifiability, stratification and minimum variance estimation of causal effects.

    Science.gov (United States)

    Tong, Xingwei; Zheng, Zhongguo; Geng, Zhi

    2005-10-15

    The weakest sufficient condition for the identifiability of causal effects is the weakly ignorable treatment assignment, which implies that potential responses are independent of treatment assignment in each fine subpopulation stratified by a covariate. In this paper, we expand the independence that holds in fine subpopulations to the case in which the independence may also hold in several coarse subpopulations, each of which consists of several fine subpopulations and may overlap with other coarse subpopulations. We first show that causal effects are identifiable if and only if the coarse subpopulations partition the whole population. We then propose a principle, called the minimum variance principle, which says that the estimator possessing the minimum variance is preferred when dealing with the stratification and estimation of causal effects. Simulation results, together with detailed programs, and a practical example demonstrate that this is a feasible and reasonable way to achieve our goals. PMID:16149123

  18. Variance computations for functional of absolute risk estimates

    OpenAIRE

    Pfeiffer, R. M.; E. Petracci

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function base...

  19. Convergence of Recursive Identification for ARMAX Process with Increasing Variances

    Institute of Scientific and Technical Information of China (English)

    JIN Ya; LUO Guiming

    2007-01-01

    The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model is a finite mixture of the ARMA component and external inputs. In this paper we focus on parameter estimation for the ARMAX model. Classical modeling methods are usually based on the assumption that the driving noise in the moving average (MA) part has bounded variances, while in the model considered here the variances of the noise may increase as a power of log n. The plant parameters are identified by the recursive stochastic gradient algorithm. The diminishing excitation technique and some results from martingale difference theory are adopted in order to prove the convergence of the identification. Finally, some simulations are given to illustrate the theoretical results.

  20. Constraining the local variance of H0 from directional analyses

    Science.gov (United States)

    Bengaly, C. A. P., Jr.

    2016-04-01

    We evaluate the local variance of the Hubble Constant H0 with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble Constant H0 from standard candles (H0 = 73.8±2.4 km s-1 Mpc-1) with that of the Planck Cosmic Microwave Background data (H0 = 67.8±0.9 km s-1 Mpc-1). We find that H0 ranges from 68.9±0.5 km s-1 Mpc-1 to 71.2±0.7 km s-1 Mpc-1 across the celestial sphere (1σ uncertainty), implying a maximal Hubble Constant variance of δH0 = (2.30±0.86) km s-1 Mpc-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL), for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H0 determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.

  1. Variance optimal sampling based estimation of subset sums

    CERN Document Server

    Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

    From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\\log k)$ time, which is optimal even on the word RAM.

  2. Validation technique using mean and variance of kriging model

    International Nuclear Information System (INIS)

    Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only incurs considerable computational cost but also cannot quantitatively measure the fidelity of the metamodel. Recently, the average validation technique was proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than cross-validation because it explicitly integrates the kriging model to obtain an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, so it can be used as a stopping criterion for sequential sampling.

  3. Use of replicated Latin hypercube sampling to estimate sampling variance in uncertainty and sensitivity analysis results for the geologic disposal of radioactive waste

    International Nuclear Information System (INIS)

    The 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, used a Latin hypercube sample (LHS) of size 300 in the propagation of the epistemic uncertainty present in 392 analysis input variables. To assess the adequacy of this sample size, the 2008 YM PA was repeated with three independently generated (i.e., replicated) LHSs of size 300 from the indicated 392 input variables and their associated distributions. Comparison of the uncertainty and sensitivity analysis results obtained with the three replicated LHSs showed that the three samples lead to similar results and that the use of any one of three samples would have produced the same assessment of the effects and implications of epistemic uncertainty. Uncertainty and sensitivity analysis results obtained with the three LHSs were compared by (i) simple visual inspection, (ii) use of the t-distribution to provide a formal representation of sample-to-sample variability in the determination of expected values over epistemic uncertainty and other distributional quantities, and (iii) use of the top down coefficient of concordance to determine agreement with respect to the importance of individual variables indicated in sensitivity analyses performed with the replicated samples. The presented analyses established that an LHS of size 300 was adequate for the propagation and analysis of the effects and implications of epistemic uncertainty in the 2008 YM PA. - Highlights: ► Replicated Latin hypercube sampling in the 2008 Yucca Mountain (YM) performance assessment (PA) is described. ► Stability of uncertainty and sensitivity analysis results is demonstrated. ► Presented results establish that a sample of size 300 is adequate for the propagation of epistemic uncertainty in the 2008 YM PA.
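The Latin hypercube construction underlying the replicated samples can be illustrated with a minimal sketch. This is a generic stratified construction, not the 2008 YM PA code: each of the n strata per dimension is hit exactly once, and replicated samples are obtained simply by rerunning with independent seeds:

```python
import random

def latin_hypercube(n, d, rng=random):
    """One LHS of n points in [0,1]^d: each dimension is split into n
    equal strata, and each stratum is sampled exactly once."""
    cols = []
    for _ in range(d):
        perm = list(range(n))       # stratum assignment for this dimension
        rng.shuffle(perm)
        # jitter each point uniformly within its stratum
        cols.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]

# three independently seeded replicates of size 300, as in the 2008 YM PA
# (here in 3 dimensions rather than the PA's 392 input variables)
replicates = [latin_hypercube(300, 3, random.Random(seed)) for seed in (1, 2, 3)]
lhs = replicates[0]
```

Agreement of uncertainty and sensitivity results across such independently generated replicates is exactly the stability check the abstract describes.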

  4. Recursive identification of time-varying systems using minimum variance

    OpenAIRE

    Tian, Y.; M. Wahl; Vasseur, C.

    2003-01-01

    This paper presents a new on-line identification method based on minimum variance using a sliding data window in order to improve robustness against noise and effective tracking ability. This method is based on a local linearization model between the sliding data window bounds and on an incremental procedure. The corresponding algorithm is applied to a non-linear fermentation process. The results illustrate the performances of this method in comparison with other existent techniques.

  5. Variance estimation for two-class and multi-class ROC analysis using operating point averaging

    Czech Academy of Sciences Publication Activity Database

    Paclík, P.; Lai, C.; Novovičová, Jana; Duin, R.P.W.

    Tampa : IEEE, 2008, s. 1-4. ISBN 978-1-4244-2174-9. [International Conference on Pattern Recognition 2008 /19./. Tampa (US), 08.12.2008-11.12.2008] Institutional research plan: CEZ:AV0Z10750506 Keywords : receiver operator characteristic analysis * classification Subject RIV: IN - Informatics, Computer Science http://library.utia.cas.cz/separaty/2008/RO/novovicova-variance estimation for two-class and multi-class roc analysis using operating point averaging.pdf

  6. Variances of the components and magnitude of the polar heliospheric magnetic field

    Science.gov (United States)

    Balogh, A.; Horbury, T. S.; Forsyth, R. J.; Smith, E. J.

    1995-01-01

    The heliolatitude dependences of the variances in the components and the magnitude of the heliospheric magnetic field have been analysed, using the Ulysses magnetic field observations from close to the ecliptic plane to 80° south solar latitude. The normalized variances in the components of the field increased significantly (by a factor of about 5) as Ulysses entered the purely polar flows from the southern coronal hole. At the same time, there was at most a small increase in the variance of the field magnitude. The analysis of the different components indicates that the power in the fluctuations is not isotropically distributed: most of the power is in the components of the field transverse to the radial direction. Examining the variances calculated over different time scales, from minutes to hours, shows that the anisotropy of the field variances differs between scales, indicating the influence of the two distinct populations of fluctuations in the polar solar wind which have been previously identified. We discuss these results in terms of evolutionary, dynamic processes as a function of heliocentric distance and as a function of the large scale geometry of the magnetic field associated with the polar coronal hole.

  7. A generalization of Talagrand's variance bound in terms of influences

    CERN Document Server

    Kiss, Demeter

    2010-01-01

    Consider a random variable of the form f(X_1,...,X_n), where f is a deterministic function, and where X_1,...,X_n are i.i.d random variables. For the case where X_1 has a Bernoulli distribution, Talagrand (1994) gave an upper bound for the variance of f in terms of the individual influences of the variables X_i for i=1,...,n. We generalize this result to the case where X_1 takes finitely many values.

  8. Real Variance Estimation of BEAVRS whole core benchmark in Monte Carlo Eigenvalue Calculations

    International Nuclear Information System (INIS)

    For whole core analysis by MC eigenvalue calculations, some severe problems are encountered because these systems have higher dominance ratios (DRs) than a fuel assembly (FA) or critical facilities. It is well known that the apparent variance of a local tally, such as pin power, can differ considerably from the real variance. In the McCARD code, four approaches for real variance estimation were implemented. This benchmark provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components at three-dimensional (3D) scale. In this study, we perform real variance estimation of the MC tally for design parameters such as keff, pin fission power, and FA-wise fission power for the BEAVRS fresh core using McCARD. In addition, this paper briefly presents a new method to estimate the real variance, called the history-based sampling (HS) method. The real variance estimations for the BEAVRS whole core benchmark were performed using Gelbard's batch method, Ueki's inter-cycle correction method, and Shim's HB method, which are implemented in McCARD. As expected, it was observed that the apparent variance of a local MC tally estimate, such as pin or FA-wise fission power, tends to be smaller than its real variance, while that of a global MC tally such as keff is comparable to the reference. To investigate the difference in the real-to-apparent variance ratio between global and local MC tallies, the correlation coefficients between each pin or FA fission power were calculated using McCARD. Because the correlation coefficients between neighboring pins are near 1.0, the error due to FSD inter-cycle correlation propagates. The HS method has several advantages over the HB method: it is very easy to implement in an existing MC code, and it does not require additional parameters such as
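The batch idea behind Gelbard's method can be sketched generically. This is not McCARD code; the AR(1) series below is a hypothetical stand-in for correlated cycle-wise tallies, used only to show how inter-cycle correlation makes the apparent variance underestimate the real one:

```python
import random
import statistics

def apparent_variance(cycle_tallies):
    """Variance of the mean assuming independent cycles (the apparent variance)."""
    return statistics.variance(cycle_tallies) / len(cycle_tallies)

def batch_variance(cycle_tallies, batch_size):
    """Batch-method estimate: average consecutive cycles into batches and take
    the variance of the batch means; correlation between nearby cycles is
    largely absorbed inside each batch."""
    b = len(cycle_tallies) // batch_size
    means = [sum(cycle_tallies[i * batch_size:(i + 1) * batch_size]) / batch_size
             for i in range(b)]
    return statistics.variance(means) / b

# hypothetical positively correlated cycle tallies (AR(1) with coefficient 0.9)
random.seed(1)
tallies, x = [], 0.0
for _ in range(10000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    tallies.append(x)

av = apparent_variance(tallies)   # too small: ignores cycle-to-cycle correlation
bv = batch_variance(tallies, 50)  # closer to the real variance of the mean
```

With strong positive correlation the batch estimate is several times larger than the apparent one, which mirrors the pin-power behaviour described in the abstract.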

  9. Budgeting and controllable cost variances. The case of multiple diagnoses, multiple services, and multiple resources.

    Science.gov (United States)

    Broyles, R W; Lay, C M

    1982-12-01

    This paper examines an unfavorable cost variance in an institution which employs multiple resources to provide stay specific and ancillary services to patients presenting multiple diagnoses. It partitions the difference between actual and expected costs into components that are the responsibility of an identifiable individual or group of individuals. The analysis demonstrates that the components comprising an unfavorable cost variance are attributable to factor prices, the use of real resources, the mix of patients, and the composition of care provided by the institution. In addition, the interactive effects of these factors are also identified. PMID:7183731
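The partition described above can be shown in miniature with the standard two-factor decomposition into price, quantity, and interaction components. This is a hypothetical illustration with invented figures; the paper's partition over multiple diagnoses, services, and resources is richer:

```python
def partition_cost_variance(p_actual, q_actual, p_std, q_std):
    """Split total cost variance (actual - expected cost) into components:
       price variance       = (Pa - Ps) * Qs   (responsibility: purchasing)
       quantity variance    = (Qa - Qs) * Ps   (responsibility: resource use)
       interaction variance = (Pa - Ps) * (Qa - Qs)
    The three components sum exactly to Pa*Qa - Ps*Qs."""
    price = (p_actual - p_std) * q_std
    quantity = (q_actual - q_std) * p_std
    interaction = (p_actual - p_std) * (q_actual - q_std)
    return price, quantity, interaction

# hypothetical figures: nursing hours at $22 actual vs $20 standard rate,
# 1100 hours used vs 1000 budgeted
price, qty, inter = partition_cost_variance(22.0, 1100.0, 20.0, 1000.0)
total = 22.0 * 1100.0 - 20.0 * 1000.0  # unfavorable variance of 4200
```

Each component can then be attributed to an identifiable individual or group, which is the point of the partition in the abstract.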

  10. Influence of genetic variance on sodium sensitivity of blood pressure.

    Science.gov (United States)

    Luft, F C; Miller, J Z; Weinberger, M H; Grim, C E; Daugherty, S A; Christian, J C

    1987-02-01

    To examine the effect of genetic variance on blood pressure, sodium homeostasis, and its regulatory determinants, we studied 37 pairs of monozygotic twins and 18 pairs of dizygotic twins under conditions of volume expansion and contraction. We found that, in addition to blood pressure and body size, sodium excretion in response to provocative maneuvers, glomerular filtration rate, the renin-angiotensin system, and the sympathetic nervous system are influenced by genetic variance. To elucidate the interaction of genetic factors and an environmental influence, namely, salt intake, we restricted dietary sodium in 44 families of twin children. In addition to a modest decrease in blood pressure, we found heterogeneous responses in blood pressure indicative of sodium sensitivity and resistance which were normally distributed. Strong parent-offspring resemblances were found in baseline blood pressures which persisted when adjustments were made for age and weight. Further, mother-offspring resemblances were observed in the change in blood pressure with sodium restriction. We conclude that the control of sodium homeostasis is heritable and that the change in blood pressure with sodium restriction is familial as well. These data speak to the interaction between the genetic susceptibility to hypertension and environmental influences which may result in its expression. PMID:3553721

  11. Variance reduction techniques in the simulation of Markov processes

    International Nuclear Information System (INIS)

    We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to perform the analytical calculation of r and so it is reasonable to estimate r by a simulation process. A consistent estimator r(n) of r is obtained with respect to a chain with a countable state space. Suitably modifying the estimator r(n) of r one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of finite state space

  12. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  13. The efficiency of the crude oil markets: Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable.
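The classical variance ratio statistic underlying these tests can be sketched as follows. This shows the basic Lo-MacKinlay ratio only; the rank- and sign-based variants of Wright and the wild-bootstrap refinements used in the study are not reproduced here:

```python
import random
import statistics

def variance_ratio(returns, q):
    """Lo-MacKinlay variance ratio VR(q): the variance of overlapping
    q-period return sums divided by q times the one-period variance.
    Under a random walk (uncorrelated returns), VR(q) is close to 1;
    VR > 1 suggests momentum, VR < 1 mean reversion."""
    sums = [sum(returns[i:i + q]) for i in range(len(returns) - q + 1)]
    return statistics.pvariance(sums) / (q * statistics.pvariance(returns))

# i.i.d. simulated returns as a stand-in for an efficient market
random.seed(7)
iid = [random.gauss(0.0, 1.0) for _ in range(20000)]
vr = variance_ratio(iid, 5)  # should be near 1
```

For daily oil returns, rejecting VR(q) = 1 at some horizon q is what the abstract reports as predictability (inefficiency) in the WTI sub-period.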

  14. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    Science.gov (United States)

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  15. Variance of the Quantum Dwell Time for a Nonrelativistic Particle

    Science.gov (United States)

    Hahne, Gerhard

    2012-01-01

    Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, …, of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.

  16. 78 FR 2986 - Northern Indiana Public Service Company; Notice of Application for Temporary Variance of License...

    Science.gov (United States)

    2013-01-15

    ... Variance of License Article 403 and Soliciting Comments, Motions to Intervene and Protests Take notice that... inspection: a. Application Type: Extension of temporary variance of license article 403. b. Project No: 12514... Commission to grant an extension of time to a temporary variance of license Article 403 that was granted...

  17. Cosmic variance of the spectral index from mode coupling

    Science.gov (United States)

    Bramante, Joseph; Kumar, Jason; Nelson, Elliot; Shandera, Sarah

    2013-11-01

    We demonstrate that local, scale-dependent non-Gaussianity can generate cosmic variance uncertainty in the observed spectral index of primordial curvature perturbations. In a universe much larger than our current Hubble volume, locally unobservable long wavelength modes can induce a scale-dependence in the power spectrum of typical subvolumes, so that the observed spectral index varies at a cosmologically significant level (|Δn_s| ~ O(0.04)). Similarly, we show that the observed bispectrum can have an induced scale dependence that varies about the global shape. If tensor modes are coupled to long wavelength modes of a second field, the locally observed tensor power and spectral index can also vary. All of these effects, which can be introduced in models where the observed non-Gaussianity is consistent with bounds from the Planck satellite, loosen the constraints that observations place on the parameters of theories of inflation with mode coupling. We suggest observational constraints that future measurements could aim for to close this window of cosmic variance uncertainty.

  18. Realized Daily Variance of S&P 500 Cash Index: A Revaluation of Stylized Facts

    OpenAIRE

    Shirley J. Huang; Qianqiu Liu; Jun Yu

    2007-01-01

    In this paper the realized daily variance is obtained from intraday transaction prices of the S&P 500 cash index over the period from January 1993 to December 2004. When constructing realized daily variance, market microstructure noise is taken into account using a technique proposed by Zhang, Mykland and Ait-Sahalia (2005). The time series properties of realized daily variance are compared with those of variance estimates obtained from parametric GARCH and stochastic volatility models. Uncon...

  19. Application of a Two-Factor Variance Analysis Model in Hall Measurement

    Institute of Scientific and Technical Information of China (English)

    罗明海; 韩亚萍; 张凯; 侯纪伟; 王金鑫; 高雪连

    2011-01-01

    In experiments measuring the Hall effect of semiconductor materials, the variation of the Hall voltage at different temperatures and different magnetic field strengths is often studied, and the measured Hall voltage fluctuates. This article establishes a two-factor analysis of variance mathematical model to analyze the experimental data and to determine whether the different experimental conditions have a significant effect on the experimental results.
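A two-factor analysis of variance of this kind can be sketched as follows. The data layout (temperature levels x field-strength levels x replicates) and all numbers are hypothetical stand-ins, not the experiment's actual measurements:

```python
import numpy as np

def two_way_anova_f(y):
    """Two-factor ANOVA with replication.  y has shape (a, b, n):
    a temperature levels, b field-strength levels, n replicates.
    Returns the F statistics for the two main effects, using the
    within-cell sum of squares as the error term."""
    a, b, n = y.shape
    grand = y.mean()
    ss_a = b * n * np.sum((y.mean(axis=(1, 2)) - grand) ** 2)
    ss_b = a * n * np.sum((y.mean(axis=(0, 2)) - grand) ** 2)
    cell_means = y.mean(axis=2)
    ss_err = np.sum((y - cell_means[:, :, None]) ** 2)
    df_err = a * b * (n - 1)
    f_a = (ss_a / (a - 1)) / (ss_err / df_err)
    f_b = (ss_b / (b - 1)) / (ss_err / df_err)
    return f_a, f_b

# Hypothetical data: a strong temperature effect, no real field effect.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 0.1, size=(3, 3, 5)) + np.arange(3)[:, None, None]
f_temp, f_field = two_way_anova_f(y)
```

Each F statistic is compared against the F distribution with the corresponding degrees of freedom to judge whether the factor has a significant effect; here the temperature F comes out far larger than the field F, matching the simulated effect.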

  20. 77 FR 52711 - Appalachian Power; Notice of Temporary Variance of License and Soliciting Comments, Motions To...

    Science.gov (United States)

    2012-08-30

    ... Federal Energy Regulatory Commission Appalachian Power; Notice of Temporary Variance of License and...: Temporary Variance of License. b. Project No: 739-033. c. Date Filed: August 7, 2012. d. Applicant... filed. k. Description of Application: The licensee requests a temporary variance to allow for a...

  1. Constraining the local variance of $H_0$ from directional analyses

    CERN Document Server

    Bengaly, C A P

    2016-01-01

    We evaluate the local variance of the Hubble Constant $H_0$ with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison procedure to test whether the bulk flow motion can reconcile the measurement of the Hubble Constant $H_0$ from standard candles ($H_0 = 73.8 \\pm 2.4 \\; \\mathrm{km \\; s}^{-1}\\; \\mathrm{Mpc}^{-1}$) with that of Planck's Cosmic Microwave Background data ($67.8 \\pm 0.9 \\; \\mathrm{km \\; s}^{-1} \\mathrm{Mpc}^{-1}$). We find that $H_0$ ranges from $68.9 \\pm 0.5 \\; \\mathrm{km \\; s}^{-1} \\mathrm{Mpc}^{-1}$ to $71.2 \\pm 0.7 \\; \\mathrm{km \\; s}^{-1} \\mathrm{Mpc}^{-1}$ through the celestial sphere, with maximal dipolar anisotropy towards the $(l,b) = (315^{\\circ},27^{\\circ})$ direction. Interestingly, this result is in good agreement with both $H_0$ estimates, as well as the bulk flow direction reported in the literature. In addition, we assess the statistical significance of this variance with different prescriptions of Monte Carlo simulations, finding a goo...

  2. Simple Variance Swaps

    OpenAIRE

    Ian Martin

    2011-01-01

    The large asset price jumps that took place during 2008 and 2009 disrupted volatility derivatives markets and caused the single-name variance swap market to dry up completely. This paper defines and analyzes a simple variance swap, a relative of the variance swap that in several respects has more desirable properties. First, simple variance swaps are robust: they can be easily priced and hedged even if prices can jump. Second, simple variance swaps supply a more accurate measure of market-imp...

  3. [Spatial variance characters of urban synthesis pattern indices at different scales].

    Science.gov (United States)

    Yue, Wenze; Xu, Jianhua; Xu, Lihua; Tan, Wenqi; Mei, Anxin

    2005-11-01

    Scale holds the key to understanding pattern-process interactions and is one of the cornerstone concepts in landscape ecology. Geographic Information System and remote sensing techniques provide effective tools for characterizing spatial pattern and spatial heterogeneity at different scales. As an example, these techniques were applied to analyze the urban landscape diversity index, contagion index and fractal dimension on SPOT remote sensing images at four scales. This paper modeled the semivariogram of these three landscape indices at different scales; the results indicated that the spatial variance characters of the diversity index, contagion index and fractal dimension were similar across scales, all showing spatial dependence. Spatial dependence appeared at every scale: the smaller the scale, the stronger the spatial dependence, and as the scale was reduced, more details of spatial variance were revealed. The contribution of spatial autocorrelation of these three indices to total spatial variance increased gradually, but when the scale was very small, spatial variance analysis would destroy the interior structure of the landscape system. The semivariogram models of the different landscape indices differed markedly at the same scale, indicating that these models were incomparable across scales. Based on these analyses and a study of urban land-use structure, a 1 km extent was the most suitable scale for studying the spatial variance of the urban landscape pattern in Shanghai. The spatial variance of the landscape indices was scale-dependent, a function of scale; the results differed with the scale chosen, and thus the influence of scale on pattern cannot be neglected in landscape ecology research. The changes of these three landscape indices displayed the regularity of urban spatial structure at different scales, i.e., they were complicated and without regularity at small scale, polycentric…
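The semivariogram modelling described above can be illustrated on a one-dimensional transect. A minimal sketch, where the random-walk input is a stand-in for a spatially autocorrelated landscape index, not the study's data:

```python
import numpy as np

def semivariogram(x, max_lag):
    """Empirical semivariogram of a 1-D transect:
    gamma(h) = 0.5 * mean((x[i+h] - x[i])**2) for lag h = 1..max_lag."""
    x = np.asarray(x, dtype=float)
    return np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

# A spatially autocorrelated series (random walk) shows gamma rising
# with lag -- the signature of spatial dependence discussed above.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(5000))
gamma = semivariogram(walk, 10)
```

A flat semivariogram (pure nugget) would instead indicate no spatial dependence; fitting a model such as a spherical or exponential curve to `gamma` gives the range and sill parameters used to compare scales.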

  4. 42 CFR 456.524 - Notification of Administrator's action and duration of variance.

    Science.gov (United States)

    2010-10-01

    ... of variance. 456.524 Section 456.524 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.524 Notification of Administrator's action and duration...

  5. 75 FR 22424 - Avalotis Corp.; Grant of a Permanent Variance

    Science.gov (United States)

    2010-04-28

    ...), and 74 FR 41742 (August 18, 2009)).\\1\\ \\1\\ Zurn Industries, Inc. received two permanent variances from OSHA. The first variance, granted on May 14, 1985 (50 FR 20145), addressed the boatswain's-chair... proposed alternatives (see 38 FR 8545 (April 3, 1973), 44 FR 51352 (August 31, 1979), 50 FR 20145 (May...

  6. Analysis of Efficiency Variances of the Thermal Power Industry among China's Provinces with Undesirable Outputs Considered

    Institute of Scientific and Technical Information of China (English)

    曲茜茜; 解百臣; 殷可欣

    2012-01-01

    As the dominant form of generation in China's power industry, thermal power produces undesirable outputs such as carbon emissions, sulfur emissions and auxiliary power consumption while using fossil fuels and labor. To align with the global trend of sustainable development of the economy and environment, this paper analyzes the efficiency variances of the thermal power industry among China's thirty provinces from 2005 to 2009 with an SBM-DEA (slacks-based measure data envelopment analysis) model that accounts for undesirable outputs. Giving more information on slacks and the production frontier, this approach supports further analysis of efficiency variances and evaluates the impact of the thermal power industry on regional sustainable development more objectively. First, combining national policies and power structures, we analyze the reasons for efficiency variances as well as the impact of resource endowment on the thermal power industry; second, we rank the provinces by SBM efficiency in 2008 and discuss the dual prices of inputs and outputs; finally, under the assumption of sufficient power supply, we study the critical factors that can improve the efficiency of the thermal power industry and help coordinate the development of resources and the environment. Several valuable conclusions follow: 1) with undesirable outputs considered, the evaluation results better reflect the needs of harmonious and sustainable development of resources and the environment; 2) policy reforms such as "building large power stations while dismantling small plants" and "mine-mouth power plants" influence the efficiency of power systems, and some have achieved remarkable effects; 3) resource endowment determines power structures to a certain extent, which influences the efficiency of power systems as well as economic development. In short, the efficiency variances of the thermal power industry arise from several aspects. To achieve the purpose of coordinating…

  7. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    MonikaFleischhauer

    2013-09-01

    Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign, biases that might result, for example, from the IAT’s stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis. However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis, a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  8. On the possibilistic mean value and variance of multiplication of fuzzy numbers

    Science.gov (United States)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    In this paper, we introduce the definitions of the possibilistic mean, variance and covariance of multiplication of fuzzy numbers, and show some properties of these definitions. Then, we apply these definitions to build the possibilistic models of portfolio selection under the situations involving uncertainty over the time horizon, by considering the portfolio selection problem from the point of view of possibilistic analysis. Moreover, numerical experiments with real market data indicate that our approach results in better portfolio performance.

  9. 31 CFR 15.737-16 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... POST EMPLOYMENT CONFLICT OF INTEREST Administrative Enforcement Proceedings § 15.737-16 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the...

  10. 29 CFR 1905.6 - Public notice of a granted variance, limitation, variation, tolerance, or exemption.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Public notice of a granted variance, limitation, variation... SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS... General § 1905.6 Public notice of a granted variance, limitation, variation, tolerance, or...

  11. MEAN SQUARED ERRORS OF BOOTSTRAP VARIANCE ESTIMATORS FOR U-STATISTICS

    OpenAIRE

    Mizuno, Masayuki; Maesono, Yoshihiko

    2011-01-01

    In this paper, we obtain an asymptotic representation of the bootstrap variance estimator for a class of U-statistics. Using this representation, we obtain the mean squared error of the variance estimator up to order n^. We also compare the bootstrap and jackknife variance estimators theoretically.
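The bootstrap variance estimator under study can be sketched as follows, using the unbiased sample variance (a U-statistic with kernel (x - y)^2 / 2) as the target. This illustrates the estimator itself, not the paper's asymptotic analysis:

```python
import numpy as np

def bootstrap_variance(x, stat, n_boot=2000, seed=0):
    """Bootstrap estimate of Var(stat(X_1..X_n)): resample with
    replacement, recompute the statistic, take the empirical variance
    of the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    reps = np.array([stat(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    return reps.var(ddof=1)

# The unbiased sample variance is a U-statistic with kernel (x - y)^2 / 2.
u_var = lambda s: s.var(ddof=1)
rng = np.random.default_rng(42)
x = rng.standard_normal(200)
v_boot = bootstrap_variance(x, u_var)
```

For n = 200 standard-normal observations, the true variance of the sample variance is 2*sigma^4/(n-1) ~ 0.01, and the bootstrap estimate lands in that neighbourhood.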

  12. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
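The global minimum-variance weights underlying such a portfolio follow from w = C^{-1} 1 / (1' C^{-1} 1), where C is the covariance matrix of the market returns. A sketch with purely illustrative covariance numbers, not estimates for the actual NAFTA markets:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1),
    i.e. the fully-invested portfolio with the smallest variance."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # C^{-1} 1 without explicit inversion
    return w / w.sum()

# Hypothetical covariance matrix for three markets (illustrative only).
cov = np.array([[0.040, 0.010, 0.012],
                [0.010, 0.090, 0.015],
                [0.012, 0.015, 0.060]])
w = min_variance_weights(cov)
```

By construction the weights sum to one, and the resulting portfolio variance w' C w cannot exceed the variance of any single market; a time-varying version re-estimates C on a rolling window and recomputes w each period.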

  13. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    Science.gov (United States)

    Budde, M.E.; Tappan, G.; Rowland, J.; Lewis, J.; Tieszen, L.L.

    2004-01-01

    We calculated the seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
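The local variance technique can be sketched as a moving-window comparison of each pixel against the mean and standard deviation of its neighbourhood. The 3x3 window and the 1.5-sigma threshold here are assumptions for illustration, not the study's parameters:

```python
import numpy as np

def flag_anomalies(img, k=1.5):
    """Flag each interior pixel as +1 (positive anomaly), -1 (negative
    anomaly) or 0 (normal) by comparing it with the mean and standard
    deviation of its 3x3 neighbourhood."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=int)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            mu, sd = win.mean(), win.std()
            if sd > 0 and img[i, j] > mu + k * sd:
                out[i, j] = 1
            elif sd > 0 and img[i, j] < mu - k * sd:
                out[i, j] = -1
    return out

# A bright spot in a flat background is flagged as a positive anomaly.
field = np.ones((5, 5))
field[2, 2] = 10.0
flags = flag_anomalies(field)
```

Summing such per-year flag maps over the 7 years then yields the persistence map described in the abstract.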

  14. Estimating the Variance of the K-Step Ahead Predictor for Time-Series

    OpenAIRE

    Tjärnström, Fredrik

    1999-01-01

    This paper considers the problem of estimating the variance of a linear k-step ahead predictor for time series. (The extension to systems including deterministic inputs is straight forward.) We compare the theoretical results with empirically calculated variance on real data, and discuss the quality of the achieved variance estimate.

  15. Asymptotic accuracy of the jackknife variance estimator for certain smooth statistics

    OpenAIRE

    Gottlieb, Alex D

    2001-01-01

    We show that the jackknife variance estimator $v_{jack}$ and the infinitesimal jackknife variance estimator are asymptotically equivalent if the functional of interest is a smooth function of the mean or a smooth trimmed L-statistic. We calculate the asymptotic variance of $v_{jack}$ for these functionals.
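The jackknife variance estimator referred to above has a simple leave-one-out form. A minimal sketch, checked against the classical identity that for the sample mean the jackknife reproduces s^2/n exactly:

```python
import numpy as np

def jackknife_variance(x, stat):
    """Jackknife variance estimate: recompute stat on each leave-one-out
    sample, then v = (n-1)/n * sum((theta_i - theta_bar)^2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

# For the sample mean, the jackknife recovers the textbook s^2 / n.
rng = np.random.default_rng(7)
x = rng.standard_normal(50)
v_jack = jackknife_variance(x, np.mean)
```

The same function applies unchanged to smooth functions of the mean or trimmed L-statistics, the functionals treated in the paper; the infinitesimal jackknife instead perturbs observation weights rather than deleting points.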

  16. Monochromaticity of orientation maps in v1 implies minimum variance for hypercolumn size.

    Science.gov (United States)

    Afgoustidis, Alexandre

    2015-01-01

    In the primary visual cortex of many mammals, the processing of sensory information involves recognizing stimuli orientations. The distribution of preferred orientations of neurons in some areas is remarkable: a repetitive, non-periodic layout. This repetitive pattern is understood to be fundamental for basic non-local aspects of vision, like the perception of contours, but important questions remain about its development and function. We focus here on Gaussian Random Fields, which provide a good description of the initial stage of orientation map development and, in spite of shortcomings we will recall, a computable framework for discussing general principles underlying the geometry of mature maps. We discuss the relationship between the notion of column spacing and the structure of correlation spectra; we prove formulas for the mean value and variance of column spacing, and we use numerical analysis of exact analytic formulae to study the variance. Referring to studies by Wolf, Geisel, Kaschube, Schnabel, and coworkers, we also show that spectral thinness is not an essential ingredient to obtain a pinwheel density of π, whereas it appears as a signature of Euclidean symmetry. The minimum variance property associated to thin spectra could be useful for information processing, provide optimal modularity for V1 hypercolumns, and be a first step toward a mathematical definition of hypercolumns. A measurement of this property in real maps is in principle possible, and comparison with the results in our paper could help establish the role of our minimum variance hypothesis in the development process. PMID:25859421

  17. Estimation of population variance in contributon Monte Carlo

    International Nuclear Information System (INIS)

    Based on the theory of contributons, a new Monte Carlo method known as the contributon Monte Carlo method has recently been developed. The method has found applications in several practical shielding problems. The authors analyze theoretically the variance and efficiency of the new method, by taking moments around the score. In order to compare the contributon game with a game of simple geometrical splitting and also to get the optimal placement of the contributon volume, the moments equations were solved numerically for a one-dimensional, one-group problem using a 10-mfp-thick homogeneous slab. It is found that the optimal placement of the contributon volume is adjacent to the detector; even at its most optimal the contributon Monte Carlo is less efficient than geometrical splitting

  18. A Bound on the Variance of the Waiting Time in a Queueing System

    CERN Document Server

    Eschenfeldt, Patrick; Pippenger, Nicholas

    2011-01-01

    Kingman has shown, under very weak conditions on the interarrival- and service-time distributions, that First-Come-First-Served minimizes the variance of the waiting time among possible service disciplines. We show, under the same conditions, that Last-Come-First-Served maximizes the variance of the waiting time, thereby giving an upper bound on the variance among all disciplines.

  19. A comparison of variance reduction techniques for radar simulation

    Science.gov (United States)

    Divito, A.; Galati, G.; Iovino, D.

    Importance sampling, the extreme value technique (EVT), and its generalization (G-EVT) were compared with respect to reducing the variance of radar simulation estimates. Importance sampling has a greater potential for including a priori information in the simulation experiment, and consequently for reducing the estimation errors. This feature is paid for by a lack of generality of the simulation procedure. The EVT technique is only valid when a probability tail is to be estimated (false-alarm problems) and requires, as the only a priori information, that the considered variate belong to the exponential class. The G-EVT, introducing a shape parameter to be estimated (when unknown), allows smaller estimation errors to be attained than the EVT. The G-EVT and, to a greater extent, the EVT lead to a straightforward and general simulation procedure for probability tail estimation.
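The variance-reduction idea behind importance sampling for false-alarm (tail) probabilities can be sketched with a Gaussian tail: sample from a density shifted into the tail and reweight by the likelihood ratio. The shift-by-t proposal is an illustrative choice, not a radar-specific design:

```python
import numpy as np

def tail_prob_naive(t, n, rng):
    """Naive Monte Carlo estimate of P(Z > t) for standard normal Z."""
    z = rng.standard_normal(n)
    return np.mean(z > t)

def tail_prob_importance(t, n, rng):
    """Importance sampling: draw from N(t, 1) and reweight each sample
    by the likelihood ratio phi(z) / phi(z - t) = exp(t^2/2 - t*z)."""
    z = rng.standard_normal(n) + t
    w = np.exp(t * t / 2 - t * z)
    return np.mean((z > t) * w)

rng = np.random.default_rng(3)
p_naive = tail_prob_naive(4.0, 100_000, rng)       # a handful of hits at best
p_is = tail_prob_importance(4.0, 100_000, rng)     # near P(Z > 4) ~ 3.2e-5
```

With the same sample budget the naive estimator sees only a few exceedances of t = 4, while the shifted estimator hits the tail on roughly half its draws, cutting the estimator variance by orders of magnitude.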

  20. Local orbitals by minimizing powers of the orbital variance

    DEFF Research Database (Denmark)

    Jansik, Branislav; Høst, Stinne; Kristensen, Kasper;

    2011-01-01

    It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function is obtained. For increasing power exponents, the penalty for delocalized orbitals is increased and smaller maximum orbital spreads are encountered. Calculations on superbenzene, C60, and a fragment of the titin protein show that for a power exponent equal to one, delocalized outlier orbitals may be encountered. These disappear when the exponent is larger than one. For a small penalty, the occupied orbitals are more local than the virtual ones. When the penalty is increased, the locality of the occupied and virtual orbitals becomes similar. In fact, when increasing the cardinal number for Dunning...

  1. Bias-variance analysis in estimating true query model for information retrieval

    OpenAIRE

    Zhang, Peng; Song, Dawei; Wang, Jun; Hou, Yue

    2014-01-01

    The estimation of query model is an important task in language modeling (LM) approaches to information retrieval (IR). The ideal estimation is expected to be not only effective in terms of high mean retrieval performance over all queries, but also stable in terms of low variance of retrieval performance across different queries. In practice, however, improving effectiveness can sacrifice stability, and vice versa. In this paper, we propose to study this tradeoff from a new perspective, i.e., ...

  2. Minimum variance linear unbiased estimators of loss and inventory

    International Nuclear Information System (INIS)

    The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data from a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program; could be used with a desk calculator and can be used in a recursive way from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs

  3. Concept design theory and model for multi-use space facilities: Analysis of key system design parameters through variance of mission requirements

    Science.gov (United States)

    Reynerson, Charles Martin

    This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.

  4. Using adapted budget cost variance techniques to measure the impact of Lean – based on empirical findings in Lean case studies

    DEFF Research Database (Denmark)

    Kristensen, Thomas Borup

    2015-01-01

    Lean is a dominant management philosophy, but the management accounting techniques that best support it are still not fully understood, especially how Lean fits traditional budget variance analysis, a main theme of every management accounting textbook. I have studied three Scandinavian companies that perform excellently with Lean and their development of budget variance analysis techniques. Based on these empirical findings, techniques are presented to calculate costs and cost variances in the Lean companies. First, a cost variance is developed to calculate the Lean cost benefits within the budget period by using master budget standards and updated standards; the variance between them represents systematic Lean cost improvements. Second, an additional cost variance calculation technique is introduced to assess improved and systematic cost variances across multiple budget periods...
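The two-layer variance calculation described, splitting the total budget variance into a systematic Lean improvement component (master standard versus updated standard) and a residual operating component (updated standard versus actual), can be sketched with hypothetical figures; every number below is invented for illustration:

```python
# Hypothetical figures: a master-budget standard cost per unit set at
# budget time, an updated standard after systematic Lean improvements,
# and the realized actual cost per unit.
master_std = 100.0    # cost per unit in the master budget
updated_std = 92.0    # standard after systematic Lean improvements
actual_cost = 93.5    # realized cost per unit
volume = 1_000        # units produced in the period

# Variance between master and updated standards: systematic Lean savings.
lean_improvement = (master_std - updated_std) * volume

# Remaining variance against the updated standard: ordinary operating
# performance (negative here means actual cost exceeded the new standard).
operating_variance = (updated_std - actual_cost) * volume
```

Separating the two components keeps the systematic Lean gains visible instead of letting them be absorbed into, or masked by, ordinary favourable or unfavourable operating variances; carrying updated standards forward extends the same comparison across budget periods.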

  5. The ALHAMBRA survey : Estimation of the clustering signal encoded in the cosmic variance

    CERN Document Server

    López-Sanjuan, C; Hernández-Monteagudo, C; Arnalte-Mur, P; Varela, J; Viironen, K; Fernández-Soto, A; Martínez, V J; Alfaro, E; Ascaso, B; del Olmo, A; Díaz-García, L A; Hurtado-Gil, Ll; Moles, M; Molino, A; Perea, J; Pović, M; Aguerri, J A L; Aparicio-Villegas, T; Benítez, N; Broadhurst, T; Cabrera-Caño, J; Castander, F J; Cepa, J; Cerviño, M; Cristóbal-Hornillos, D; Delgado, R M González; Husillos, C; Infante, L; Márquez, I; Masegosa, J; Prada, F; Quintana, J M

    2015-01-01

    The relative cosmic variance ($\\sigma_v$) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the $\\sigma_v$ measured in the ALHAMBRA survey. We measure the cosmic variance of several galaxy populations selected with $B-$band luminosity at $0.35 \\leq z < 1.05$ as the intrinsic dispersion in the number density distribution derived from the 48 ALHAMBRA subfields. We compare the observational $\\sigma_v$ with the cosmic variance of the dark matter expected from the theory, $\\sigma_{v,{\\rm dm}}$. This provides an estimation of the galaxy bias $b$. The galaxy bias from the cosmic variance is in excellent agreement with the bias estimated by two-point correlation function analysis in ALHAMBRA. This holds for different redshift bins, for red and blue subsamples, and for several ...

  6. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
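    The key scaling in this abstract, that the variance of a time-averaged discharge falls in proportion to the number of uncorrelated samples, can be sketched numerically. The values below are hypothetical, and the helper illustrates only the uncorrelated-field assumption, not the authors' full model:

```python
import numpy as np

def discharge_variance_error(sigma, exposure_time, dt):
    """Variance of the time-averaged discharge for uncorrelated samples:
    var(Q_bar) = sigma^2 / N, with N = exposure_time / dt samples."""
    n = exposure_time / dt
    return sigma**2 / n

# Hypothetical turbulent fluctuation strength (m^3/s) and sampling interval (s).
sigma_q, dt = 12.0, 1.0

for t_exp in (60, 300, 600):  # candidate exposure times in seconds
    var = discharge_variance_error(sigma_q, t_exp, dt)
    print(f"T = {t_exp:4d} s  ->  standard error = {np.sqrt(var):.2f} m^3/s")
```

    Longer exposure times shrink the standard error as 1/sqrt(T), which is the basis for choosing an exposure time that meets a target discharge uncertainty.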

  7. Sensitivity analysis of a final repository model with quasi-discrete behaviour using quasi-random sampling and a metamodel approach in comparison to other variance-based techniques

    International Nuclear Information System (INIS)

    This paper contributes to the investigation of recent computationally efficient variance-based methods for sensitivity analysis and sampling schemes on the basis of a Performance Assessment (PA) model for a repository for Low- and Intermediate-Level Radioactive Waste (LILW) in an abandoned salt mine. The PA model takes account of typical characteristics of repository systems including a quasi-discrete nature. The results indicate best convergence of the State-Dependent-Parameter (SDP) metamodel method and the simple first-order sensitivity indices (SI1) calculation scheme (EASI) if combined with the LpTau low discrepancy sampling method. The SI1 indices obtained from the SDP and EASI methods are very similar. The Fourier Amplitude Sensitivity Test (EFAST) seems to have some convergence problems, likely due to the quasi-discrete behaviour of the PA model. Due to parameter interactions, none of the investigated methods, however, could directly determine all significant parameters. This is illustrated by means of a simplified way of factor fixing. - Highlights: • Recent methods for sensitivity analysis and sampling were investigated. • Focus of the methods was to reduce CPU cost to obtain sensitivity indices. • The methods were applied to a complex, non-continuous model. • Best results of the methods were obtained in combination with LpTau sampling. • The SDP metamodel approach and the simple EASI scheme yielded very similar results

  8. Long Memory Modelling of Inflation with Stochastic Variance and Structural Breaks

    OpenAIRE

    C.S. Bos; Ooms, M.; Koopman, S.J.

    2007-01-01

    We investigate changes in the time series characteristics of postwar U.S. inflation. In a model-based analysis the conditional mean of inflation is specified by a long memory autoregressive fractionally integrated moving average process and the conditional variance is modelled by a stochastic volatility process. We develop a Monte Carlo maximum likelihood method to obtain efficient estimates of the parameters using a monthly dataset of core inflation for which we consider different subsamples...

  9. Variance Analysis of Public's Willingness to Adopt Solar Photovoltaic Power Generation

    Institute of Scientific and Technical Information of China (English)

    丁丽萍; 李文静; 帅传敏

    2015-01-01

    Using data from 330 questionnaires, this paper adopts single-factor analysis of variance (Dunnett's T3 test) to examine differences in the public's willingness to adopt solar photovoltaic power generation across age groups, education levels, income levels, genders and professions. A regression analysis is then conducted on the demographic variables affecting the willingness of the public to adopt solar photovoltaic power generation. Finally, some policy recommendations are offered based on the results of the above analyses.
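    As a rough sketch of the omnibus step of such an analysis, the fragment below runs a one-way ANOVA on hypothetical willingness scores for three age groups with SciPy. The Dunnett's T3 post-hoc comparisons used in the paper are not part of SciPy and are omitted here; all data are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical willingness scores (1-5 Likert scale) for three age groups.
young  = rng.normal(3.8, 0.6, 40)
middle = rng.normal(3.5, 0.9, 45)
older  = rng.normal(3.1, 1.1, 35)

# Omnibus one-way ANOVA (assumes equal variances; the paper instead used
# Dunnett's T3, which is robust to unequal group variances).
f_stat, p_val = stats.f_oneway(young, middle, older)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Levene's test checks the equal-variance assumption that motivates T3.
w_stat, p_lev = stats.levene(young, middle, older)
print(f"Levene W = {w_stat:.2f}, p = {p_lev:.4f}")
```

    A small Levene p-value would argue for a variance-robust post-hoc procedure such as Dunnett's T3 rather than classical pairwise t-tests.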

  10. A Comparison of Potential Temperature Variance Budgets over Vegetated and Non-Vegetated Surface

    Science.gov (United States)

    Hang, C.; Nadeau, D.; Jensen, D. D.; Pardyjak, E.

    2015-12-01

    Over the past decades, researchers have achieved a fundamental understanding of the budgets of turbulent variables over simplified and (more recently) complex terrain. However, potential temperature variance budgets, parameterized in most meteorological models, are still poorly understood even under relatively idealized conditions. Although each term of the potential temperature variance budget has been studied over different stabilities and surfaces, a detailed understanding of turbulent heat transport over different types of surfaces is still missing. The objectives of this study are thus: 1) to quantify the significant terms in the potential temperature variance budget equation; 2) to show the variability of the budget terms as a function of height and stability; 3) to model the potential temperature variance decay in the late-afternoon and early-evening periods. To do this, we rely on near-surface turbulence observations collected within the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) program, which was designed to better understand the physics governing processes in mountainous terrain. As part of MATERHORN, large field campaigns were conducted in October 2012 and May 2013 in western Utah. Here, we contrast two field sites: a desert playa (dry lakebed), characterized by a flat surface devoid of vegetation, and a vegetated site, characterized by a low-elevation valley floor covered with greasewood vegetation. As expected, preliminary data analysis reveals that the production and molecular dissipation terms play important roles in the variance budget; however, the turbulent transport term is also significant during certain time periods at lower levels (i.e., below 5 m). Our results also show that all three terms decrease with increasing height below 10 m and remain almost constant between 10 m and 25 m, which indicates an extremely shallow surface layer (i.e., ~10 m). Further, at all heights and times an imbalance between production and ...

  11. Statistical weights as variance reduction method in back-scattered gamma radiation Monte Carlo spectrometry analysis of thickness gauge detector response

    International Nuclear Information System (INIS)

    The possibility of determining physical quantities (such as the number of particles behind shields of given thickness, energy spectra, detector responses, etc.) with a satisfactory statistical uncertainty, in a relatively short computing time, can be used as a measure of the efficiency of a Monte Carlo method. The numerical simulation of rare events with a straightforward Monte Carlo method is inefficient due to the great number of histories without scores. In this paper, for the specific geometry of a gamma-backscatter thickness gauge, with 60Co and 137Cs as gamma sources, the back-scattered gamma spectrum, the probabilities of back-scattering and the spectral characteristics of the detector response were determined using a nonanalog Monte Carlo game with statistical weights applied. (author)

  12. Age and Gender Differences Associated with Family Communication and Materialism among Young Urban Adult Consumers in Malaysia: A One-Way Analysis of Variance (ANOVA)

    OpenAIRE

    Eric V. Bindah; Md Nor Othman

    2012-01-01

    The main purpose of this study is to examine age and gender differences in the various types of family communication patterns that take place at home among young adult consumers. It also examines whether there are age and gender differences in the development of materialistic values in Malaysia. This paper briefly conceptualizes the family communication processes based on existing literature to illustrate the association between family communication patterns and mater...

  13. Estimation of Variance Components for Litter Size in the First and Later Parities in Improved Jezersko-Solcava Sheep

    Directory of Open Access Journals (Sweden)

    Dubravko Škorput

    2011-12-01

    Full Text Available The aim of this study was to estimate variance components for litter size in Improved Jezersko-Solcava sheep. The analysis involved 66,082 records from 12,969 animals for the number of lambs born in all parities (BA), the first parity (B1), and later parities (B2+). The fixed part of the model contained the effects of season and age at lambing within parity. The random part of the model contained the effects of herd, a permanent environmental effect (for repeatability models), and the additive genetic effect. Variance components were estimated using the restricted maximum likelihood method. The average number of lambs born was 1.36 in the first parity, while the average in later parities was 1.59, leading also to about 20% higher variance. Several models were tested in order to accommodate the markedly different variability in litter size between the first and later parities: a single trait model (for BA, B1, and B2+), a two-trait model (for B1 and B2+), and a single trait model with heterogeneous residual variance (for BA). Comparison of variance components between models showed the largest differences for the residual variance, resulting in a parsimonious fit for the single trait model for BA with heterogeneous residual variance. Correlations among breeding values from different models were high and showed the remarkable performance of the standard single trait repeatability model for BA.

  14. ESTIMATES OF GENETIC VARIANCE COMPONENT OF AN EQUILIBRIUM POPULATION OF CORN

    OpenAIRE

    Hamirul Hadini; Nasrullah; Taryono .; Panjisakti Basunanda

    2015-01-01

    There are abundant maize populations in Hardy-Weinberg equilibrium, which can be used as sources of genes to develop either a hybrid variety or an open-pollinated variety. Genetic parameters of a population, such as the additive genetic variance and the variance due to dominance, which can be estimated using North Carolina Design I, are used to decide which breeding method to apply. The objectives of this research were to estimate the genetic variance components of important quantitative traits in an e...

  15. Analysis of health trait data from on-farm computer systems in the U.S. I: Pedigree and genomic variance components estimation

    Science.gov (United States)

    With an emphasis on increasing profit through increased dairy cow production, a negative relationship with fitness traits such as fertility and health traits has become apparent. Decreased cow health can impact herd profitability through increased rates of involuntary culling and decreased or lost m...

  16. THE VARIANCE AND TREND OF INTEREST RATE – CASE OF COMMERCIAL BANKS IN KOSOVO

    Directory of Open Access Journals (Sweden)

    Fidane Spahija

    2015-09-01

    Full Text Available Today's debate on the interest rate is characterized by three key issues: the interest rate as a phenomenon, the interest rate as a product of factors (dependent variable), and the interest rate as a policy instrument (independent variable). In this article, the interest rate, as the dependent variable, is examined through two statistical measures: the variance and the trend. The interest rates include the price of loans and deposits. The analysis of interest rates on deposits and loans is conducted for non-financial corporations and family economies. This study presents a statistical analysis highlighting the variance and trends of interest rates for the period 2004-2013 for deposits and loans in commercial banks in Kosovo. The interest rate is observed at various levels: is it high, medium or low? Does it show a growing trend, remain constant, or decline? The trend shows whether commercial banks maintain, reduce, or increase the interest rate in response to the policy of the Central Bank of Kosovo. The data obtained will help to determine the impact of the interest rate on the service sector, investment, consumption, and unemployment.

  17. Variances of volume, surface area and length estimates by 3D virtual grids

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří; Kubínová, Lucie

    Bologna : Esculapio, 2009 - (Capasso, V.; Aletti, G.; Micheletti, A.), s. 101-106 ISBN 978-88-7488-310-3. [European Congress of Stereology and Image Analysis /10./. Milano (IT), 22.06.2009-26.06.2009] R&D Projects: GA MŠk(CZ) LC06063; GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : variance * stereology Subject RIV: BA - General Mathematics

  18. Time Variability of Quasars: the Structure Function Variance

    Science.gov (United States)

    MacLeod, C.; Ivezić, Ž.; de Vries, W.; Sesar, B.; Becker, A.

    2008-12-01

    Significant progress in the description of quasar variability has been recently made by employing SDSS and POSS data. Common to most studies is a fundamental assumption that photometric observations at two epochs for a large number of quasars will reveal the same statistical properties as well-sampled light curves for individual objects. We critically test this assumption using light curves for a sample of ~2,600 spectroscopically confirmed quasars observed about 50 times on average over 8 years by the SDSS stripe 82 survey. We find that the dependence of the mean structure function computed for individual quasars on luminosity, rest-frame wavelength and time is qualitatively and quantitatively similar to the behavior of the structure function derived from two-epoch observations of a much larger sample. We also reproduce the result that the variability properties of radio and X-ray selected subsamples are different. However, the scatter of the variability structure function for fixed values of luminosity, rest-frame wavelength and time is similar to the scatter induced by the variance of these quantities in the analyzed sample. Hence, our results suggest that, although the statistical properties of quasar variability inferred using two-epoch data capture some underlying physics, there is significant additional information that can be extracted from well-sampled light curves for individual objects.

  19. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  20. Cosmic variance and the measurement of the local Hubble parameter.

    Science.gov (United States)

    Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel

    2013-06-14

    There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer this tension--strengthened by the recent Planck results--is partially relieved and the concordance of the Standard Model of cosmology increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than that taken into account already. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary. PMID:25165911

  1. Variance and Predictability of Precipitation at Seasonal-to-Interannual Timescales

    Science.gov (United States)

    Koster, Randal D.; Suarez, Max J.; Heiser, Mark

    1999-01-01

    A series of atmospheric general circulation model (AGCM) simulations, spanning a total of several thousand years, is used to assess the impact of land-surface and ocean boundary conditions on the seasonal-to-interannual variability and predictability of precipitation in a coupled modeling system. In the first half of the analysis, which focuses on precipitation variance, we show that the contributions of ocean, atmosphere, and land processes to this variance can be characterized, to first order, with a simple linear model. This allows a clean separation of the contributions, from which we find: (1) land and ocean processes have essentially different domains of influence, i.e., the amplification of precipitation variance by land-atmosphere feedback is most important outside of the regions (mainly in the tropics) that are most affected by sea surface temperatures; and (2) the strength of land-atmosphere feedback in a given region is largely controlled by the relative availability of energy and water there. In the second half of the analysis, the potential for seasonal-to-interannual predictability of precipitation is quantified under the assumption that all relevant surface boundary conditions (in the ocean and on land) are known perfectly into the future. We find that the chaotic nature of the atmospheric circulation imposes fundamental limits on predictability in many extratropical regions. Associated with this result is an indication that soil moisture initialization or assimilation in a seasonal-to-interannual forecasting system would be beneficial mainly in transition zones between dry and humid regions.

  2. Variance bounding Markov chains

    OpenAIRE

    Roberts, Gareth O.; Jeffrey S. Rosenthal

    2008-01-01

    We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L2 functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis–Hastings algorithms.

  3. The Partitioning of Variance Habits of Correlational and Experimental Psychologists: The Two Disciplines of Scientific Psychology Revisited

    Science.gov (United States)

    Krus, David J.; Krus, Patricia H.

    1978-01-01

    The conceptual differences between coded regression analysis and traditional analysis of variance are discussed. Also, a modification of several SPSS routines is proposed which allows for direct interpretation of ANOVA and ANCOVA results in a form stressing the strength and significance of scrutinized relationships. (Author)

  4. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available, as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code.

  5. Comprehensive Study on the Estimation of the Variance Components of Traverse Nets

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper advances a new simplified formula for estimating variance components, sums up the basic rules for calculating the weights of observed values, proposes a circulation method using the increments of weights when estimating the variance components of traverse nets, advances the characteristic-roots method to estimate the variance components of traverse nets, and presents a practical method for simultaneously reducing two real symmetric matrices to diagonal form.

  6. Spatiotemporal Dynamics of the Variance of the Wind Velocity from Mini-Sodar Measurements

    Science.gov (United States)

    Krasnenko, N. P.; Kapegesheva, O. F.; Tarasenkov, M. V.; Shamanaeva, L. G.

    2015-12-01

    Statistical trends of the spatiotemporal dynamics of the variance of the three wind velocity components in the atmospheric boundary layer have been established from Doppler mini-sodar measurements. Over the course of a 5-day period of measurements in the autumn time frame from 12 to 16 September 2003, values of the variance of the x- and y-components of the wind velocity lay in the interval from 0.001 to 10 m2/s2, and for the z-component, from 0.001 to 1.2 m2/s2. They were observed to grow during the morning hours (around 11:00 local time) and in the evening (from 18:00 to 22:00 local time), which is explained by the onset of heating and subsequent cooling of the Earth's surface, which are accompanied by an increase in the motion of the air masses. Analysis of the obtained vertical profiles of the standard deviations of the three wind velocity components showed that growth of σx and σy with altitude is well described by a power-law dependence, with its exponent varying from 0.22 to 1.3 as a function of the time of day, while σz varies according to a linear law. At night (from 00:00 to 5:00 local time) the variance of the z-component changes from 0.01 to 0.56 m2/s2, which is in good agreement with the data available in the literature. Fitting parameters are found and the error of the corresponding fits is estimated, which makes it possible to describe the diurnal dynamics of the wind velocity variance.
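    The reported power-law growth of σx and σy with altitude can be recovered from profile data by a least-squares fit in log-log space. A minimal sketch with synthetic, noiseless values; the exponent 0.7 is chosen arbitrarily within the reported 0.22 to 1.3 range:

```python
import numpy as np

# Synthetic sigma_x profile following sigma = a * z**p (values are invented).
z = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # altitude (m)
sigma_x = 0.5 * z**0.7                         # assumed exponent p = 0.7

# A power law is a straight line in log-log space, so fit log(sigma) vs log(z).
p, log_a = np.polyfit(np.log(z), np.log(sigma_x), 1)
print(f"fitted exponent p = {p:.3f}, prefactor a = {np.exp(log_a):.3f}")
```

    With real profiles the fit residuals give the error estimate mentioned in the abstract.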

  7. Variance reduction in MCMC

    OpenAIRE

    Mira Antonietta; Tenconi Paolo; Bressanini Dario

    2003-01-01

    We propose a general-purpose variance reduction technique for MCMC estimators. The idea is obtained by combining standard variance reduction principles known for regular Monte Carlo simulations (Ripley, 1987) and the zero-variance principle introduced in the physics literature (Assaraf and Caffarel, 1999). The potential of the new idea is illustrated with some toy examples and an application to Bayesian estimation.
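    The standard Monte Carlo control-variate principle that this work builds on can be illustrated outside the MCMC setting. A minimal sketch estimating E[exp(U)] for U ~ Uniform(0,1), using U itself (known mean 1/2) as the control; this is generic background, not the paper's MCMC-specific estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.uniform(0.0, 1.0, n)

# Target: estimate E[exp(U)] for U ~ Uniform(0, 1); the exact value is e - 1.
f = np.exp(u)

# Control variate g(U) = U has known mean 1/2 and is highly correlated with f.
g = u
c = -np.cov(f, g)[0, 1] / np.var(g, ddof=1)  # near-optimal coefficient
f_cv = f + c * (g - 0.5)                      # same mean, smaller variance

print("plain MC estimator variance    :", f.var() / n)
print("control-variate est. variance  :", f_cv.var() / n)
print("estimates:", f.mean(), f_cv.mean())
```

    Because corr(exp(U), U) is close to 1, the control-variate estimator's variance is a small fraction of the plain estimator's while the mean is unchanged.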

  8. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

    Roč. 30, č. 1 (2016), s. 4-11. ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.347, year: 2014

  9. Components of variance and heritability of resistance to important fungal diseases agents in grapevine

    Directory of Open Access Journals (Sweden)

    Nikolić Dragan

    2006-01-01

    Full Text Available In four interspecies crossing combinations of grapevine (Seedling 108 x Muscat Hamburg, Muscat Hamburg x Seedling 108, S.V.I8315 x Muscat Hamburg and Muscat Hamburg x S.V.I2375) over a three-year period, resistance to the agents of important fungal diseases (Plasmopara viticola and Botrytis cinerea) was examined. Based on the results of analysis of variance for the investigated characteristics, components of variance, coefficients of genetic and phenotypic variation and the coefficient of heritability in the broad sense were calculated. It was established that for both characteristics and in all crossing combinations, genetic variance accounted for the largest part of total variability. The lowest coefficients of genetic and phenotypic variation were established for both properties in the crossing combination Seedling 108 x Muscat Hamburg. The highest coefficients of genetic and phenotypic variation were determined for leaf resistance to Plasmopara viticola in the crossing combination Muscat Hamburg x S.V.I2375, and for bunch resistance to Botrytis cinerea in the crossing combination Muscat Hamburg x Seedling 108. Considering all investigated crossing combinations, the coefficient of heritability for leaf resistance to Plasmopara viticola ranged from 87.23% to 94.88%, and for bunch resistance to Botrytis cinerea from 88.04% to 93.32%.
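    A common way to obtain such variance components and a broad-sense heritability is the method-of-moments decomposition of one-way ANOVA mean squares across genotypes. The sketch below uses invented, balanced data, not the paper's measurements:

```python
import numpy as np

# Hypothetical resistance scores: g genotypes (crossing combinations),
# r clonal replicates each (balanced design).
rng = np.random.default_rng(7)
g, r = 8, 6
genotype_effects = rng.normal(0.0, 2.0, g)            # genetic component
scores = genotype_effects[:, None] + rng.normal(0.0, 1.0, (g, r))

# One-way ANOVA mean squares (between genotypes and within genotypes).
grand = scores.mean()
ms_between = r * ((scores.mean(axis=1) - grand) ** 2).sum() / (g - 1)
ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (r - 1))

# Method-of-moments variance components and broad-sense heritability.
var_e = ms_within
var_g = max((ms_between - ms_within) / r, 0.0)
h2 = var_g / (var_g + var_e)
print(f"V_G = {var_g:.2f}, V_E = {var_e:.2f}, H^2 = {h2:.2f}")
```

    With the simulated genetic and environmental variances (4 and 1), the estimated heritability comes out near the high values the paper reports for its real crosses.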

  10. Comparison of bed form variance spectra within a meander bend during flood and average discharge.

    Science.gov (United States)

    Levey, R.A.; Kjerfve, B.; Getzen, R.T.

    1980-01-01

    Time series analysis of streambed elevation in a meander bend along the Congaree River was used to determine the changes in bed form population following a 16-year flood event. Variance spectra computed for a 595 m longitudinal profile indicate that: a) the bed form variance for the flood record is significantly greater for all wavelengths from 5 to 30 m; b) no well-demarcated bed form classes were present during the survey times, pointing to the possible existence of a continuum of bed form sizes rather than well-defined classes; and c) bed forms produced by the flood discharge were rapidly altered as the stage returned toward the average level. -from Authors
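    A variance spectrum of this kind can be computed from a longitudinal bed profile with a periodogram. The sketch below fabricates a 595 m profile with bed forms at 20 m and 8 m wavelengths to show how a dominant wavelength is read off; all values are invented:

```python
import numpy as np
from scipy.signal import periodogram

# Hypothetical streambed elevation profile: 595 m longitudinal transect
# sampled every 1 m, with bed forms at ~20 m and ~8 m wavelengths plus noise.
rng = np.random.default_rng(3)
x = np.arange(595.0)                      # distance along profile (m)
z = (0.30 * np.sin(2 * np.pi * x / 20.0)
     + 0.10 * np.sin(2 * np.pi * x / 8.0)
     + rng.normal(0.0, 0.05, x.size))

# Variance spectrum: spatial frequency (cycles/m) vs spectral density.
freq, psd = periodogram(z, fs=1.0, detrend="linear")
dominant_wavelength = 1.0 / freq[np.argmax(psd[1:]) + 1]
print(f"dominant bed form wavelength ~ {dominant_wavelength:.1f} m")
```

    A continuum of bed form sizes, as the abstract suggests, would show up as a broad band of variance rather than isolated spectral peaks like these.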

  11. Patient population management: taking the leap from variance analysis to outcomes measurement.

    Science.gov (United States)

    Allen, K M

    1998-01-01

    Case managers today at BCHS have a somewhat different role than at the onset of the Collaborative Practice Model. They are seen throughout the organization as: leaders/participants on cross-functional teams; systems change agents; integrators with quality services and utilization management; and outcomes managers. One of the major cross-functional teams is in the process of designing a Care Coordinator role. These individuals will, as one of their functions, assume responsibility for daily patient care management activities. A variance tracking program has come into the Utilization Management (UM) department as part of a software package purchased to automate UM work activities. This variance program could potentially be used by the new care coordinators as the role develops. The case managers are beginning to use Decision Support software (Transition Systems Inc.) to collect data based on a cost accounting system linked to clinical events. Other clinical outcomes databases are now being used by the case managers to help with the collection and measurement of outcomes information. Hoshin planning will continue to be a framework for defining and setting the targets for clinical and financial improvements throughout the organization. Case managers will continue to be involved in many of these system-wide initiatives. In the words of Galileo, 1579, "You need to count what's countable, measure what's measurable, and what's not measurable, make measurable." PMID:9601411

  12. Testing linear forms of variance components by generalized fixed-level tests

    OpenAIRE

    Weimann, Boris

    1998-01-01

    This report extends the technique of testing single variance components with generalized fixed-level tests - in situations where nuisance parameters make exact testing impossible - to the more general case of testing hypotheses on linear forms of variance components. An extension of the definition of a generalized test variable leads to a generalized fixed-level test for arbitrary linear hypotheses on variance components in balanced mixed linear models of the ANOVA type. For point null hypothes...

  13. Alternatives to F-Test in One Way ANOVA in case of heterogeneity of variances (a simulation study

    Directory of Open Access Journals (Sweden)

    Karl Moder

    2010-12-01

    Full Text Available Several articles deal with the effects of inhomogeneous variances in one-way analysis of variance (ANOVA). A very early investigation of this topic was done by Box (1954). He supposed that in balanced designs with moderate heterogeneity of variances, deviations of the empirical type I error rate (the α realized in experiments) from the nominal one (the predefined α) for H0 are small. Similar conclusions are drawn by Wellek (2003). For less moderate heterogeneity (e.g. σ1:σ2:... = 3:1:...), Moder (2007) showed that the empirical type I error rate is far beyond the nominal one, even with balanced designs. In unbalanced designs the difficulties get bigger. Several attempts were made to get over this problem. One proposal is to use a more stringent α level (e.g. 2.5% instead of 5%) (Keppel & Wickens, 2004). Another recommended remedy is to transform the original scores by square root, log, and other variance-reducing functions (Keppel & Wickens, 2004; Heiberger & Holland, 2004). Some authors suggest the use of rank-based alternatives to the F-test in analysis of variance (Vargha & Delaney, 1998). Only a few articles deal with two- or multi-factorial designs. There is some evidence that in a two- or multi-factorial design the type I error rate is approximately met if the number of factor levels tends to infinity for a certain factor while the number of levels is fixed for the other factors (Akritas & S., 2000; Bathke, 2004). The goal of this article is to find an appropriate location test in a one-way analysis of variance situation with inhomogeneous variances for balanced and unbalanced designs, based on a simulation study.
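    One widely used location test for exactly this setting is Welch's heteroscedasticity-robust one-way ANOVA; it is offered here as background, not necessarily the test the article ultimately recommends. A self-contained sketch with the 3:1 variance ratio mentioned above, on invented data:

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's heteroscedasticity-robust one-way ANOVA: (F*, df1, df2, p)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                    # precision weights
    mw = (w * m).sum() / w.sum()                 # weighted grand mean
    a = (w * (m - mw) ** 2).sum() / (k - 1)
    lam = ((1 - w / w.sum()) ** 2 / (n - 1)).sum()
    f_star = a / (1 + 2 * (k - 2) / (k**2 - 1) * lam)
    df2 = (k**2 - 1) / (3 * lam)
    p = stats.f.sf(f_star, k - 1, df2)
    return f_star, k - 1, df2, p

rng = np.random.default_rng(5)
# Unequal standard deviations in ratio 3:1:1, as in the scenario cited above.
g1 = rng.normal(0.0, 3.0, 20)
g2 = rng.normal(0.0, 1.0, 25)
g3 = rng.normal(0.0, 1.0, 30)
f_star, df1, df2, p = welch_anova(g1, g2, g3)
print(f"Welch F* = {f_star:.3f}, df = ({df1}, {df2:.1f}), p = {p:.3f}")
```

    Unlike the classical F-test, the weights n_i/s_i^2 downweight high-variance groups, which is what keeps the realized α near the nominal level under heterogeneity.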

  14. Genetically controlled environmental variance for sternopleural bristles in Drosophila melanogaster - an experimental test of a heterogeneous variance model

    DEFF Research Database (Denmark)

    Sørensen, Anders Christian; Kristensen, Torsten Nygård; Loeschcke, Volker;

    2007-01-01

    The objective of this study was to test the hypothesis that the environmental variance of sternopleural bristle number in Drosophila melanogaster is partly under genetic control. We used data from 20 inbred lines and 10 control lines to test this hypothesis. Two models were used: a standard......, indicating that the environmental variance is partly under genetic control. If this heterogeneous variance model also applies to livestock, it would be possible to select for animals with a higher uniformity of products across environmental regimes. The results are also of interest for evolutionary biology...

  15. Algorithm of Text Vector Feature Mining Based on Multi-Factor Analysis of Variance

    Institute of Scientific and Technical Information of China (English)

    谭海中; 何波

    2015-01-01

    Text feature-vector mining is applied in the field of information resource organization and management and has great application value in data mining; traditional feature-mining algorithms for text vectors use the K-means algorithm, and their accuracy is poor. A new feature-mining algorithm for text vectors based on multi-factor analysis of variance is proposed. Multi-factor analysis of variance is used to obtain mining rules from a variety of corpora. Combined with an ant colony algorithm, a transfer rule based on ant colony fitness probability is trained, so that the population evolution yields the maximum probability of the effective features in the most recent data sets. The initial K-means cluster centers are then selected by an optimized partition: the sample data are first partitioned, and the initial cluster centers are determined from the distribution characteristics of the samples, which improves the performance of text feature mining. Simulation results show that the algorithm improves the clustering of text feature vectors and thereby the performance of feature mining: the data features have higher recall and detection rates with less time consumed, giving the algorithm great value in applications such as data mining.

  16. Physicochemical factors affecting the spatial variance of monomethylmercury in artificial reservoirs.

    Science.gov (United States)

    Noh, Seam; Kim, Chan-Kook; Lee, Jong-Hyeon; Kim, Younghee; Choi, Kyunghee; Han, Seunghee

    2016-01-01

    The aim of this study was to identify how hydrologic factors (e.g., rainfall, maximum depth, reservoir and catchment area, and water residence time) and water chemistry factors (e.g., conductivity, pH, suspended particulate matter, chlorophyll-a, dissolved organic carbon, and sulfate) interact to affect the spatial variance in monomethylmercury (MMHg) concentration in nine artificial reservoirs. We hypothesized that the MMHg concentration of reservoir water would be higher in eutrophic than in oligotrophic reservoirs because increased dissolved organic matter and sulfate in eutrophic reservoirs can promote in situ production of MMHg. Multiple tools, including Pearson correlation, a self-organizing map, and principal component analysis, were applied in the statistical modeling of Hg species. The results showed that rainfall amount and hydraulic residence time best explained the variance of dissolved Hg and dissolved MMHg in reservoir water. High precipitation events and residence time may mobilize Hg and MMHg in the catchment and reservoir sediment, respectively. On the contrary, algal biomass was a key predictor of the variance of the percentage fraction of unfiltered MMHg over unfiltered Hg (%MMHg). The creation of suboxic conditions and the supply of sulfate subsequent to the algal decomposition seemed to support enhanced %MMHg in the bloom reservoirs. Thus, the nutrient supply should be carefully managed to limit increases in the %MMHg/Hg of temperate reservoirs. PMID:26552526

  17. Implementation of background scattering variance reduction on the RapidNano particle scanner

    OpenAIRE

    van der Walle, P.; Hannemann, S.; Eijk, D.; Mulckhuyse, W.F.W.; Donck, J.C.J. van der

    2014-01-01

    The background in simple dark field particle inspection shows a high scatter variance which cannot be distinguished from signals by small particles. According to our models, illumination from different azimuths can reduce the background variance. A multi-azimuth illumination has been successfully integrated on the Rapid Nano particle scanner. This illumination method reduces the variance of the background scattering on substrate roughness. It allows for a lower setting of the detection thresh...

  18. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can be used only under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach is not...

  19. Models of Postural Control: Shared Variance in Joint and COM Motions.

    Science.gov (United States)

    Kilby, Melissa C; Molenaar, Peter C M; Newell, Karl M

    2015-01-01

    This paper investigated the organization of the postural control system in human upright stance. To this aim, the shared variance between joint and 3D total body center of mass (COM) motions was analyzed using multivariate canonical correlation analysis (CCA). The CCA was performed as a function of established models of postural control that varied in their joint degrees of freedom (DOF), namely, an inverted pendulum ankle model (2DOF), ankle-hip model (4DOF), ankle-knee-hip model (5DOF), and ankle-knee-hip-neck model (7DOF). Healthy young adults performed various postural tasks (two-leg and one-leg quiet stances, voluntary AP and ML sway) on foam and rigid surfaces of support. Based on the CCA model selection procedures, the amount of shared variance between joint and 3D COM motions, and the cross-loading patterns, we provide direct evidence of the contribution of multi-DOF postural control mechanisms to human balance. The direct model fitting of CCA showed that incrementing the DOFs in the model through to 7DOF was associated with progressively enhanced shared variance with COM motion. In the 7DOF model, the first canonical function revealed more active involvement of all joints during the more challenging one-leg stances and dynamic posture tasks. Furthermore, the shared variance was enhanced during the dynamic posture conditions, consistent with a reduction of dimension. This set of outcomes shows directly the degeneracy of multivariate joint regulation in postural control that is influenced by stance and surface-of-support conditions. PMID:25973896

  20. Variance in population firing rate as a measure of slow time-scale correlation

    Directory of Open Access Journals (Sweden)

    Adam C. Snyder

    2013-12-01

    Full Text Available Correlated variability in the spiking responses of pairs of neurons, also known as spike count correlation, is a key indicator of functional connectivity and a critical factor in population coding. Underscoring the importance of correlation as a measure for cognitive neuroscience research is the observation that spike count correlations are not fixed, but are rather modulated by perceptual and cognitive context. Yet while this context fluctuates from moment to moment, correlation must be calculated over multiple trials. This property undermines its utility as a dependent measure for investigations of cognitive processes which fluctuate on a trial-to-trial basis, such as selective attention. A measure of functional connectivity that can be assayed on a moment-to-moment basis is needed to investigate the single-trial dynamics of populations of spiking neurons. Here, we introduce the measure of population variance in normalized firing rate for this goal. We show using mathematical analysis, computer simulations and in vivo data how population variance in normalized firing rate is inversely related to the latent correlation in the population, and how this measure can be used to reliably classify trials from different typical correlation conditions, even when firing rate is held constant. We discuss the potential advantages for using population variance in normalized firing rate as a dependent measure for both basic and applied neuroscience research.
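The inverse relation between single-trial population variance of normalized rates and latent correlation can be illustrated with a simple shared-factor model (an assumption made here for illustration; the paper's own mathematical analysis and simulations may differ in detail):

```python
import random
from statistics import mean, pvariance

def population_variance(rho, n_neurons=200, n_trials=500, seed=7):
    """Average across trials of the across-neuron variance of normalized
    (z-scored) rates, with pairwise correlation rho induced by a shared
    factor: z_i = sqrt(rho) * c + sqrt(1 - rho) * e_i."""
    rng = random.Random(seed)
    trial_vars = []
    for _ in range(n_trials):
        c = rng.gauss(0.0, 1.0)                    # shared fluctuation
        z = [rho ** 0.5 * c + (1.0 - rho) ** 0.5 * rng.gauss(0.0, 1.0)
             for _ in range(n_neurons)]
        trial_vars.append(pvariance(z))            # variance across neurons
    return mean(trial_vars)

# Higher latent correlation -> lower single-trial population variance:
# the shared term shifts all neurons together and is removed with the
# across-neuron mean, leaving a variance of roughly (1 - rho).
low_corr = population_variance(0.0)    # ~ 1.0
high_corr = population_variance(0.6)   # ~ 0.4
```

Because each trial yields its own population-variance value, the quantity is available on a moment-to-moment basis, unlike spike count correlation, which must be pooled over trials.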

  1. Selection for uniformity in livestock by exploiting genetic heterogeneity of residual variance

    Directory of Open Access Journals (Sweden)

    Hill William G

    2008-01-01

    Full Text Available Abstract In some situations, it is worthwhile to change not only the mean, but also the variability of traits by selection. Genetic variation in residual variance may be utilised to improve uniformity in livestock populations by selection. The objective was to investigate the effects of genetic parameters, breeding goal, number of progeny per sire and breeding scheme on selection responses in mean and variance when applying index selection. Genetic parameters were obtained from the literature. Economic values for the mean and variance were derived for some standard non-linear profit equations, e.g. for traits with an intermediate optimum. The economic value of variance was negative in most situations, indicating that selection for reduced variance increases profit. Predicted responses in residual variance after one generation of selection were large; in some cases, when the number of progeny per sire was at least 50, they exceeded 10% of the current residual variance. Progeny-testing schemes were more efficient than sib-testing schemes in decreasing residual variance. With optimum traits, selection pressure shifts gradually from the mean to the variance when approaching the optimum. Genetic improvement of uniformity is particularly interesting for traits where the current population mean is near an intermediate optimum.

  2. Mean and variance evolutions of the hot and cold temperatures in Europe

    Energy Technology Data Exchange (ETDEWEB)

    Parey, Sylvie [EDF/R and D, Chatou Cedex (France); Dacunha-Castelle, D. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); Hoang, T.T.H. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); EDF/R and D, Chatou Cedex (France)

    2010-02-15

    In this paper, we examine the trends of temperature series in Europe, for the mean as well as for the variance in hot and cold seasons. To do so, we use series as long and homogeneous as possible, provided by the European Climate Assessment and Dataset project for different locations in Europe, as well as the European ENSEMBLES project gridded dataset and the ERA40 reanalysis. We provide a definition of trends that we keep as intrinsic as possible and apply non-parametric statistical methods to analyse them. The results show a clear link between trends in the mean and variance of the whole series of hot or cold temperatures: in general, variance increases when the absolute value of temperature increases, i.e. with increasing summer temperature and decreasing winter temperature. This link is reinforced in locations where the winter and summer climate is more variable. In very cold or very warm climates, the variability is lower and the link between the trends is weaker. We performed the same analysis on the outputs of six climate models proposed by European teams for the 1961-2000 period (1950-2000 for one model), available through the PCMDI portal for the IPCC Fourth Assessment climate model simulations. The models generally perform poorly and have difficulty capturing the relation between the two trends, especially in summer. (orig.)

  3. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets
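Stripping away the MIP extensions described above (transaction costs, wheeling administration, integer constraints), the Markowitz core of such a model reduces to evaluating w'Σw along a range of return targets. A two-market sketch with invented numbers:

```python
def portfolio_variance(w, cov):
    """w' * Sigma * w for weight vector w and covariance matrix cov."""
    n = len(w)
    return sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n))

def two_market_frontier(r, cov, targets):
    """With two markets, the fully invested weights meeting a target
    expected return t are unique: w1 = (t - r2) / (r1 - r2). Returns
    (target, variance) pairs tracing the frontier."""
    out = []
    for t in targets:
        w1 = (t - r[1]) / (r[0] - r[1])
        out.append((t, portfolio_variance([w1, 1.0 - w1], cov)))
    return out

# Invented example: market 1 (10% return, 20% vol), market 2 (6%, 10%),
# correlation 0.3 between the geographically separated markets.
cov = [[0.04, 0.006], [0.006, 0.01]]
frontier = two_market_frontier([0.10, 0.06], cov, [0.06, 0.08, 0.10])
```

With continuous weights this frontier is smooth and concave; the paper's point is that adding the integer and transaction-cost constraints destroys exactly that property.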

  4. Deflation as a Method of Variance Reduction for Estimating the Trace of a Matrix Inverse

    CERN Document Server

    Gambhir, Arjun Singh; Orginos, Kostas

    2016-01-01

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can b...
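The baseline estimator and the deflation idea can be sketched as follows. The matrix, probe count, and deflation rank are illustrative only; the paper concerns large sparse matrices where A⁻¹ is applied through linear solves rather than formed explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small symmetric positive-definite test matrix (illustrative only).
n = 10
M = rng.standard_normal((n, n))
A = M @ M.T + 10.0 * np.eye(n)
Ainv = np.linalg.inv(A)              # formed explicitly only for this demo
true_trace = np.trace(Ainv)

def hutchinson(apply_inv, n, n_samples, rng):
    """Monte Carlo trace estimate (1/N) * sum_z z' A^-1 z, Rademacher z."""
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ apply_inv(z)
    return total / n_samples

# Deflation: handle the k smallest eigenpairs exactly (sum of 1/lambda)
# and estimate only the remainder, projecting the probes off the
# deflated subspace. Note the paper's caveat: for Hermitian matrices
# this may in fact increase the variance.
k = 3
lam, V = np.linalg.eigh(A)
Vk = V[:, :k]
exact_part = float(np.sum(1.0 / lam[:k]))

est_plain = hutchinson(lambda z: Ainv @ z, n, 2000, rng)
est_deflated = exact_part + hutchinson(
    lambda z: Ainv @ (z - Vk @ (Vk.T @ z)), n, 2000, rng)
```

Both estimators are unbiased for tr(A⁻¹); deflation changes only the per-sample variance of the stochastic part.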

  5. View-angle-dependent AIRS Cloudiness and Radiance Variance: Analysis and Interpretation

    Science.gov (United States)

    Gong, Jie; Wu, Dong L.

    2013-01-01

    Upper tropospheric clouds play an important role in the global energy budget and hydrological cycle. Significant view-angle asymmetry has been observed in upper-level tropical clouds derived from eight years of Atmospheric Infrared Sounder (AIRS) 15 um radiances. Here, we find that the asymmetry also exists in the extra-tropics. It is larger during day than during night, more prominent near elevated terrain, and closely associated with deep convection and wind shear. The cloud radiance variance, a proxy for cloud inhomogeneity, shows asymmetry characteristics consistent with those in the AIRS cloudiness. The leading causes of the view-dependent cloudiness asymmetry are the local time difference and small-scale organized cloud structures. The local time difference (1-1.5 hr) of upper-level (UL) clouds between the two AIRS outermost views can create part of the observed asymmetry. On the other hand, small-scale tilted and banded structures of the UL clouds can induce about half of the observed view-angle-dependent differences in the AIRS cloud radiances and their variances. This estimate is inferred from an analogous study using Microwave Humidity Sounder (MHS) radiances observed during the period when there were simultaneous measurements at two different view angles from the NOAA-18 and -19 satellites. The existence of tilted cloud structures and asymmetric 15 um and 6.7 um cloud radiances implies that cloud statistics are view-angle dependent and should be taken into account in radiative transfer calculations, measurement uncertainty evaluations and cloud climatology investigations. In addition, the momentum forcing in the upper troposphere from tilted clouds is also likely asymmetric, which can affect atmospheric circulation anisotropically.

  6. 40 CFR 142.304 - For which of the regulatory requirements is a small system variance available?

    Science.gov (United States)

    2010-07-01

    ... requirements is a small system variance available? 142.304 Section 142.304 Protection of Environment... REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.304 For which of the regulatory requirements is a small system variance available? (a) A small system variance is not available under...

  7. 46 CFR 69.73 - Variance from the prescribed method of measurement.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Variance from the prescribed method of measurement. 69.73 Section 69.73 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DOCUMENTATION AND MEASUREMENT OF VESSELS MEASUREMENT OF VESSELS Convention Measurement System § 69.73 Variance from...

  8. Efficient numerical solution of the variance-optimal hedging problem in geometric Lévy models

    OpenAIRE

    Vesenmayer, Bernhard

    2009-01-01

    The aim of this thesis is to develop a numerical algorithm for the computation of the variance-optimal hedging error for a European option. As by-products, the option price and the trading strategy are computed as well. The analysis is restricted to the martingale case and a one-dimensional exponential Lévy process. To this end the hedging error is represented as the solution of a system of two integro-differential equations. Due to the resemblance of those to the Kolmogorov backward equation, th...

  9. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    Science.gov (United States)

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  10. Importance of the macroeconomic variables for variance prediction: A GARCH-MIDAS approach

    DEFF Research Database (Denmark)

    Asgharian, Hossein; Hou, Ai Jun; Javed, Farrukh

    2013-01-01

    This paper aims to examine the role of macroeconomic variables in forecasting the return volatility of the US stock market. We apply the GARCH-MIDAS (Mixed Data Sampling) model to examine whether information contained in macroeconomic variables can help to predict short-term and long-term components of the return variance. We investigate several alternative models and use a large group of economic variables. A principal component analysis is used to incorporate the information contained in different variables. Our results show that including low-frequency macroeconomic information...

  11. Diffusion tensor imaging-derived measures of fractional anisotropy across the pyramidal tract are influenced by the cerebral hemisphere but not by gender in young healthy volunteers: a split-plot factorial analysis of variance

    Institute of Scientific and Technical Information of China (English)

    Ernesto Roldan-Valadez; Edgar Rios-Piedra; Rafael Favila; Sarael Alcauter; Camilo Rios

    2012-01-01

    Background Diffusion tensor imaging (DTI) permits quantitative examination within the pyramidal tract (PT) by measuring fractional anisotropy (FA). To the best of our knowledge, the variability of FA measures along the PT remains unexplored. A clear understanding of these reference values would help radiologists and neuroscientists to understand normality as well as to detect early pathophysiologic changes of brain diseases. The aim of our study was to calculate the variability of the FA at eleven anatomical landmarks along the PT and the influence of gender and cerebral hemisphere on these measurements in a sample of young, healthy volunteers. Methods A retrospective, cross-sectional study was performed in twenty-three right-handed healthy volunteers who underwent magnetic resonance evaluation of the brain. Mean FA values from eleven anatomical landmarks across the PT (at centrum semiovale, corona radiata, posterior limb of internal capsule (PLIC), mesencephalon, pons, and medulla oblongata) were evaluated using split-plot factorial analysis of variance (ANOVA). Results We found a significant interaction effect between anatomical landmark and cerebral hemisphere (F(10,32) = 4.516, P = 0.001; Wilks' lambda 0.415, with a large effect size (partial η² = 0.585)). The influence of gender and age was non-significant. On average, the midbrain and PLIC FA values were higher than pons and medulla oblongata values; centrum semiovale measurements were higher than those of the corona radiata but lower than PLIC. Conclusions There is a normal variability of FA measurements along the PT in healthy individuals, which is influenced by region-of-interest location (anatomical landmarks) and cerebral hemisphere. FA measurements should be reported comparing same-side and same-landmark PT to help avoid comparisons with the contralateral PT; ideally, normative values should exist for a clinically significant age group. A standardized package of selected DTI processing tools would allow DTI processing to be...

  12. 75 FR 6220 - Information Collection Requirements for the Variance Regulations; Submission for Office of...

    Science.gov (United States)

    2010-02-08

    ... Paperwork Reduction Act of 1995 (44 U.S.C. 3506 et seq.) and Secretary of Labor's Order No. 5-2007 (72 FR... Occupational Safety and Health Administration Information Collection Requirements for the Variance Regulations..., experimental, permanent, and national defense variances. DATES: Comments must be submitted...

  13. Trends in the Transitory Variance of Male Earnings: Methods and Evidence

    Science.gov (United States)

    Moffitt, Robert A.; Gottschalk, Peter

    2012-01-01

    We estimate the trend in the transitory variance of male earnings in the United States using the Michigan Panel Study of Income Dynamics from 1970 to 2004. Using an error components model and simpler but only approximate methods, we find that the transitory variance started to increase in the early 1970s, continued to increase through the…

  14. Impact of time-inhomogeneous jumps and leverage type effects on returns and realised variances

    DEFF Research Database (Denmark)

    Veraart, Almut

    This paper studies the effect of time-inhomogeneous jumps and leverage type effects on realised variance calculations when the logarithmic asset price is given by a Lévy-driven stochastic volatility model. In such a model, the realised variance is an inconsistent estimator of the integrated...... inhomogeneous ordinary differential equations....

  15. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik;

    2014-01-01

    Stochastic linear systems arise in a large number of control applications. This paper presents a mean-variance criterion for economic model predictive control (EMPC) of such systems. The system operating cost and its variance are approximated based on a Monte-Carlo approach. Using convex relaxation...

  16. The stability of spectroscopic instruments: A unified Allan variance computation scheme

    CERN Document Server

    Ossenkopf, Volker

    2007-01-01

    The Allan variance is a standard technique to characterise the stability of spectroscopic instruments used in astronomical observations. The period for switching between source and reference measurements is often derived from the Allan minimum time. We propose a new approach for the computation of the Allan variance of spectrometer data, combining the advantages of the two existing methods into a unified scheme. Using the Allan variance spectrum, we derive the optimum strategy for symmetric observing schemes, minimising the total uncertainty of the data resulting from radiometric and drift noise. The unified Allan variance computation scheme is designed to trace total-power and spectroscopic fluctuations within the same framework. The method includes an explicit error estimate both for the individual Allan variance spectra and for the derived stability time. A new definition of the instrument stability time allows one to characterise the instrument even in the case of a fluctuation spectrum shallower than 1/f, as mea...
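The basic two-sample (Allan) variance underlying such schemes can be computed directly; this is a generic sketch, not the unified total-power/spectroscopic scheme proposed in the paper.

```python
import random
from statistics import mean

def allan_variance(x, m):
    """Two-sample (Allan) variance at averaging length m: half the mean
    squared difference of the means of successive, non-overlapping
    blocks of m samples."""
    n_blocks = len(x) // m
    block_means = [mean(x[i * m:(i + 1) * m]) for i in range(n_blocks)]
    return 0.5 * mean((b - a) ** 2
                      for a, b in zip(block_means, block_means[1:]))

# White-noise demo: for pure radiometric noise the Allan variance falls
# roughly as 1/m; drift noise (absent here) makes it rise again at long
# m, and the minimum fixes the source/reference switching period.
rng = random.Random(42)
white = [rng.gauss(0.0, 1.0) for _ in range(4096)]
```

Plotting allan_variance(white, m) over a range of m and locating the minimum is the standard way the Allan minimum time mentioned above is read off.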

  17. On the importance of sampling variance to investigations of temporal variation in animal population size

    Science.gov (United States)

    Link, W.A.; Nichols, J.D.

    1994-01-01

    Our purpose here is to emphasize the need to properly deal with sampling variance when studying population variability and to present a means of doing so. We present an estimator for temporal variance of population size for the general case in which there are both sampling variances and covariances associated with estimates of population size. We illustrate the estimation approach with a series of population size estimates for black-capped chickadees (Parus atricapillus) wintering in a Connecticut study area and with a series of population size estimates for breeding populations of ducks in southwestern Manitoba.
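In the simplest case of independent surveys with known sampling variances (the paper's estimator also accommodates sampling covariances between estimates), the correction amounts to subtracting the average sampling variance from the raw variance of the estimates; the function name and numbers below are invented for illustration.

```python
from statistics import mean, variance

def temporal_process_variance(estimates, sampling_vars):
    """Variance of the population-size estimates minus the mean sampling
    variance, truncated at zero (sampling noise alone can exceed the
    observed spread). Simplest case: independent surveys."""
    return max(0.0, variance(estimates) - mean(sampling_vars))

# Four annual estimates, each with sampling variance 25: the raw
# variance 500/3 overstates true temporal variation by that 25.
est = temporal_process_variance([100, 120, 90, 110], [25, 25, 25, 25])
```

Ignoring the subtraction, i.e. treating the raw variance of the estimates as the temporal variance, is exactly the bias the paper warns against.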

  18. The Correct Kriging Variance Estimated by Bootstrapping

    OpenAIRE

    den Hertog, D.; Kleijnen, J.P.C.; Siem, A.Y.D.

    2004-01-01

    The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrapping to estimate the Kriging variance. The new method is tested on several artificial examples and a real-life case study. These results demonstrate that the classic formula underestimates the true Kri...

  19. Biclustering with heterogeneous variance.

    Science.gov (United States)

    Chen, Guanhua; Sullivan, Patrick F; Kosorok, Michael R

    2013-07-23

    In cancer research, as in all of medicine, it is important to classify patients into etiologically and therapeutically relevant subtypes to improve diagnosis and treatment. One way to do this is to use clustering methods to find subgroups of homogeneous individuals based on genetic profiles together with heuristic clinical analysis. A notable drawback of existing clustering methods is that they ignore the possibility that the variance of gene expression profile measurements can be heterogeneous across subgroups, and methods that do not consider heterogeneity of variance can lead to inaccurate subgroup prediction. Research has shown that hypervariability is a common feature among cancer subtypes. In this paper, we present a statistical approach that can capture both mean and variance structure in genetic data. We demonstrate the strength of our method in both synthetic data and in two cancer data sets. In particular, our method confirms the hypervariability of methylation level in cancer patients, and it detects clearer subgroup patterns in lung cancer data. PMID:23836637

  20. Analytical Solutions of the Balance Equation for the Scalar Variance in One-Dimensional Turbulent Flows under Stationary Conditions

    Directory of Open Access Journals (Sweden)

    Andrea Amicarelli

    2015-01-01

    Full Text Available This study presents 1D analytical solutions for the ensemble variance of reactive scalars in one-dimensional turbulent flows, in the case of stationary conditions, homogeneous mean scalar gradient and turbulence, Dirichlet boundary conditions, and first-order kinetics reactions. Simplified solutions and a sensitivity analysis are also discussed. These solutions represent both analytical tools for preliminary estimations of the concentration variance and upwind spatial reconstruction schemes for CFD (Computational Fluid Dynamics) RANS (Reynolds-Averaged Navier-Stokes) codes, which estimate the turbulent fluctuations of reactive scalars.

  1. Fractal fluctuations and quantum-like chaos in the brain by analysis of variability of brain waves: A new method based on a fractal variance function and random matrix theory: A link with El Naschie fractal Cantorian space-time and V. Weiss and H. Weiss golden ratio in brain

    International Nuclear Information System (INIS)

    We develop a new method for analysis of fundamental brain waves as recorded by the EEG. To this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given. We also discuss the link of such formulation with H. Weiss and V. Weiss golden ratio found in the brain, and with El Naschie fractal Cantorian space-time theory.

  2. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Directory of Open Access Journals (Sweden)

    Frank M. You

    2016-04-01

    Full Text Available The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).
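The central step, pooling the within-genotype variance of the replicated controls as an error variance and subtracting it from the total variance of the unreplicated lines, can be sketched as follows; the function and data names are invented, and this simplified version omits the soil-heterogeneity adjustment and the Delta-method sampling variances.

```python
from statistics import variance

def mad2_heritability(line_values, control_reps):
    """Broad-sense heritability for unreplicated test lines: pool the
    within-genotype variance of replicated controls as the error
    variance, subtract it from the total variance of the line values
    (truncated at zero), and divide by the total variance."""
    dfs = [len(c) - 1 for c in control_reps]
    pooled_error = (sum(variance(c) * df for c, df in zip(control_reps, dfs))
                    / sum(dfs))
    total = variance(line_values)
    return max(0.0, total - pooled_error) / total

# Two replicated controls (3 reps each) and five unreplicated lines.
h2 = mad2_heritability([5, 9, 13, 17, 21], [[10, 12, 11], [20, 22, 21]])
```

Here the total line variance is 40 and the pooled error variance is 1, giving h² = 39/40; the reliability of such an estimate grows with the true heritability, as the simulations above indicate.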

  3. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Institute of Scientific and Technical Information of China (English)

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).

  4. On the Network Topology of Variance Decompositions: Measuring the Connectedness of Financial Firms

    OpenAIRE

    Francis X. Diebold; Yılmaz, Kamil

    2011-01-01

The authors propose several connectedness measures built from pieces of variance decompositions, and they argue that they provide natural and insightful measures of connectedness among financial asset returns and volatilities. The authors also show that variance decompositions define weighted, directed networks, so that their connectedness measures are intimately related to key measures of connectedness used in the network literature. Building on these insights, the authors track both average...

  5. Estimation of the variance of sample means based on nonstationary spatial data with varying expected values

    OpenAIRE

    Ekström, Magnus; Sjöstedt-deLuna, Sara

    2001-01-01

Subsampling and block resampling methods have been suggested in the literature to nonparametrically estimate the variance of some statistic computed from spatial data. Usually stationary data are required. However, in empirical applications, the assumption of stationarity can often be rejected. This paper proposes nonparametric methods to estimate the variance of sample means based on nonstationary spatial data using subsampling. It is assumed that data is observed on ...

  6. Assessment of ulnar variance: a radiological investigation in a Dutch population

    International Nuclear Information System (INIS)

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  7. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  8. Cross-rater agreement on common and specific variance of personality scales and items

    OpenAIRE

    Mõttus, René; McCrae, Robert R.; Allik, Jüri; Realo, Anu

    2014-01-01

    Using the NEO Personality Inventory-3, we analyzed self/informant agreement on personality traits at three levels that were made statistically independent from each other: domains, facets, and individual items. Cross-rater correlations for the common variance in the five domains ranged from 0.36 to 0.65 (M = 0.49), whereas estimates for the specific variance of the 30 facets ranged from 0.40 to 0.73 (M = 0.56). Cross-rater correlations of residual variance of individual items ranged from -0.1...

  9. Variance partitioning of stream diatom, fish, and invertebrate indicators of biological condition

    Science.gov (United States)

    Zuellig, Robert E.; Carlisle, Daren M.; Meador, Michael R.; Potapova, Marina

    2012-01-01

    Stream indicators used to make assessments of biological condition are influenced by many possible sources of variability. To examine this issue, we used multiple-year and multiple-reach diatom, fish, and invertebrate data collected from 20 least-disturbed and 46 developed stream segments between 1993 and 2004 as part of the US Geological Survey National Water Quality Assessment Program. We used a variance-component model to summarize the relative and absolute magnitude of 4 variance components (among-site, among-year, site × year interaction, and residual) in indicator values (observed/expected ratio [O/E] and regional multimetric indices [MMI]) among assemblages and between basin types (least-disturbed and developed). We used multiple-reach samples to evaluate discordance in site assessments of biological condition caused by sampling variability. Overall, patterns in variance partitioning were similar among assemblages and basin types with one exception. Among-site variance dominated the relative contribution to the total variance (64–80% of total variance), residual variance (sampling variance) accounted for more variability (8–26%) than interaction variance (5–12%), and among-year variance was always negligible (0–0.2%). The exception to this general pattern was for invertebrates at least-disturbed sites where variability in O/E indicators was partitioned between among-site and residual (sampling) variance (among-site  =  36%, residual  =  64%). This pattern was not observed for fish and diatom indicators (O/E and regional MMI). We suspect that unexplained sampling variability is what largely remained after the invertebrate indicators (O/E predictive models) had accounted for environmental differences among least-disturbed sites. The influence of sampling variability on discordance of within-site assessments was assemblage or basin-type specific. Discordance among assessments was nearly 2× greater in developed basins (29–31%) than in least

  10. Variance of size-age curves: Bootstrapping with autocorrelation

    Science.gov (United States)

    Bullock, S.H.; Turner, R.M.; Hastings, J.R.; Escoto-Rodriguez, M.; Lopez, Z.R.A.; Rodrigues-Navarro, J. L.

    2004-01-01

    We modify a method of estimating size-age relations from a minimal set of individual increment data, recognizing that growth depends not only on size but also varies greatly among individuals and is consistent within an individual for several to many time intervals. The method is exemplified with data from a long-lived desert plant and a range of autocorrelation factors encompassing field-measured values. The results suggest that age estimates based on size and growth rates with only moderate autocorrelation are subject to large variation, which raises major problems for prediction or hindcasting for ecological analysis or management.

  11. Verification of the history-score moment equations for weight-window variance reduction

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Clell J [Los Alamos National Laboratory; Sood, Avneet [Los Alamos National Laboratory; Booth, Thomas E [Los Alamos National Laboratory; Shultis, J. Kenneth [KANSAS STATE UNIV.

    2010-12-06

The history-score moment equations that describe the moments of a Monte Carlo score distribution have been extended to weight-window variance reduction. The resulting equations have been solved deterministically to calculate the population variance of the Monte Carlo score distribution for a single tally. Results for one- and two-dimensional one-group problems are presented that predict the population variances to less than 1% deviation from the Monte Carlo results for one-dimensional problems and between 1 and 2% for two-dimensional problems.

  12. Ambiguity Aversion and Variance Premium

    OpenAIRE

    Jianjun Miao; Bin Wei; Hao Zhou

    2012-01-01

This paper offers an ambiguity-based interpretation of variance premium - the difference between risk-neutral and objective expectations of market return variance - as a compounding effect of both belief distortion and variance differential regarding the uncertain economic regimes. Our approach endogenously generates variance premium without imposing exogenous stochastic volatility or jumps in consumption process. Such a framework can reasonably match the mean variance premium as well a...

  13. Some new construction methods of variance balanced block designs with repeated blocks

    OpenAIRE

    Ceranka, Bronisław; Graczyk, Małgorzata

    2014-01-01

Some new construction methods for variance balanced block designs with repeated blocks are given. They are based on a specialized product of incidence matrices of balanced incomplete block designs.
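The variance-balance property these constructions target can be checked numerically: a block design is variance balanced when its information matrix C is completely symmetric. The sketch below verifies this for a classical balanced incomplete block design (the Fano plane), the kind of ingredient such constructions start from; the specialized products themselves are not reproduced here.

```python
import numpy as np

# Fano-plane BIBD: v=7 treatments, b=7 blocks, r=3, k=3, lambda=1.
blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
          (1, 4, 6), (2, 3, 6), (2, 4, 5)]
v, k = 7, 3
N = np.zeros((v, len(blocks)))          # treatment-by-block incidence matrix
for j, blk in enumerate(blocks):
    for t in blk:
        N[t, j] = 1.0

r = N.sum(axis=1)                       # replication counts (all equal to 3)
C = np.diag(r) - (N @ N.T) / k          # information matrix for equal block size k

# Variance balance <=> complete symmetry of C:
# all diagonal entries equal, all off-diagonal entries equal.
diag = np.diag(C)
off = C[~np.eye(v, dtype=bool)]
print(np.allclose(diag, diag[0]), np.allclose(off, off[0]))
```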

  14. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model to the original series. It is found by simulation that the positive size distortion present in these tests is a function of the kurtosis of the GARCH process. Adjusting the size by numerical methods is considered. The possibility of testing the constancy of the unconditional variance before fitting a GARCH model to the data is discussed. The power of the ensuing test is vastly superior to that of the misspecification test and the size distortion minimal. The test has reasonable power already in very short time series. It would thus serve as a test of constant variance in conditional mean...

  15. Compensatory variances of drug-induced hepatitis B virus YMDD mutations.

    Science.gov (United States)

    Cai, Ying; Wang, Ning; Wu, Xiaomei; Zheng, Kai; Li, Yan

    2016-01-01

Although drug-induced mutations of HBV have been documented, the evolutionary mechanism is still obscure. To reveal the molecular characteristics of HBV evolution under this special condition, we comprehensively investigated the molecular variation of 3432 wild-type sequences and 439 YMDD variants from HBV genotypes A, B, C and D, and evaluated the co-variant patterns and frequency distributions in the different YMDD mutation types and genotypes, using the naïve Bayes classification algorithm and the complete induction method based on comparative sequence analysis. The data showed different compensatory changes following rtM204I/V. Although occurrence of the YMDD mutation itself was not related to HBV genotype, the subsequent co-variant patterns were related to the YMDD variant types and HBV genotypes. From a hierarchical view, we distinguished historical mutations, drug-induced mutations, and compensatory variances, and displayed an inter-conditioned relationship of amino acid variances during multiple evolutionary processes. This study extends the understanding of the polymorphism and fitness of viral proteins. PMID:27588233

  16. Variance Component Quantitative Trait Locus Analysis for Body Weight Traits in Purebred Korean Native Chicken.

    Science.gov (United States)

    Cahyadi, Muhammad; Park, Hee-Bok; Seo, Dong-Won; Jin, Shil; Choi, Nuri; Heo, Kang-Nyeong; Kang, Bo-Seok; Jo, Cheorun; Lee, Jun-Heon

    2016-01-01

Quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length across 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth from 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p-value = 0.0001) and on GGA4 for growth from 6 to 8 weeks (LOD = 2.88, nominal p-value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p-value = 0.0007) and a suggestive QTL for 8 weeks (LOD = 1.96, nominal p-value = 0.0027) were detected on GGA4; QTLs were also detected for two different body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving the body weight traits in native chicken breeds, especially the Asian native chicken breeds. PMID:26732327

  17. Intelligence and Language Proficiency as Sources of Variance in Self-Reported Affective Variables.

    Science.gov (United States)

    Oller, John W., Jr.; Perkins, Kyle

    1978-01-01

    Discusses three possible sources of nonrandom but extraneous variance in self-reported attitude data, and demonstrates that these data may be surreptitious measures of verbal intelligence and language proficiency. (Author/AM)

  18. On stabilizing the variance of dynamic functional brain connectivity time series

    OpenAIRE

    Thompson, William Hedley; Fransson, Peter

    2016-01-01

    Assessment of dynamic functional brain connectivity (dFC) based on fMRI data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transform which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is however unclear how well the stabilization of signal variance performed by the Fisher t...

  19. Exponential Smoothing for Inventory Control: Means and Variances of Lead-Time Demand

    OpenAIRE

    Ralph D. Snyder; Anne B. Koehler; Hyndman, Rob J.; J. Keith Ord

    2002-01-01

    Exponential smoothing is often used to forecast lead-time demand for inventory control. In this paper, formulae are provided for calculating means and variances of lead-time demand for a wide variety of exponential smoothing methods. A feature of many of the formulae is that variances, as well as the means, depend on trends and seasonal effects. Thus, these formulae provide the opportunity to implement methods that ensure that safety stocks adjust to changes in trend or changes in season.
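For the simplest member of the family, simple exponential smoothing viewed as a local-level state-space model, a lead-time demand variance formula of the kind the paper derives can be written down and checked by simulation. The closed form below is a standard textbook result assumed here, not quoted from the paper.

```python
import numpy as np

def ses_leadtime_var(alpha, h, sigma2=1.0):
    """Variance of h-period lead-time demand under simple exponential smoothing,
    using the local-level form: y_t = l_{t-1} + e_t,  l_t = l_{t-1} + alpha*e_t."""
    return sigma2 * (h + alpha * h * (h - 1) + alpha**2 * h * (h - 1) * (2 * h - 1) / 6)

# Monte Carlo check of the closed form, starting from a known level of zero.
rng = np.random.default_rng(0)
alpha, h, n = 0.3, 5, 200_000
eps = rng.normal(0.0, 1.0, (n, h))
level = np.zeros(n)
demand = np.zeros(n)
for j in range(h):
    demand += level + eps[:, j]          # accumulate one-step demand
    level = level + alpha * eps[:, j]    # update the smoothed level
sim_var = demand.var()
print(round(sim_var, 2), round(ses_leadtime_var(alpha, h), 2))
```

Note that for h = 1 the formula collapses to the one-step forecast variance sigma2, and the trend/seasonal dependence the abstract mentions only appears in the richer exponential smoothing variants.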

  20. Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.

    Science.gov (United States)

    He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne

    2016-04-01

Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator that can, however, be overly restricted in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more
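As a point of reference, the classical twin model that this paper extends yields a single age-independent ACE decomposition from MZ/DZ twin correlations (Falconer's formulas). The correlation values below are illustrative, not taken from the Finnish data set.

```python
def ace_from_twin_correlations(r_mz, r_dz):
    """Classical ACE decomposition from twin correlations (Falconer's formulas)."""
    a2 = 2.0 * (r_mz - r_dz)   # additive genetic share (narrow-sense heritability)
    c2 = 2.0 * r_dz - r_mz     # common (shared) environment share
    e2 = 1.0 - r_mz            # unique environment share
    return a2, c2, e2

# Illustrative correlations in the range often reported for BMI in adulthood.
a2, c2, e2 = ace_from_twin_correlations(r_mz=0.75, r_dz=0.40)
print(round(a2, 2), round(c2, 2), round(e2, 2))  # 0.7 0.05 0.25
```

The spline-based approach in the paper effectively lets a2, c2, and e2 become smooth functions of age rather than the constants computed here.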

  1. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    Science.gov (United States)

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

    The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces. PMID:26676106

  2. The Time-Dependent Mean and Variance of the Non-Stationary Markovian Infinite Server System

    Directory of Open Access Journals (Sweden)

    Peter M. Ellis

    2010-01-01

Problem statement: In many queuing situations the average arrival and service rates vary over time. In those situations a transient solution for the state probabilities and the mean and variance must be obtained. Approach: The mean and the variance of a particular infinite server model will be obtained using the state differential-difference equations and the factorial moment generating function. The average arrival and service rates will be taken to be dependent on time. The individual customer interarrival times and service times are assumed to be exponentially distributed. This is known as the Markovian system. Results: The mean and variance of the system will be established as solutions to two sequential linear ordinary differential equations. A comparison is also made to a previously known result for the corresponding system with a finite number of servers. Conclusion: Simple closed-form equations for the mean and variance of the system are presented.
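A numerical sketch of two such sequential linear ODEs, written here in a standard form for the Markovian infinite-server queue rather than copied from the paper, shows the variance tracking the mean, as expected for a system that starts empty and is therefore Poisson-distributed at every instant:

```python
import numpy as np

# Assumed standard ODEs for the M(t)/M(t)/infinity queue:
#   m'(t) = lam(t) - mu(t)*m(t)
#   v'(t) = lam(t) + mu(t)*m(t) - 2*mu(t)*v(t)
lam = lambda t: 10.0 + 5.0 * np.sin(t)   # time-varying average arrival rate
mu = lambda t: 1.0                        # per-customer service rate

dt, T = 1e-3, 20.0
m, v = 0.0, 0.0                           # system starts empty
for i in range(int(T / dt)):              # forward Euler integration
    t = i * dt
    dm = lam(t) - mu(t) * m
    dv = lam(t) + mu(t) * m - 2.0 * mu(t) * v
    m += dt * dm
    v += dt * dv

# Starting empty, the queue length is Poisson at every t, so variance = mean.
print(round(m, 3), round(v, 3))
```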

  3. 78 FR 30914 - Grand River Dam Authority Notice of Application for Temporary Variance of License and Soliciting...

    Science.gov (United States)

    2013-05-23

    ... Variance of License and Soliciting Comments, Motions To Intervene and Protests Take notice that the... inspection: a. Application Type: Temporary variance of license. b. Project No.: 1494-416. c. Date Filed... of Request: Grand River Dam Authority (GRDA) requests a temporary variance, for the year 2013,...

  4. The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles

    Science.gov (United States)

    Roberts, D. A.; Goldstein, M. L.

    2006-01-01

We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum to minimum power (approximately 3:1 up to approximately 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
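Variance directions of this kind are conventionally obtained by minimum variance analysis, an eigen-decomposition of the field's covariance matrix. A sketch with synthetic data (not solar wind measurements), where fluctuations transverse to the mean field are made much stronger than compressive ones, so the minimum variance direction should emerge along the mean field:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
b0 = np.array([4.0, 0.0, 0.0])                  # mean field along x

# Synthetic fluctuations: weak power along x, strong transverse power.
fluct = rng.normal(0.0, 1.0, (n, 3)) * np.array([0.2, 1.0, 2.0])
B = b0 + fluct

M = np.cov(B, rowvar=False)                     # 3x3 field covariance matrix
eigvals, eigvecs = np.linalg.eigh(M)            # eigenvalues in ascending order
min_var_dir = eigvecs[:, 0]                     # minimum variance eigenvector
anisotropy = eigvals[-1] / eigvals[0]           # maximum-to-minimum power ratio

print(round(anisotropy, 1), np.abs(np.round(min_var_dir, 3)))
```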

  5. Simultaneous Estimation of Additive and Mutational Genetic Variance in an Outbred Population of Drosophila serrata.

    Science.gov (United States)

    McGuigan, Katrina; Aguirre, J David; Blows, Mark W

    2015-11-01

    How new mutations contribute to genetic variation is a key question in biology. Although the evolutionary fate of an allele is largely determined by its heterozygous effect, most estimates of mutational variance and mutational effects derive from highly inbred lines, where new mutations are present in homozygous form. In an attempt to overcome this limitation, middle-class neighborhood (MCN) experiments have been used to assess the fitness effect of new mutations in heterozygous form. However, because MCN populations harbor substantial standing genetic variance, estimates of mutational variance have not typically been available from such experiments. Here we employ a modification of the animal model to analyze data from 22 generations of Drosophila serrata bred in an MCN design. Mutational heritability, measured for eight cuticular hydrocarbons, 10 wing-shape traits, and wing size in this outbred genetic background, ranged from 0.0006 to 0.006 (with one exception), a similar range to that reported from studies employing inbred lines. Simultaneously partitioning the additive and mutational variance in the same outbred population allowed us to quantitatively test the ability of mutation-selection balance models to explain the observed levels of additive and mutational genetic variance. The Gaussian allelic approximation and house-of-cards models, which assume real stabilizing selection on single traits, both overestimated the genetic variance maintained at equilibrium, but the house-of-cards model was a closer fit to the data. This analytical approach has the potential to be broadly applied, expanding our understanding of the dynamics of genetic variance in natural populations. PMID:26384357

  6. Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method

    Directory of Open Access Journals (Sweden)

    Younes Elahi

    2014-01-01

We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have examined the optimal MVC portfolio model, the linear weighted sum method (LWSM) has not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via the LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, this approach is investigated for investing in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using three objective functions helps investors to manage their portfolios better, minimizing risk and maximizing return. The main goal of this study is to modify the current models and simplify them by using the LWSM to obtain better results.
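The LWSM idea, collapsing the three objectives into one weighted score, can be sketched for a two-asset toy problem. The return scenarios and objective weights below are illustrative assumptions, not the paper's MATLAB implementation:

```python
import numpy as np

rng = np.random.default_rng(7)
r1 = rng.normal(0.08, 0.15, 10_000)      # risky asset return scenarios
r2 = rng.normal(0.03, 0.05, 10_000)      # safer asset return scenarios
w1, w2, w3, beta = 1.0, 2.0, 1.0, 0.95   # objective weights and CVaR level

def cvar(losses, beta):
    """Conditional value-at-risk: mean loss beyond the beta-quantile (VaR)."""
    var = np.quantile(losses, beta)
    return losses[losses >= var].mean()

# Linear weighted sum: minimize -w1*mean + w2*variance + w3*CVaR over x in [0,1].
best = None
for x in np.linspace(0.0, 1.0, 101):     # grid over allocation to asset 1
    port = x * r1 + (1.0 - x) * r2
    score = -w1 * port.mean() + w2 * port.var() + w3 * cvar(-port, beta)
    if best is None or score < best[0]:
        best = (score, x)

print(f"optimal weight in asset 1: {best[1]:.2f}")
```

Changing the weights w1..w3 traces out different compromise portfolios, which is the essential mechanism of the scalarization.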

  7. Statistical modelling of tropical cyclone tracks: a comparison of models for the variance of trajectories

    CERN Document Server

    Hall, T; Hall, Tim; Jewson, Stephen

    2005-01-01

    We describe results from the second stage of a project to build a statistical model for hurricane tracks. In the first stage we modelled the unconditional mean track. We now attempt to model the unconditional variance of fluctuations around the mean. The variance models we describe use a semi-parametric nearest neighbours approach in which the optimal averaging length-scale is estimated using a jack-knife out-of-sample fitting procedure. We test three different models. These models consider the variance structure of the deviations from the unconditional mean track to be isotropic, anisotropic but uncorrelated, and anisotropic and correlated, respectively. The results show that, of these models, the anisotropic correlated model gives the best predictions of the distribution of future positions of hurricanes.

  8. Detection of rheumatoid arthritis by evaluation of normalized variances of fluorescence time correlation functions

    Science.gov (United States)

    Dziekan, Thomas; Weissbach, Carmen; Voigt, Jan; Ebert, Bernd; MacDonald, Rainer; Bahner, Malte L.; Mahler, Marianne; Schirner, Michael; Berliner, Michael; Berliner, Birgitt; Osel, Jens; Osel, Ilka

    2011-07-01

    Fluorescence imaging using the dye indocyanine green as a contrast agent was investigated in a prospective clinical study for the detection of rheumatoid arthritis. Normalized variances of correlated time series of fluorescence intensities describing the bolus kinetics of the contrast agent in certain regions of interest were analyzed to differentiate healthy from inflamed finger joints. These values are determined using a robust, parameter-free algorithm. We found that the normalized variance of correlation functions improves the differentiation between healthy joints of volunteers and joints with rheumatoid arthritis of patients by about 10% compared to, e.g., ratios of areas under the curves of raw data.

  9. 40 CFR 142.309 - What are the public meeting requirements associated with the proposal of a small system variance?

    Science.gov (United States)

    2010-07-01

    ... requirements associated with the proposal of a small system variance? 142.309 Section 142.309 Protection of... WATER REGULATIONS IMPLEMENTATION Variances for Small System Public Participation § 142.309 What are the public meeting requirements associated with the proposal of a small system variance? (a) A State or...

  10. Semilogarithmic Nonuniform Vector Quantization of Two-Dimensional Laplacean Source for Small Variance Dynamics

    Directory of Open Access Journals (Sweden)

    Z. Peric

    2012-04-01

In this paper, a high-dynamic-range nonuniform two-dimensional vector quantization model for a Laplacean source is provided. The semilogarithmic A-law compression characteristic is used as the radial scalar compression characteristic of the two-dimensional vector quantization. The optimal number of concentric quantization domains (amplitude levels) is expressed as a function of the parameter A. An exact distortion analysis with closed-form expressions is provided. It is shown that the proposed model provides high SQNR values over a wide range of variances and outperforms scalar A-law quantization at the same bit rate, so it can be used in various switching and adaptation implementations for high-quality signal compression.
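The semilogarithmic A-law characteristic used as the radial compressor is the standard one; a sketch follows, with the common telephony value A = 87.6 as a default (not necessarily the value optimized in the paper):

```python
import numpy as np

def a_law_compress(x, A=87.6):
    """Standard semilogarithmic A-law compressor for inputs normalized to |x| <= 1:
    linear below 1/A, logarithmic above, continuous at the boundary."""
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    y = np.empty_like(ax)
    small = ax < 1.0 / A
    y[small] = A * ax[small] / (1.0 + np.log(A))
    y[~small] = (1.0 + np.log(A * ax[~small])) / (1.0 + np.log(A))
    return np.sign(x) * y

xs = np.linspace(0.0, 1.0, 6)
print(np.round(a_law_compress(xs), 3))
```

In the two-dimensional scheme described above, this characteristic is applied to the radial (amplitude) component of the vector rather than to the scalar samples directly.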

  11. Optimal Investment and Consumption Decisions under the Constant Elasticity of Variance Model

    Directory of Open Access Journals (Sweden)

    Hao Chang

    2013-01-01

We consider an investment and consumption problem under the constant elasticity of variance (CEV) model, which is an extension of the original Merton's problem. In the proposed model, stock price dynamics are assumed to follow a CEV model and our goal is to maximize the expected discounted utility of consumption and terminal wealth. Firstly, we apply the dynamic programming principle to obtain the Hamilton-Jacobi-Bellman (HJB) equation for the value function. Secondly, we choose power utility and logarithm utility for our analysis and apply a variable change technique to obtain the closed-form solutions to the optimal investment and consumption strategies. Finally, we provide a numerical example to illustrate the effect of market parameters on the optimal investment and consumption strategies.
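The CEV dynamics underlying the problem, dS = mu*S dt + sigma*S^beta dW, can be simulated directly with an Euler-Maruyama scheme. Parameters below are illustrative; the paper's closed-form HJB solutions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, beta = 0.05, 0.2, 0.8          # illustrative CEV parameters
S0, T, steps, paths = 100.0, 1.0, 250, 50_000
dt = T / steps

S = np.full(paths, S0)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    S = S + mu * S * dt + sigma * S**beta * dW
    S = np.maximum(S, 1e-8)               # keep paths nonnegative (zero is absorbing)

print(round(S.mean(), 2))                 # should be near S0 * exp(mu*T)
```

Setting beta = 1 recovers geometric Brownian motion, which is one way to sanity-check an implementation against the Black-Scholes special case.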

  12. Direct measurement of the orbital angular momentum mean and variance in an arbitrary paraxial optical field

    CERN Document Server

    Piccirillo, Bruno; Marrucci, Lorenzo; Santamato, Enrico

    2013-01-01

    We introduce and experimentally demonstrate a method for measuring at the same time the mean and the variance of the photonic orbital angular momentum (OAM) distribution in any paraxial optical field, without passing through the acquisition of its entire angular momentum spectrum. This method hence enables one to reduce the infinitely many output ports required in principle to perform a full OAM spectrum analysis to just two. The mean OAM, in turn, provides direct access to the average mechanical torque that the optical field in any light beam is expected to exert on matter, for example in the case of absorption. Our scheme could also be exploited to weaken the strict alignment requirements usually imposed for OAM-based free-space communication.

  13. 25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false How does a gaming operation apply for a variance from the... variance from the standards of the part? (a) Tribal gaming regulatory authority approval. (1) A Tribal gaming regulatory authority may approve a variance for a gaming operation if it has determined that...

  14. Application of variance reduction techniques in Monte Carlo simulation of clinical electron linear accelerator

    International Nuclear Information System (INIS)

    Computation time is an important and problematic parameter in Monte Carlo simulations: it is inversely proportional to the statistical errors, which motivates the use of variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed; the best known are transport cutoffs, interaction forcing, bremsstrahlung splitting and Russian roulette. The use of a phase space also appears appropriate for reducing the computing time enormously. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code, which offers a rich palette of variance reduction techniques. In this study we investigated the various cards related to the variance reduction techniques provided by MCNPX. The parameters found in this study can be used efficiently in the MCNPX code. The final calculations are performed in two steps linked by a phase space. The results show that, compared to direct simulation (with neither variance reduction nor a phase space), the adopted method improves the simulation efficiency by a factor greater than 700.
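
Two of the techniques named above, implicit capture and Russian roulette, can be illustrated outside MCNPX. The toy sketch below (a 1-D rod model with unit total cross-section and scattering ratio c, entirely our own construction, not the authors' LINAC setup) compares an analog transmission estimate with a non-analog one:

```python
import math, random

def transmission(n, slab=2.0, c=0.5, implicit_capture=False, seed=7):
    """Toy 1-D rod-model transport: estimate slab transmission.

    Analog mode kills particles outright on absorption; the non-analog
    mode uses implicit capture (weight *= c at each collision) plus
    Russian roulette on low-weight particles.  Both are unbiased."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n):
        x, mu, w, score = 0.0, 1.0, 1.0, 0.0
        while True:
            x += mu * -math.log(1.0 - rng.random())  # free flight, Sigma_t = 1
            if x > slab:
                score = w                            # transmitted: tally weight
                break
            if x < 0.0:
                break                                # leaked back out
            if implicit_capture:
                w *= c                               # absorb only a weight fraction
                if w < 0.1:                          # Russian roulette
                    if rng.random() < 0.5:
                        w *= 2.0
                    else:
                        break
            elif rng.random() > c:
                break                                # analog absorption
            mu = 1.0 if rng.random() < 0.5 else -1.0 # rod-model scattering
        scores.append(score)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, var

m_analog, v_analog = transmission(20000, implicit_capture=False)
m_nonanalog, v_nonanalog = transmission(20000, implicit_capture=True)
print(m_analog, m_nonanalog, v_analog, v_nonanalog)
```

Both estimators target the same transmission probability; the non-analog tally typically has the lower per-particle variance, which is the point of the technique.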

  15. Splitting the variance of statistical learning performance: A parametric investigation of exposure duration and transitional probabilities.

    Science.gov (United States)

    Bogaerts, Louisa; Siegelman, Noam; Frost, Ram

    2016-08-01

    What determines individuals' efficacy in detecting regularities in visual statistical learning? Our theoretical starting point assumes that the variance in performance of statistical learning (SL) can be split into the variance related to efficiency in encoding representations within a modality and the variance related to the relative computational efficiency of detecting the distributional properties of the encoded representations. Using a novel methodology, we dissociated encoding from higher-order learning factors, by independently manipulating exposure duration and transitional probabilities in a stream of visual shapes. Our results show that the encoding of shapes and the retrieving of their transitional probabilities are not independent and additive processes, but interact to jointly determine SL performance. The theoretical implications of these findings for a mechanistic explanation of SL are discussed. PMID:26743060

  16. Estimation of bias and variance of measurements made from tomography scans

    Science.gov (United States)

    Bradley, Robert S.

    2016-09-01

    Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
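
The simulation-extrapolation (SIMEX) idea adapted in this paper can be shown on a toy problem: estimating the variance of a true signal from noise-contaminated measurements. Extra noise is injected at several multiples λ of the known noise variance, the statistic is fit as a function of λ, and the fit is extrapolated back to λ = -1 (the "no noise" point). All numbers below are illustrative assumptions, not values from the paper:

```python
import random, statistics

def simex_variance(y, sigma_noise, lambdas=(0.0, 0.5, 1.0, 1.5, 2.0),
                   n_rep=40, seed=3):
    """SIMEX estimate of Var(x) from noisy measurements y = x + noise.

    Extra noise of variance lam * sigma_noise**2 is injected; the sample
    variance is linear in lam, so a least-squares line extrapolated back
    to lam = -1 removes the measurement-noise contribution."""
    rng = random.Random(seed)
    points = []
    for lam in lambdas:
        reps = []
        for _ in range(n_rep):
            extra = sigma_noise * lam ** 0.5
            reps.append(statistics.variance(v + rng.gauss(0.0, extra) for v in y))
        points.append((lam, sum(reps) / n_rep))
    # fit v = a + b * lam by least squares, evaluate at lam = -1
    m = len(points)
    sx = sum(l for l, _ in points)
    sy = sum(v for _, v in points)
    sxx = sum(l * l for l, _ in points)
    sxy = sum(l * v for l, v in points)
    b = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    a = (sy - b * sx) / m
    return a - b

rng = random.Random(0)
x = [rng.gauss(0.0, 2.0) for _ in range(4000)]   # true Var(x) = 4
y = [v + rng.gauss(0.0, 1.0) for v in x]         # measurement noise, var 1
naive = statistics.variance(y)                   # biased upward, ~5
simex = simex_variance(y, sigma_noise=1.0)
print(naive, simex)
```

In the paper the same extrapolation logic is applied to measurements made on reconstructed tomographic volumes, with the raw-scan noise playing the role of the injected noise.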

  17. Variance of the coordinates of localization for particles in a random sawtooth potential

    Directory of Open Access Journals (Sweden)

    E.S. Denisova

    2009-01-01

    Full Text Available A new regime of directed transport of particles in a random sawtooth potential driven by an alternating external force is studied. For this transport regime, characterized by zero average particle velocity and a finite transport distance, the variance of the particle localization coordinates is calculated exactly and analyzed for a particular case of the random sawtooth potential. It is established that the variance plays an important role in this transport regime; in particular, the root-mean-square displacement of particles can substantially exceed their average displacement in the preferred direction.

  18. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per;

    2012-01-01

    part-whole relationship between these traits. The chromosome-wise genomic proportions of the total variance differed between traits, with some chromosomes explaining higher or lower values than expected in relation to chromosome size. Few chromosomes showed pleiotropic effects and only chromosome 19...... used. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. CONCLUSIONS: The results show that it is possible to estimate, genome-wide and region-wise genomic (co)variances of...

  19. Intraday Speed of Adjustment and the Realized Variance in the Indonesia Stock Exchange

    Directory of Open Access Journals (Sweden)

    Zaafri A Husodo

    2009-01-01

    Full Text Available We examine the intraday trading and price dynamics of frequently traded stocks on the Indonesia Stock Exchange. Using trade prices, time series are generated at one-, two-, three-, five-, ten-, fifteen-, thirty- and sixty-minute intervals, and we estimate the speed of adjustment and the corresponding realized variance of these series. The objective of the estimation is to infer the impact of noise on the deviation of observed prices from their fundamental value. The result from the speed-of-adjustment estimate is consistent with the realized variance estimator: both conclude that the 50 most frequently traded stocks on the Indonesia Stock Exchange adjust to new information within 30 minutes. At that interval, the coefficient of the speed of price adjustment is insignificantly different from zero, implying a negligible noise impact on the observed price. Concurrently, the realized variance starts to stabilize at the 30-minute interval, suggesting a fading impact of noise on the realized variance estimate. The evidence justifies the use of realized variance at various intervals as a reliable indicator of the price discovery rate on the Indonesia Stock Exchange.
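
The realized variance used above is just the sum of squared log-returns over a sampling grid. A self-contained sketch on simulated data (the volatility and noise levels are illustrative assumptions, not estimates from the Indonesian market):

```python
import math, random

def realized_variance(prices):
    """Sum of squared log-returns over the sampling grid."""
    return sum(math.log(b / a) ** 2 for a, b in zip(prices, prices[1:]))

# One synthetic trading day (390 one-minute steps).  The observed price
# carries multiplicative microstructure noise on top of the efficient price.
rng = random.Random(42)
sigma_min = 0.001                     # per-minute efficient volatility (assumed)
efficient = [100.0]
for _ in range(390):
    efficient.append(efficient[-1] * math.exp(rng.gauss(0.0, sigma_min)))
observed = [p * math.exp(rng.gauss(0.0, 0.0005)) for p in efficient]

rv = {k: realized_variance(observed[::k]) for k in (1, 5, 15, 30)}
for k in (1, 5, 15, 30):
    print(k, rv[k])
# Each squared return picks up roughly 2*eta**2 of noise variance, so on
# average finely sampled RV overstates the integrated variance
# (390 * sigma_min**2) by more than coarsely sampled RV does.
```

This bias-variance trade-off across sampling intervals is exactly why the paper looks for the interval at which realized variance stabilizes.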

  20. The genetic and environmental roots of variance in negativity toward foreign nationals.

    Science.gov (United States)

    Kandler, Christian; Lewis, Gary J; Feldhaus, Lea Henrike; Riemann, Rainer

    2015-03-01

    This study quantified genetic and environmental roots of variance in prejudice and discriminatory intent toward foreign nationals and examined potential mediators of these genetic influences: right-wing authoritarianism (RWA), social dominance orientation (SDO), and narrow-sense xenophobia (NSX). In line with the dual process motivational (DPM) model, we predicted that the two basic attitudinal and motivational orientations-RWA and SDO-would account for variance in out-group prejudice and discrimination. In line with other theories, we expected that NSX as an affective component would explain additional variance in out-group prejudice and discriminatory intent. Data from 1,397 individuals (incl. twins as well as their spouses) were analyzed. Univariate analyses of twins' and spouses' data yielded genetic (incl. contributions of assortative mating) and multiple environmental sources (i.e., social homogamy, spouse-specific, and individual-specific effects) of variance in negativity toward strangers. Multivariate analyses suggested an extension to the DPM model by including NSX in addition to RWA and SDO as predictor of prejudice and discrimination. RWA and NSX primarily mediated the genetic influences on the variance in prejudice and discriminatory intent toward foreign nationals. In sum, the findings provide the basis of a behavioral genetic framework integrating different scientific disciplines for the study of negativity toward out-groups. PMID:25534512

  1. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  2. Estimation of Genetic Variance Components Including Mutation and Epistasis using Bayesian Approach in a Selection Experiment on Body Weight in Mice

    DEFF Research Database (Denmark)

    Widyas, Nuzul; Jensen, Just; Nielsen, Vivi Hunnicke

    selected downwards and three lines were kept as controls. Bayesian statistical methods are used to estimate the genetic variance components. Mixed model analysis is modified including mutation effect following the methods by Wray (1990). DIC was used to compare the model. Models including mutation effect...... have better fit compared to the model with only additive effect. Mutation as direct effect contributes 3.18% of the total phenotypic variance. While in the model with interactions between additive and mutation, it contributes 1.43% as direct effect and 1.36% as interaction effect of the total variance...

  3. Relative variances of the cadence frequency of cycling under two differential saddle heights

    OpenAIRE

    Chang, Wen-Dien; Fan Chiang, Chin-Yun; Lai, Ping-Tung; Lee, Chia-Lun; Fang, Sz-Ming

    2016-01-01

    [Purpose] Bicycle saddle height is a critical factor for cycling performance and injury prevention. The present study compared the variance in cadence frequency after exercise fatigue between saddle heights with 25° and 35° knee flexion. [Methods] Two saddle heights, which were determined by setting the pedal at the bottom dead point with 35° and 25° knee flexion, were used for testing. The relative variances of the cadence frequency were calculated at the end of a 5-minute warm-up period and...

  4. The Quantum Allan Variance

    OpenAIRE

    Chabuda, Krzysztof; Leroux, Ian; Demkowicz-Dobrzanski, Rafal

    2016-01-01

    In atomic clocks, the frequency of a local oscillator is stabilized based on the feedback signal obtained by periodically interrogating an atomic reference system. The instability of the clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbi...
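
The classical Allan variance that this work generalizes is straightforward to compute from fractional-frequency data. A minimal non-overlapping estimator (a standard textbook form, not the quantum bound of the paper):

```python
import random

def allan_variance(y, tau):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging time tau (in samples): 0.5 * mean((ybar_{k+1} - ybar_k)**2)."""
    n_bins = len(y) // tau
    bins = [sum(y[i * tau:(i + 1) * tau]) / tau for i in range(n_bins)]
    diffs = [(b - a) ** 2 for a, b in zip(bins, bins[1:])]
    return 0.5 * sum(diffs) / len(diffs)

rng = random.Random(5)
white = [rng.gauss(0.0, 1.0) for _ in range(100000)]
av1, av10, av100 = (allan_variance(white, t) for t in (1, 10, 100))
# For white frequency noise the Allan variance falls off as 1/tau.
print(av1, av10, av100)
```

The 1/τ decay seen here for white frequency noise is the signature against which clock noise types are usually classified; the paper asks how far below such curves an entangled-atom clock can in principle go.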

  5. Variance, Violence, and Democracy: A Basic Microeconomic Model of Terrorism

    Directory of Open Access Journals (Sweden)

    John A. Sautter

    2010-01-01

    Full Text Available Much of the debate surrounding contemporary studies of terrorism focuses upon transnational terrorism. However, historical and contemporary evidence suggests that domestic terrorism is a more prevalent and pressing concern. A formal microeconomic model of terrorism is utilized here to understand acts of political violence in a domestic context within the domain of democratic governance. This article builds a very basic microeconomic model of terrorist decision making to hypothesize how a democratic government might influence the sorts of strategies that terrorists use. Mathematical models have been used to explain terrorist behavior in the past. However, the bulk of inquiries in this area have focused only on the relationship between terrorists and the government, or among terrorists themselves. Central to the interpretation of the terrorist conflict presented here is the idea that voters (or citizens) are also one of the important determinants of how a government will respond to acts of terrorism.

  6. Occupation time fluctuation limits of infinite variance equilibrium branching systems

    OpenAIRE

    Milos, Piotr

    2008-01-01

    We establish limit theorems for the fluctuations of the rescaled occupation time of a $(d,\\alpha,\\beta)$-branching particle system. It consists of particles moving according to a symmetric $\\alpha$-stable motion in $\\mathbb{R}^d$. The branching law is in the domain of attraction of a (1+$\\beta$)-stable law and the initial condition is an equilibrium random measure for the system (defined below). In the paper we treat separately the cases of intermediate $\\alpha/\\beta

  7. Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models

    Science.gov (United States)

    Yoder, Dennis A.

    2016-01-01

    This paper will include a detailed comparison of heat transfer models that rely upon the thermal diffusivity. The goals are to inform users of the development history of the various models and the resulting differences in model formulations, as well as to evaluate the models on a variety of validation cases so that users might better understand which models are more broadly applicable.

  8. Simulating individual-based models of bacterial chemotaxis with asymptotic variance reduction

    CERN Document Server

    Rousset, Mathias

    2011-01-01

    We discuss variance reduced simulations for an individual-based model of chemotaxis of bacteria with internal dynamics. The variance reduction is achieved via a coupling of this model with a simpler process in which the internal dynamics has been replaced by a direct gradient sensing of the chemoattractants concentrations. In the companion paper \\cite{limits}, we have rigorously shown, using a pathwise probabilistic technique, that both processes converge towards the same advection-diffusion process in the diffusive asymptotics. In this work, a direct coupling is achieved between paths of individual bacteria simulated by both models, by using the same sets of random numbers in both simulations. This coupling is used to construct a hybrid scheme with reduced variance. We first compute a deterministic solution of the kinetic density description of the direct gradient sensing model; the deviations due to the presence of internal dynamics are then evaluated via the coupled individual-based simulations. We show th...
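
The coupling device described above, driving both models with the same random numbers so that their difference has small variance, can be shown on a toy pair of processes. Everything below is our own illustration of the common-random-numbers idea, not the authors' bacterial model:

```python
import random

def walk(drift, noise):
    """Terminal value of a driven discrete-time process."""
    x = 0.0
    for z in noise:
        x += drift(x) + z
    return x

def base(x):   # stand-in for the simpler direct-gradient-sensing model
    return 0.02 * (1.0 - x)

def full(x):   # stand-in for the model with a small extra internal term
    return 0.02 * (1.0 - x) + 0.001

rng = random.Random(11)
diffs_crn, diffs_indep = [], []
for _ in range(2000):
    noise = [rng.gauss(0.0, 0.1) for _ in range(100)]
    noise2 = [rng.gauss(0.0, 0.1) for _ in range(100)]
    diffs_crn.append(walk(full, noise) - walk(base, noise))    # coupled paths
    diffs_indep.append(walk(full, noise) - walk(base, noise2)) # independent

def sample_var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

var_crn, var_indep = sample_var(diffs_crn), sample_var(diffs_indep)
print(var_crn, var_indep)
```

Because the two coupled paths share their noise, the Monte Carlo error of the estimated deviation is set by the (small) difference between the models rather than by the fluctuations of either model alone.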

  9. Partitioning of genomic variance using prior biological information

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per;

    2013-01-01

    variants influence complex diseases. Despite the successes, the variants identified as being statistically significant have generally explained only a small fraction of the heritable component of the trait, the so-called problem of missing heritability. Insufficient modelling of the underlying genetic...... that the associated genetic variants are enriched for genes that are connected in biol ogical pathways or for likely functional effects on genes. These biological findings provide valuable insight for developing better genomic models. These are statistical models for predicting complex trait phenotypes...... on the basis of single nucleotide polymorphism (SNP) data and trait phenotypes and can account for a much larger fraction of the heritable component of the trait. A disadvantage is that this “black box” modelling approach does not provide any insight into the biological mechanisms underlying the...

  11. Asymptotic behavior of the variance of the EWMA statistic for autoregressive processes

    OpenAIRE

    Vermaat, T.M.B.; Meulen, van der, N.; Does, R.J.M.M.

    2010-01-01

    Asymptotic behavior of the variance of the EWMA statistic for autoregressive processes.

  12. Time Variability of Quasars: the Structure Function Variance

    CERN Document Server

    MacLeod, C; De Vries, W; Sesar, B; Becker, A

    2008-01-01

    Significant progress in the description of quasar variability has been recently made by employing SDSS and POSS data. Common to most studies is a fundamental assumption that photometric observations at two epochs for a large number of quasars will reveal the same statistical properties as well-sampled light curves for individual objects. We critically test this assumption using light curves for a sample of $\\sim$2,600 spectroscopically confirmed quasars observed about 50 times on average over 8 years by the SDSS stripe 82 survey. We find that the dependence of the mean structure function computed for individual quasars on luminosity, rest-frame wavelength and time is qualitatively and quantitatively similar to the behavior of the structure function derived from two-epoch observations of a much larger sample. We also reproduce the result that the variability properties of radio and X-ray selected subsamples are different. However, the scatter of the variability structure function for fixed values of luminosity...

  13. Image denoising via Bayesian estimation of local variance with Maxwell density prior

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-10-01

    The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. The distortion of images by additive white Gaussian noise (AWGN) is common during its processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of the Bayesian image denoising algorithms is to estimate the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with Maxwell density prior for local observed variance and Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.

  14. Examination of Academic Self-Regulation Variances in Nursing Students

    Science.gov (United States)

    Schutt, Michelle A.

    2009-01-01

    Multiple workforce demands in healthcare have placed a tremendous amount of pressure on academic nurse educators to increase the number of professional nursing graduates to provide nursing care both in both acute and non-acute healthcare settings. Increased enrollment in nursing programs throughout the United States is occurring; however, due to…

  15. The Nucleare pole in Burgundy, or the art of variance

    International Nuclear Information System (INIS)

    The relatively atypical position of the nuclear competitiveness pole of Burgundy (France) apparently ensues from the doctrine adopted when the poles were created in 2005: contrary to the other poles launched by the government, this one was created on the initiative of contractors working for the nuclear sector. The nuclear industry is deeply implanted in Burgundy, where it inherited a long tradition of heavy industry and heavy forging. Even the local authorities were of little support, because they were not fully aware that the common point of most local businesses was work for the nuclear sector. The mission of this pole follows four axes: 1) building and promoting adequate training in nuclear activities at the regional scale, 2) proposing coordinated research and development projects for the members, 3) a specific project on a shared information system dedicated to easing relationships between contractors and subcontractors, and 4) a mission to promote the industrial side of the region in foreign countries. (A.C.)

  16. Diagnosis of Bearing System using Minimum Variance Cepstrum

    International Nuclear Information System (INIS)

    Various bearings are commonly used in rotating machines. The noise and vibration signals obtained from these machines often convey information about faults and their locations. Condition monitoring of bearings has received considerable attention for many years, because the majority of problems in rotating machines are caused by faulty bearings. A failure alarm for a bearing system is therefore often based on detecting the onset of localized faults. Many methods are available for detecting faults in bearing systems, and the majority of them assume that faults in bearings produce impulses; impulse events can thus be attributed to bearing faults in the system. McFadden and Smith used a bandpass filter to filter the noise signal and then obtained the envelope using an envelope detector. D. Ho and R. B. Randall also tried the envelope spectrum to detect faults in bearing systems, but it is very difficult to find the resonant frequency in noisy environments. S.-K. Lee and P. R. White used improved ANC (adaptive noise cancellation) to find faults. The basic idea of this technique is to remove the noise from the measured vibration signal, but they were not able to show the theoretical foundation of the proposed algorithms. Y.-H. Kim et al. used a moving window. This algorithm is quite powerful for the early detection of faults in a ball bearing system, but it is difficult to decide the initial time and step size of the moving window. The early fault signal caused by microscopic cracks is commonly embedded in noise. Therefore, the success of detecting the fault signal is completely determined by a method's ability to distinguish signal from noise. In 1969, Capon coined maximum likelihood (ML) spectra, which estimate a mixed spectrum consisting of a line spectrum, corresponding to a deterministic random process, plus an arbitrary unknown continuous spectrum. The unique feature of these spectra is that they can detect a sinusoidal signal in noise. Our idea
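
Cepstral analysis of the kind this work builds on can be sketched on synthetic data. The example below uses the plain real cepstrum (not the minimum-variance cepstrum proposed here) on a bearing-like signal of periodic impulses exciting a decaying resonance; all signal parameters are illustrative assumptions:

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log-magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.irfft(np.log(spectrum + 1e-12), n=len(x))

# Synthetic bearing-like signal: an impulse every 100 samples excites a
# decaying resonance; broadband noise is added on top.
rng = np.random.default_rng(0)
n, period = 8192, 100
t = np.arange(300)
resonance = np.exp(-t / 30.0) * np.sin(2 * np.pi * 0.2 * t)
impulses = np.zeros(n)
impulses[::period] = 1.0
signal = np.convolve(impulses, resonance)[:n] + 0.05 * rng.standard_normal(n)

ceps = real_cepstrum(signal)
quefrencies = np.arange(30, 500)          # skip the low-quefrency resonance part
peak = int(quefrencies[np.argmax(ceps[quefrencies])])
print(peak)                               # expected near the fault period
```

The harmonic structure that the periodic impulses impose on the log spectrum collapses into a peak at the fault quefrency (here 100 samples), which is why cepstrum-based methods can separate fault periodicity from the resonance and the noise floor.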

  17. A Generalized Levene's Scale Test for Variance Heterogeneity in the Presence of Sample Correlation and Group Uncertainty

    OpenAIRE

    Soave, David; Sun, Lei

    2016-01-01

    We generalize Levene's test for variance (scale) heterogeneity between $k$ groups for more complex data, which includes sample correlation and group membership uncertainty. Following a two-stage regression framework, we show that least absolute deviation regression must be used in the stage 1 analysis to ensure a correct asymptotic $\\chi^2_{k-1}/(k-1)$ distribution of the generalized scale ($gS$) test statistic. We then show that the proposed $gS$ test is independent of the generalized locati...
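
For the classical starting point of this work, Levene's scale test in its median-centered (Brown-Forsythe) form, a self-contained sketch with a permutation p-value (our own simplified stand-in for the asymptotic theory; independent observations and known group labels assumed, i.e. none of the paper's correlation or group-uncertainty extensions):

```python
import random, statistics

def levene_w(groups):
    """Brown-Forsythe variant of Levene's scale statistic: a one-way
    ANOVA F computed on absolute deviations from each group's median."""
    z = [[abs(x - statistics.median(g)) for x in g] for g in groups]
    k = len(z)
    sizes = [len(g) for g in z]
    N = sum(sizes)
    means = [sum(g) / len(g) for g in z]
    grand = sum(map(sum, z)) / N
    between = sum(s * (m - grand) ** 2 for s, m in zip(sizes, means))
    within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(z, means))
    return (N - k) * between / ((k - 1) * within)

def permutation_p(groups, n_perm=500, seed=9):
    """Permutation p-value for the scale test by reshuffling group labels."""
    w_obs = levene_w(groups)
    pooled = [x for g in groups for x in g]
    sizes = [len(g) for g in groups]
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm, i = [], 0
        for s in sizes:
            perm.append(pooled[i:i + s])
            i += s
        if levene_w(perm) >= w_obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

rng = random.Random(1)
equal_var = [[rng.gauss(0, 1) for _ in range(60)] for _ in range(3)]
unequal_var = [[rng.gauss(0, s) for _ in range(60)] for s in (1, 1, 4)]
p_equal = permutation_p(equal_var)
p_unequal = permutation_p(unequal_var)
print(p_equal, p_unequal)
```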

  18. A low variance consistent test of relative dependency

    OpenAIRE

    Bounliphone, Wacha; Gretton, Arthur; Tenenhaus, Arthur; Blaschko, Matthew

    2014-01-01

    We describe a novel non-parametric statistical hypothesis test of relative dependence between a source variable and two candidate target variables. Such a test enables us to determine whether one source variable is significantly more dependent on a first target variable or a second. Dependence is measured via the Hilbert-Schmidt Independence Criterion (HSIC), resulting in a pair of empirical dependence measures (source-target 1, source-target 2). We test whether the first dependence measure i...

  19. On the Use of Cognitive Maps to Identify Meaning Variance

    OpenAIRE

    Arduin, Pierre-Emmanuel

    2014-01-01

    Cognitive science, as well as psychology, considers that individuals use internal representations of the external reality in order to interact with the world. These representations are called mental models and are considered as a cognitive structure at the basis of reasoning, decision making, and behavior. This paper relies on a fieldwork realized as closely as possible from the respondents. We propose an approach based on graph theory in order to study the meanings given by several people to...

  20. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
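
The variance advantage that such bounds quantify is easy to demonstrate on a textbook rare-event problem, estimating a Gaussian tail probability with a mean-shifted proposal (a generic illustration of IS, not the authors' communication-system setting):

```python
import math, random

def tail_prob_direct(n, t=4.0, seed=2):
    """Plain Monte Carlo estimate of P(Z > t) for Z ~ N(0,1)."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > t) / n

def tail_prob_is(n, t=4.0, seed=2):
    """Importance sampling with the proposal N(t, 1); the likelihood
    ratio for a draw x is exp(t*t/2 - t*x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)
        if x > t:
            total += math.exp(t * t / 2.0 - t * x)
    return total / n

exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # P(Z > 4), about 3.2e-5
direct_est = tail_prob_direct(100000)
is_est = tail_prob_is(100000)
print(exact, direct_est, is_est)
```

With the same sample budget, the direct estimator sees only a handful of hits while the shifted-proposal estimator resolves the tail probability to around a percent, which is the kind of improvement ratio the paper's bounds bracket.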

  1. EMPIRICAL COMPARISON OF VARIOUS APPROXIMATE ESTIMATORS OF THE VARIANCE OF HORVITZ THOMPSON ESTIMATOR UNDER SPLIT METHOD OF SAMPLING

    Directory of Open Access Journals (Sweden)

    Neeraj Tiwari

    2014-06-01

    Full Text Available Under inclusion probability proportional to size (IPPS) sampling, the exact second-order inclusion probabilities are often very difficult to obtain, and hence the variance of the Horvitz-Thompson estimator and the Sen-Yates-Grundy estimate of that variance are difficult to compute. Researchers have therefore developed alternative variance estimators based on approximations of the second-order inclusion probabilities in terms of the first-order inclusion probabilities. We numerically compare the performance of the various alternative approximate variance estimators using the split method of sample selection.
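
The Horvitz-Thompson estimator and the Sen-Yates-Grundy variance estimate discussed above can be sketched in a setting where both orders of inclusion probabilities are known exactly, simple random sampling without replacement (an illustration only; the paper's difficulty arises precisely because the second-order probabilities are unknown under IPPS):

```python
import random
from itertools import combinations

# Small artificial population; under simple random sampling without
# replacement both inclusion probabilities are known exactly.
y = [3.0, 8.0, 1.0, 6.0, 4.0, 9.0, 2.0, 7.0]   # population total = 40
N, n = len(y), 4
pi = n / N                                     # first-order probability
pij = n * (n - 1) / (N * (N - 1))              # second-order probability

def ht_total(sample):
    """Horvitz-Thompson estimator of the population total."""
    return sum(y[i] / pi for i in sample)

def syg_variance(sample):
    """Sen-Yates-Grundy estimate of Var(HT total) from one sample."""
    return sum((pi * pi - pij) / pij * (y[i] / pi - y[j] / pi) ** 2
               for i, j in combinations(sample, 2))

rng = random.Random(4)
estimates = [ht_total(rng.sample(range(N), n)) for _ in range(20000)]
mean_ht = sum(estimates) / len(estimates)
print(mean_ht, syg_variance([0, 1, 2, 3]))   # HT is design-unbiased for 40
```

The approximate estimators compared in the paper replace pij in syg_variance with expressions built from the first-order probabilities alone.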

  2. Quantum variance: A measure of quantum coherence and quantum correlations for many-body systems

    Science.gov (United States)

    Frérot, Irénée; Roscilde, Tommaso

    2016-08-01

    Quantum coherence is a fundamental common trait of quantum phenomena, from the interference of matter waves to quantum degeneracy of identical particles. Despite its importance, estimating and measuring quantum coherence in generic, mixed many-body quantum states remains a formidable challenge, with fundamental implications in areas as broad as quantum condensed matter, quantum information, quantum metrology, and quantum biology. Here, we provide a quantitative definition of the variance of quantum coherent fluctuations (the quantum variance) of any observable on generic quantum states. The quantum variance generalizes the concept of thermal de Broglie wavelength (for the position of a free quantum particle) to the space of eigenvalues of any observable, quantifying the degree of coherent delocalization in that space. The quantum variance is generically measurable and computable as the difference between the static fluctuations and the static susceptibility of the observable; despite its simplicity, it is found to provide a tight lower bound to most widely accepted estimators of "quantumness" of observables (both as a feature as well as a resource), such as the Wigner-Yanase skew information and the quantum Fisher information. When considering bipartite fluctuations in an extended quantum system, the quantum variance expresses genuine quantum correlations among the two parts. In the case of many-body systems, it is found to obey an area law at finite temperature, extending therefore area laws of entanglement and quantum fluctuations of pure states to the mixed-state context. Hence the quantum variance paves the way to the measurement of macroscopic quantum coherence and quantum correlations in most complex quantum systems.
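
The operational definition stated in the abstract, the difference between the static fluctuations and the static susceptibility term, can be written compactly; the notation below is our reconstruction from the abstract, not a quotation from the paper:

```latex
\mathrm{Var}_Q(O) \,=\, \big( \langle O^2 \rangle - \langle O \rangle^2 \big) \,-\, T\,\chi_O ,
\qquad
\chi_O \,=\, \int_0^{1/T} \mathrm{d}\tau \,\Big( \langle O(\tau)\, O(0) \rangle - \langle O \rangle^2 \Big)
```

Here the first bracket is the total static fluctuation of the observable O at temperature T, and the subtracted term is the thermally accessible (classical) part, so that the remainder quantifies the coherent fluctuations; both ingredients are measurable or computable without full state tomography.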

  3. Maximum Posterior Adjustment of Extended Network and Estimation Formulae of Its Variance Components

    Institute of Scientific and Technical Information of China (English)

    汪善根

    2001-01-01

    This paper derives the maximum posterior adjustment formulae of the extended network and the estimation formulae of the variance components of the Helmert, Welsch and Förstner types when there are two types of uncorrelated observations, and completes the theory of maximum posterior adjustment.

  4. Mean-Variance Efficiency of the Market Portfolio

    Directory of Open Access Journals (Sweden)

    Rafael Falcão Noda

    2014-06-01

    The objective of this study is to address criticisms of the CAPM based on findings that the market portfolio is far from the efficient frontier. We run a numeric optimization model based on Brazilian stock market data from 2003 to 2012. For each asset, we obtain adjusted returns and standard deviations such that (i) the efficient frontier intersects the market portfolio and (ii) the distance between the adjusted parameters and the sample parameters is minimized. We conclude that the adjusted parameters are not significantly different from the sample parameters, in line with the results of Levy and Roll (2010) for the US stock market. These results suggest that the imprecisions in the implementation of the CAPM stem mostly from parameter estimation errors and that other explanatory factors for returns may have low relevance. Our results therefore contradict the above-mentioned criticisms of the CAPM in Brazil.
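    The frontier computation underlying such a study can be sketched in a few lines. This is a minimal illustration, not the paper's optimization model: the asset means and covariance below are invented, whereas the paper estimates them from Brazilian market data.

```python
import numpy as np

# Hypothetical 3-asset inputs (numbers invented for illustration).
mu = np.array([0.10, 0.12, 0.08])           # sample mean returns
Sigma = np.array([[0.040, 0.010, 0.000],
                  [0.010, 0.090, 0.020],
                  [0.000, 0.020, 0.025]])   # sample covariance matrix

inv = np.linalg.inv(Sigma)
ones = np.ones(len(mu))
A, B, C = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu

def frontier_weights(m):
    """Weights of the minimum-variance portfolio with target return m."""
    lam = (C - m * B) / (A * C - B ** 2)
    gam = (m * A - B) / (A * C - B ** 2)
    return lam * (inv @ ones) + gam * (inv @ mu)

w = frontier_weights(0.11)
assert np.isclose(w.sum(), 1.0)             # fully invested
assert np.isclose(w @ mu, 0.11)             # hits the target return
```

    A market portfolio is mean-variance efficient precisely when its weights match `frontier_weights(m)` for its own expected return m; the paper's test asks how little the inputs must be perturbed for that to hold.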

  5. Measuring Diversity of University Enrollments: The Generalized Variance Approach

    Science.gov (United States)

    Adwere-Boamah, Joseph

    2013-01-01

    Colleges and universities increasingly voice concern for, and dedicate institutional attention to, ethnic "diversity" within their student populations, a trend that aligns with broader concerns about social (in)equity in US elementary and secondary schools. Current operational definitions of diversity focus on the demographic variety of…

  6. Variance-reduced particle simulation of the Boltzmann transport equation in the relaxation-time approximation.

    Science.gov (United States)

    Radtke, Gregg A; Hadjiconstantinou, Nicolas G

    2009-05-01

    We present an efficient variance-reduced particle simulation technique for solving the linearized Boltzmann transport equation in the relaxation-time approximation used for phonon, electron, and radiative transport, as well as for kinetic gas flows. The variance reduction is achieved by simulating only the deviation from equilibrium. We show that in the limit of small deviation from equilibrium of interest here, the proposed formulation achieves low relative statistical uncertainty that is also independent of the magnitude of the deviation from equilibrium, in stark contrast to standard particle simulation methods. Our results demonstrate that a space-dependent equilibrium distribution improves the variance reduction achieved, especially in the collision-dominated regime where local equilibrium conditions prevail. We also show that by exploiting the physics of relaxation to equilibrium inherent in the relaxation-time approximation, a very simple collision algorithm with a clear physical interpretation can be formulated. PMID:19518597
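    The core claim above, that simulating only the deviation from equilibrium makes the relative statistical uncertainty independent of the deviation's magnitude, can be illustrated with a toy estimator. This is a hedged sketch of the general idea, not the paper's phonon/electron transport scheme; all names and numbers are invented.

```python
import numpy as np

# Toy "deviational" variance reduction: estimate a small mean deviation
# delta on top of a known equilibrium, comparing two sampling strategies.
rng = np.random.default_rng(42)
n = 200_000
delta = 1e-3                        # small deviation from equilibrium

# Standard approach: sample the full quantity; the noise is O(1)
# no matter how small the deviation is.
full = delta + rng.standard_normal(n)

# Deviational approach: sample only the deviation; the noise now scales
# with delta, so the *relative* uncertainty is independent of delta.
dev = delta * (1.0 + rng.standard_normal(n))

# Standard errors of the two estimators of the deviation:
se_full = full.std(ddof=1) / np.sqrt(n)   # ~ 1/sqrt(n)
se_dev = dev.std(ddof=1) / np.sqrt(n)     # ~ delta/sqrt(n)
assert se_dev < se_full * 1e-2            # orders of magnitude smaller
```

    As delta shrinks, `se_full / delta` diverges while `se_dev / delta` stays fixed at 1/sqrt(n), which is the "stark contrast" with standard particle methods that the abstract describes.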

  7. Altered temporal variance and neural synchronization of spontaneous brain activity in anesthesia.

    Science.gov (United States)

    Huang, Zirui; Wang, Zhiyao; Zhang, Jianfeng; Dai, Rui; Wu, Jinsong; Li, Yuan; Liang, Weimin; Mao, Ying; Yang, Zhong; Holland, Giles; Zhang, Jun; Northoff, Georg

    2014-11-01

    Recent studies at the cellular and regional levels have pointed out the multifaceted importance of neural synchronization and temporal variance of neural activity. For example, neural synchronization and temporal variance have been shown by us to be altered in patients in the vegetative state (VS). This finding nonetheless leaves open the question of whether these abnormalities are specific to VS or more generally related to the absence of consciousness. The aim of our study was to investigate changes of inter- and intra-regional neural synchronization and of the temporal variance of resting-state activity during anesthetic-induced unconsciousness. Applying an intra-subject design, we compared resting-state activity in functional magnetic resonance imaging (fMRI) between awake and anesthetized states in the same subjects. Replicating previous studies, we observed reduced functional connectivity within the default mode network (DMN) and thalamocortical network in the anesthetized state. Importantly, intra-regional synchronization as measured by regional homogeneity (ReHo) and temporal variance as measured by the standard deviation (SD) of the BOLD signal were significantly reduced especially in cortical midline regions, while increased in lateral cortical areas, in the anesthetized state. We further found significant frequency-dependent effects of SD in the thalamus, which showed abnormally high SD in Slow-5 (0.01-0.027 Hz) in the anesthetized state. Our results show, for the first time, altered temporal variance of resting-state activity in anesthesia. Combined with our findings in the vegetative state, these findings suggest a close relationship between temporal variance, neural synchronization and consciousness. PMID:24867379

  8. Explicit Representation of the Minimal Variance Portfolio in Markets driven by Lévy Processes.

    OpenAIRE

    2001-01-01

    In a market driven by a Lévy martingale, we consider a claim x. We study the problem of minimal variance hedging and we give an explicit formula for the minimal variance portfolio in terms of Malliavin derivatives. We discuss two types of stochastic (Malliavin) derivatives for x: one based on the chaos expansion in terms of iterated integrals with respect to the power jump processes and one based on the chaos expansion in terms of iterated integrals with respect to the Wiener process and the ...

  9. LOCALLY RISK-NEUTRAL VALUATION OF OPTIONS IN GARCH MODELS BASED ON VARIANCE-GAMMA PROCESS

    OpenAIRE

    LIE-JANE KAO

    2012-01-01

    This study develops a GARCH-type model, i.e., the variance-gamma GARCH (VG GARCH) model, based on the two major strands of option pricing literature. The first strand of the literature uses the variance-gamma process, a time-changed Brownian motion, to model the underlying asset price process such that the possible skewness and excess kurtosis in the distributions of asset returns are considered. The second strand of the literature considers the propagation of the previously arrived news by i...

  10. 77 FR 73632 - American Municipal Power, Inc; Notice of Application for Temporary Variance of License and...

    Science.gov (United States)

    2012-12-11

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission American Municipal Power, Inc; Notice of Application for Temporary Variance of License and Soliciting Comments, Motions To Intervene, and Protests Take notice that the...

  11. Analysis of Caffeine Content and Molecular Variance of Low-caffeine Tea Plants

    Institute of Scientific and Technical Information of China (English)

    王雪敏; 姚明哲; 金基强; 马春雷; 陈亮

    2012-01-01

    The caffeine content of 60 individuals from a low-caffeine tea plant population was measured by HPLC, and molecular variance was analyzed using 42 EST-SSR markers. Caffeine content (fresh weight) ranged from 0.38% to 1.08%. Five individuals had lower caffeine content than the female parent and can be used as breeding materials for further screening of low-caffeine tea cultivars. The 42 EST-SSR primer pairs detected 129 alleles in the population, with 2 to 7 alleles per primer pair (3.86 on average); the mean Shannon-Weaver index (I) was 0.65, and the polymorphism information content (PIC) of the markers ranged from 0.03 to 0.68, with an average of 0.33. Three SSR markers (TM089, TM200 and TM211) associated with variation in caffeine content were preliminarily identified, which should be of value for screening superior genetic resources and breeding new low-caffeine tea cultivars.
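    The marker-diversity statistics reported above can be computed directly from allele frequencies. The frequencies below are invented for illustration, and the PIC shown is the simplified (expected-heterozygosity) form; Botstein's full PIC subtracts an additional pairwise term.

```python
import numpy as np

# Hypothetical allele frequencies at one EST-SSR locus (must sum to 1).
p = np.array([0.5, 0.3, 0.2])

# Shannon-Weaver index I = -sum(p_i * ln p_i)
shannon = -np.sum(p * np.log(p))

# Simplified PIC = 1 - sum(p_i^2), i.e. expected heterozygosity;
# the full Botstein PIC further subtracts sum_{i<j} 2 p_i^2 p_j^2.
pic = 1.0 - np.sum(p ** 2)
```

    With these frequencies, `shannon` is about 1.03 and `pic` is 0.62; a locus with a single allele gives 0 for both, matching the low end of the reported PIC range.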

  12. An empirical study of statistical properties of variance partition coefficients for multi-level logistic regression models

    Science.gov (United States)

    Li, J.; Gray, B.R.; Bates, D.M.

    2008-01-01

    Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions for variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multi-level logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of methods for estimating VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPCs in scientific data analysis.
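    One of the VPC definitions discussed above, the latent-variable formulation, has a closed form: the level-1 residual variance is fixed at pi^2/3, the variance of the standard logistic distribution. A minimal sketch, with a hypothetical cluster variance:

```python
import math

def vpc_latent(sigma_u2):
    """Latent-variable VPC for a two-level logistic model: the share of
    total latent variance attributable to the cluster (level-2) random
    effect, with the level-1 variance fixed at pi^2/3."""
    return sigma_u2 / (sigma_u2 + math.pi ** 2 / 3)

vpc = vpc_latent(1.0)   # a cluster variance of 1 gives a VPC near 0.23
assert 0.2 < vpc < 0.25
```

    The other Goldstein definitions (simulation-based, linearized, and binary-response forms) depend on the covariate values and so do not reduce to a single formula like this one.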

  13. The Evolution of Human Intelligence and the Coefficient of Additive Genetic Variance in Human Brain Size

    Science.gov (United States)

    Miller, Geoffrey F.; Penke, Lars

    2007-01-01

    Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…

  14. DNS of channel flow with conjugate heat transfer - Budgets of turbulent heat fluxes and temperature variance

    OpenAIRE

    Flageul Cédric, Benhamadouche Sofiane, Lamballais Éric, Laurence Dominique.

    2014-01-01

    The present work provides budgets of turbulent heat fluxes and temperature variance for a channel flow with different thermal boundary conditions: an imposed temperature, an imposed heat flux, and conjugate heat transfer combined with an imposed heat flux at the outer wall.

  15. Evaluation of the capability of the Lombard test in detecting abrupt changes in variance

    Science.gov (United States)

    Nayak, Munir A.; Villarini, Gabriele

    2016-03-01

    Hydrologic time series are often characterized by temporal changes that give rise to non-stationarity. When the distribution describing the data changes over time, it is important to detect these changes so that correct inferences can be drawn from the data. The Lombard test, a non-parametric rank-based test to detect change points in the moments of a time series, has recently been used in the hydrologic literature to detect change points in the mean and variance. Little is known, however, about the performance of this test in detecting changes in variance, despite the potentially large impacts that these changes (shifts) could have when dealing with extremes. Here we address this issue in a Monte Carlo simulation framework. We consider a number of different situations that can manifest themselves in hydrologic time series, including the dependence of the results on the magnitude of the shift, significance level, sample size and location of the change point within the series. Analyses are performed considering abrupt changes in variance occurring with and without shifts in the mean. The results show that the power of the test in detecting change points in variance is small when the changes are small. It is large when the change point occurs close to the middle of the time series, and it increases nonlinearly with increasing sample size. Moreover, the power of the test is greatly reduced by the presence of change points in the mean. We propose removing the change in the mean before testing for change points in variance; simulation results demonstrate that this strategy effectively increases the power of the test. Finally, the Lombard test is applied to annual peak discharge records from 3686 U.S. Geological Survey stream-gaging stations across the conterminous United States, and the results are discussed in light of the insights from the simulation results.
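    A Monte Carlo power study of the kind described above can be set up in a few lines. The sketch below uses a plain two-sample log-variance-ratio z-test at a known midpoint change point, not the Lombard test itself (which scans over unknown change-point locations); it only illustrates the simulation framework.

```python
import numpy as np

rng = np.random.default_rng(1)

def reject_rate(n=100, shift=1.0, reps=1000, z_crit=1.96):
    """Fraction of simulated series in which a variance change at the
    (known) midpoint is detected.  shift multiplies the SD of the
    second half; shift=1.0 gives the size of the test under the null."""
    k = n // 2
    hits = 0
    for _ in range(reps):
        x = np.concatenate([rng.standard_normal(k),
                            shift * rng.standard_normal(n - k)])
        # log of the sample-variance ratio is approximately normal with
        # variance 2/(k-1) + 2/(n-k-1) under equal variances
        stat = np.log(x[k:].var(ddof=1) / x[:k].var(ddof=1))
        se = np.sqrt(2 / (k - 1) + 2 / (n - k - 1))
        hits += abs(stat / se) > z_crit
    return hits / reps

size = reject_rate(shift=1.0)    # near the nominal 5% level
power = reject_rate(shift=2.0)   # doubling the SD is detected almost always
assert power > size
```

    Repeating this over grids of `n`, `shift`, and change-point location reproduces the qualitative findings above: power grows nonlinearly with sample size and collapses when the variance change is small.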

  16. The efficiency of the crude oil markets: Evidence from variance ratio tests

    OpenAIRE

    Charles, Amélie; Darné, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multip...
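    The statistic underlying all such tests is the variance ratio: under a random walk, the variance of q-period returns is q times the variance of 1-period returns, so VR(q) = 1. The sketch below is the simple homoskedastic version with invented data; the rank- and sign-based tests cited above are nonparametric refinements of this idea.

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay-style variance ratio using overlapping q-period sums."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    mu = r.mean()
    var1 = np.mean((r - mu) ** 2)
    # overlapping sums of q consecutive one-period returns
    rq = np.convolve(r, np.ones(q), mode="valid")
    varq = np.mean((rq - q * mu) ** 2)
    return varq / (q * var1)

rng = np.random.default_rng(0)
iid_returns = rng.standard_normal(5000)   # random-walk increments
vr = variance_ratio(iid_returns, q=5)     # close to 1 for a random walk
assert abs(vr - 1.0) < 0.2
```

    Positive return autocorrelation pushes VR(q) above 1 and mean reversion pushes it below 1, which is what makes the ratio a test of weak-form market efficiency.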

  17. The Variance of Discounted Rewards in Markov Decision Processes: Laurent Expansion and Sensitive Optimality

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Olomouc: Palacký University, Olomouc, 2014, pp. 908-913. ISBN 978-80-244-4209-9. [MME 2014. International Conference Mathematical Methods in Economics /32./. Olomouc (CZ), 10.09.2014-12.09.2014] R&D Projects: GA ČR GA13-14445S Other grants: CONACYT (MX) 171396 Institutional support: RVO:67985556 Keywords: discrete-time Markov decision chains * variance of total discounted rewards * Laurent expansion * mean-variance optimality Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/E/sladky-0432654.pdf

  18. The impact of the variance of online consumer ratings on pricing and demand – An analytical model

    OpenAIRE

    Philipp Herrmann

    2014-01-01

    It is well known that consumer ratings play a major role in the purchase decisions of online shoppers. To examine the effect of the variance of these ratings on future product pricing and sales we propose an analytical model which considers products where the variance of consumer ratings results from two types of product attributes: observational search attributes and experience attributes. We find that if a higher variance is caused by an observational search attribute it results in a higher...

  19. The modified Black-Scholes model via constant elasticity of variance for stock options valuation

    Science.gov (United States)

    Edeki, S. O.; Owoloko, E. A.; Ugbebor, O. O.

    2016-02-01

    In this paper, the classical Black-Scholes option pricing model is revisited. We present a modified version of the Black-Scholes model via the application of the constant elasticity of variance model (CEVM); in this case, the volatility of the stock price is shown to be a non-constant function, unlike the assumption of the classical Black-Scholes model.
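    The CEV ingredient described above can be sketched as follows (symbols are assumed here, not taken from the paper):

```latex
% CEV dynamics: volatility depends on the price level through the
% elasticity parameter \beta.
dS_t \;=\; \mu S_t\,dt \;+\; \sigma S_t^{\beta/2}\, dW_t ,
\qquad
\text{instantaneous volatility } \sigma(S_t) \;=\; \sigma S_t^{\beta/2 - 1}.
```

    For beta < 2 the volatility rises as the price falls (the leverage effect), and beta = 2 recovers the constant-volatility geometric Brownian motion of the classical Black-Scholes model.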

  20. Genetic Variance in Nonverbal Intelligence: Data from the Kinships of Identical Twins.

    Science.gov (United States)

    Rose, Richard J.; And Others

    1979-01-01

    Data are presented from families of monozygotic twin pairs which give evidence of genetic variance on the Block Design Test, a nonverbal measure of intelligence. Analyses of genetic and environmental effects on behavior are possible with this kind of information. (SA)