#### Sample records for analysis of variance

1. Naive Analysis of Variance

Braun, W. John

2012-01-01

The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

2. Fixed effects analysis of variance

Fisher, Lloyd; Birnbaum, Z W; Lukacs, E

1978-01-01

Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis

3. One-way analysis of variance with unequal variances.

Rice, W R; Gaines, S. D.

1989-01-01

We have designed a statistical test that eliminates the assumption of equal group variances from one-way analysis of variance. This test is preferable to the standard technique of trial-and-error transformation and can be shown to be an extension of the Behrens-Fisher T test to the case of three or more means. We suggest that this procedure be used in most applications where the one-way analysis of variance has traditionally been applied to biological data.
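The exact statistic of Rice and Gaines is not reproduced here, but Welch's heteroscedastic one-way ANOVA is a closely related, widely used test that likewise drops the equal-variance assumption. A minimal NumPy/SciPy sketch (the example groups are invented):

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's one-way ANOVA: compares k group means without assuming equal variances."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    mw = np.sum(w * m) / np.sum(w)              # weighted grand mean
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    h = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) / (k ** 2 - 1) * h
    f_stat = a / b
    df2 = (k ** 2 - 1) / (3 * h)                # Welch's adjusted denominator df
    p = stats.f.sf(f_stat, k - 1, df2)
    return f_stat, p

rng = np.random.default_rng(0)
g1 = rng.normal(0.0, 1.0, 30)   # three groups with clearly unequal spreads
g2 = rng.normal(0.0, 3.0, 25)
g3 = rng.normal(2.0, 5.0, 20)
f_stat, p = welch_anova(g1, g2, g3)
```

Unlike the classical F test, the weights `n / v` downweight high-variance groups, which is what removes the homogeneity assumption.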

4. Analysis of Variance: Variably Complex

Drummond, Gordon B.; Vowler, Sarah L.

2012-01-01

These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…

5. An alternative analysis of variance

Longford, Nicholas T.

2008-01-01

The one-way analysis of variance is a staple of elementary statistics courses. The hypothesis test of homogeneity of the means encourages the use of the selected-model based estimators which are usually assessed without any regard for the uncertainty about the outcome of the test. We expose the weaknesses of such estimators when the uncertainty is taken into account, as it should be, and propose synthetic estimators as an alternative.

6. Generalized analysis of molecular variance.

Caroline M Nievergelt

2007-04-01

Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by

7. Analysis of variance for model output

Jansen, M.J.W.

1999-01-01

A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va
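The decomposition of Var(Y) into components attributable to independent inputs can be illustrated with a brute-force Monte Carlo sketch. This is a minimal stand-in, not Jansen's own estimators; the toy model and sample sizes are assumptions:

```python
import numpy as np

def first_order_components(f, sampler, n_outer=200, n_inner=200, seed=0):
    """Estimate Var(E[Y | X_i]) for each independent input by freezing one
    input at a time and averaging the model output over the others."""
    rng = np.random.default_rng(seed)
    d = sampler(rng, 1).shape[1]
    total_var = np.var(f(sampler(rng, n_outer * n_inner)))  # plain-sample Var(Y)
    comps = []
    for i in range(d):
        cond_means = []
        for _ in range(n_outer):
            xi = sampler(rng, 1)[0, i]       # freeze input i at a random value
            x = sampler(rng, n_inner)
            x[:, i] = xi
            cond_means.append(f(x).mean())   # Monte Carlo E[Y | X_i = xi]
        comps.append(np.var(cond_means))     # variance over the frozen values
    return np.array(comps), total_var

# toy deterministic model: Y = X1 + 0.1*X2, so X1 should dominate Var(Y)
def model(x):
    return x[:, 0] + 0.1 * x[:, 1]

def sampler(rng, n):
    return rng.standard_normal((n, 2))

comps, total = first_order_components(model, sampler)
```

For this toy model the true components are 1.0 and 0.01 against a total variance of 1.01, so the first input's component should dwarf the second's.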

8. A Mean variance analysis of arbitrage portfolios

Fang, Shuhong

2007-03-01

Based on a careful analysis of the definition of an arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

9. Fundamentals of exploratory analysis of variance

Hoaglin, David C; Tukey, John W

2009-01-01

The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least-squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.

10. Formative Use of Intuitive Analysis of Variance

Trumpower, David L.

2013-01-01

Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

11. Uses and abuses of analysis of variance.

Evans, S. J.

1983-01-01

Analysis of variance is a term often quoted to explain the analysis of data in experiments and clinical trials. The relevance of its methodology to clinical trials is shown and an explanation of the principles of the technique is given. The assumptions necessary are examined and the problems caused by their violation are discussed. The dangers of misuse are given with some suggestions for alternative approaches.

12. Multivariate Analysis of Variance Using Spatial Ranks

KYUNGMEE CHOI; JOHN MARDEN

2002-01-01

The authors consider multivariate analysis of variance procedures based on the multivariate spatial ranks. Two models are considered: the location-family model and the fully nonparametric model. Procedures for testing main and interaction effects are given for the 2 × 2 layout.

13. Directional variance analysis of annual rings

Kumpulainen, P.; Marjanen, K.

2010-07-01

Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products with higher market value than is produced today. One of the key factors for increasing the market value is to provide better measurements for increased information to support the decisions made later in the product chain. Strength and stiffness are important properties of the wood. They are related to mean annual ring width and its deviation. These indicators can be estimated from images taken from the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log end variance analysis based on the Radon transform is proposed. The directions and positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log end analysis only; it is usable in other two-dimensional random signal and texture analysis tasks.

14. Analysis of variance of microarray data.

Ayroles, Julien F; Gibson, Greg

2006-01-01

Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments, taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed modeling and the sequence of steps involved in fitting gene-specific models, and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available. PMID:16939792

15. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternative and complementary examination of the same...

16. The Importance of Variance Analysis for Costs Control in Organizations

Okoh, L. O.; Uzoka, P.

2012-01-01

This review aimed at examining the importance of variance analysis for cost control in organizations. The study examined the concept of variance analysis, its types, sources, objectives, and significance. The study reported that variance analysis has a significant influence on evaluating individual performance in organizations, on the assignment of responsibilities to individuals, and on assisting management to rely on the principle of management by exception, and recommended, among others, variance analysis...

17. An Approximation of the Minimum-Variance Estimator of Heritability Based on Variance Component Analysis

Grossman, M.; Norton, H W

1981-01-01

An approximate minimum-variance estimate of heritability (h2) is proposed, using the sire and dam components of variance from a hierarchical analysis of variance. The minimum sampling variance is derived for unbalanced data. Optimum structures for the estimation of h2 are given for the balanced case. The degree to which ĥ2 is more precise than the equally weighted estimate ĥ2S+D is a function of the size and structure of the sample used. However, computer simulation reveals that ĥ2 has less d...

18. Power Estimation in Multivariate Analysis of Variance

Jean François Allaire

2007-09-01

Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
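The three-step procedure described above (critical F value, noncentrality parameter, noncentral F tail probability) can be sketched with SciPy. The effect-size-to-noncentrality formula below (lam = N * f**2) is the standard one for the univariate one-way case and is used here only as an illustrative assumption, not the paper's MANOVA-specific formula:

```python
from scipy import stats

def f_test_power(df1, df2, lam, alpha=0.05):
    """Power of an F test: P(F' > F_crit), where F' is noncentral F
    with noncentrality parameter lam."""
    f_crit = stats.f.ppf(1 - alpha, df1, df2)   # step 1: critical value under H0
    return stats.ncf.sf(f_crit, df1, df2, lam)  # step 3: tail probability under H1

# example: 3 groups, 60 subjects total, Cohen's effect size f = 0.25
k, n_total, effect_f = 3, 60, 0.25
lam = n_total * effect_f ** 2                   # step 2: noncentrality parameter
power = f_test_power(k - 1, n_total - k, lam)
```

Power increases monotonically in the noncentrality parameter, which is how sample-size planning proceeds: raise N until the computed power clears the target.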

19. Applications of non-parametric statistics and analysis of variance on sample variances

Myers, R. H.

1981-01-01

Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.

20. Variance analysis. Part II, The use of computers.

Finkler, S A

1991-09-01

This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788

1. DISCO analysis: A nonparametric extension of analysis of variance

RIZZO, MARIA L.; Székely, Gábor J.

2010-01-01

In classical analysis of variance, dispersion is measured by considering squared distances of sample elements from the sample mean. We consider a measure of dispersion for univariate or multivariate response based on all pairwise distances between sample elements, and derive an analogous distance components (DISCO) decomposition for powers of distance in $(0,2]$. The ANOVA F statistic is obtained when the index (exponent) is 2. For each index in $(0,2)$, this decomposition determines a nonpar...

2. Wavelet Variance Analysis of EEG Based on Window Function

ZHENG Yuan-zhuang; YOU Rong-yi

2014-01-01

A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and that the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.

3. Analysis of Variance: What Is Your Statistical Software Actually Doing?

Li, Jian; Lomax, Richard G.

2011-01-01

Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

4. Levine's guide to SPSS for analysis of variance

Braver, Sanford L; Page, Melanie

2003-01-01

A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

5. Estimation of Variance Components in the Mixed-Effects Models: A Comparison Between Analysis of Variance and Spectral Decomposition

Wu, Mi-Xia; Yu, Kai-Fun; Liu, Ai-Yi

2009-01-01

The mixed-effects models with two variance components are often used to analyze longitudinal data. For these models, we compare two approaches to estimating the variance components, the analysis of variance approach and the spectral decomposition approach. We establish a necessary and sufficient condition for the two approaches to yield identical estimates, and some sufficient conditions for the superiority of one approach over the other, under the mean squared error criterion. Applications o...

6. Budget variance analysis using RVUs.

Berlin, M F; Budzynski, M R

1998-01-01

This article details the use of the variance analysis as management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual financials vs. budget financials. Standard cost accounting provides a methodology known as variance analysis to better understand the actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247

7. Statistics review 9: One-way analysis of variance

Bewick, Viv; Cheek, Liz; Ball, Jonathan

2004-01-01

This review introduces one-way analysis of variance, which is a method of testing differences between more than two groups or treatments. Multiple comparison procedures and orthogonal contrasts are described as methods for identifying specific differences between pairs of treatments.
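The one-way decomposition the review describes amounts to splitting the total sum of squares into between-group and within-group parts. A short sketch (the example data are invented) that cross-checks the hand computation against `scipy.stats.f_oneway`:

```python
import numpy as np
from scipy import stats

def one_way_anova(*groups):
    """Classical one-way ANOVA: partition total variation into
    between-group and within-group sums of squares."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = all_data.size - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    p_value = stats.f.sf(f_stat, df_between, df_within)
    return f_stat, p_value

a = [6.1, 5.8, 6.3, 6.0]   # invented measurements for three treatments
b = [7.2, 7.0, 6.8, 7.4]
c = [5.5, 5.9, 5.7, 5.4]
f_stat, p_value = one_way_anova(a, b, c)
f_ref, p_ref = stats.f_oneway(a, b, c)   # should agree with the hand computation
```

A small p value here says only that at least two means differ; the multiple-comparison procedures the review describes are then needed to identify which pairs.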

8. Batch variation between branchial cell cultures: An analysis of variance

Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

2003-01-01

We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed to be...

9. Intuitive Analysis of Variance-- A Formative Assessment Approach

Trumpower, David

2013-01-01

This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)

10. Analysis of Variance in the Modern Design of Experiments

Deloach, Richard

2010-01-01

This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.

11. Analysis of variance of thematic mapping experiment data.

Rosenfield, G.H.

1981-01-01

As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data. - from Author

12. Variance Analysis of Genus Ipomoea based on Morphological Characters

DWI PRIYANTO

2000-07-01

The objective of this research was to determine the variability of morphological characters of the genus Ipomoea, including coefficients of variance and phylogenetic relationships. The genus Ipomoea was identified as consisting of four species, i.e. Ipomoea crassicaulis Rob., Ipomoea aquatica Forsk., Ipomoea reptans Poir. and Ipomoea leari. The four species were collected from around the lake on the campus of Sebelas Maret University, Surakarta. Comparison of species variability was based on the variance coefficients of vegetative and generative morphological characters. The vegetative characters observed were roots, stems and leaves, while the generative characters observed were flowers, fruits, and seeds. Phylogenetic relationships were determined by clustering association coefficients. Variance coefficient analysis of vegetative and generative morphological characters resulted in several groups based on the degree of variability, i.e. low, moderate, high, very high, or none. The phylogenetic relationships showed that Ipomoea aquatica Forsk. and Ipomoea reptans Poir. are more closely related phylogenetically than Ipomoea leari and Ipomoea crassicaulis Rob.

13. A guide to SPSS for analysis of variance

Levine, Gustav

2013-01-01

This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce

14. Analysis of variance tables based on experimental structure.

Brien, C J

1983-03-01

A stepwise procedure for obtaining the experimental structure for a particular experiment is presented together with rules for deriving the analysis-of-variance table from that structure. The procedure involves the division of the factors into groups and is essentially a generalization of the method of Nelder (1965, Proceedings of the Royal Society, Series A 283, 147-162; 1965, Proceedings of the Royal Society, Series A 283, 163-178), to what are termed 'multi-tiered' experiments. The proposed method is illustrated for a wine-tasting experiment. PMID:6871362

15. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

Perlmutter, M.

1972-01-01

The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.

16. Analysis of variance of an underdetermined geodetic displacement problem

Darby, D.

1982-06-01

It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.

17. The use of analysis of variance procedures in biological studies

Williams, B.K.

1987-01-01

The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.

18. Variance estimation in the analysis of microarray data

Wang, Yuedong

2009-04-01

Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
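The errors-in-variables bias the abstract refers to is easy to reproduce: with only three replicates, a naive plug-in estimate of a constant coefficient of variation is biased downward. A small simulation sketch (the model parameters are invented, and this shows only the naive estimator, not the corrected estimators proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_reps, cv_true = 2000, 3, 0.2

# constant coefficient of variation model: sd_g = cv * mu_g for every gene g
mu = rng.uniform(50, 500, n_genes)
data = rng.normal(mu[:, None], cv_true * mu[:, None], (n_genes, n_reps))

sample_mean = data.mean(axis=1)
sample_sd = data.std(axis=1, ddof=1)

# naive plug-in: use the noisy sample mean and the n=3 sample SD directly;
# with so few replicates the sample SD runs low, so this underestimates cv_true
cv_naive = np.median(sample_sd / sample_mean)
```

With three replicates the median sample SD sits well below the true SD (the median of a chi distribution with 2 degrees of freedom is below its scale), so `cv_naive` lands noticeably under 0.2, which is exactly the kind of bias the paper's estimators are designed to remove.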

19. Confidence Intervals for the Between Group Variance in the Unbalanced One-Way Random Effects Model of Analysis of Variance

Hartung, Joachim; Knapp, Guido

2000-01-01

A confidence interval for the between-group variance is proposed which is deduced from Wald's exact confidence interval for the ratio of the two variance components in the one-way random effects model and from the exact confidence interval for the error variance or, respectively, an unbiased estimator of the error variance. In a simulation study, the confidence coefficients for these two intervals are compared with the confidence coefficients of two other commonly used confidence intervals. There, the confiden...

20. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

Luh, Wei-Ming; Guo, Jiin-Huarng

2011-01-01

Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

1. Estimation of measurement variances

The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

2. Variance analysis. Part I, Extending flexible budget variance analysis to acuity.

Finkler, S A

1991-01-01

The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
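The flexible-budget technique the author reviews splits total spending variance into volume, quantity (which can carry the acuity adjustment), and price parts that telescope to the total. A minimal sketch with invented hospital staffing numbers:

```python
def flexible_budget_variances(vol_b, qty_b, price_b, vol_a, qty_a, price_a):
    """Decompose total spending variance into volume, quantity, and price parts.
    vol = units of output (e.g. patient days), qty = input per unit of output
    (e.g. nursing hours per patient day), price = cost per input unit.
    The three parts sum exactly to the total variance (they telescope)."""
    volume_var = (vol_a - vol_b) * qty_b * price_b     # more/less output than planned
    quantity_var = (qty_a - qty_b) * vol_a * price_b   # more/less input per unit
    price_var = (price_a - price_b) * qty_a * vol_a    # paid more/less per input
    total = vol_a * qty_a * price_a - vol_b * qty_b * price_b
    return volume_var, quantity_var, price_var, total

# budget: 1000 patient days at 4.0 h/day and $30/h
# actual: 1100 patient days at 4.2 h/day and $32/h
vol_v, qty_v, price_v, total = flexible_budget_variances(
    1000, 4.0, 30.0, 1100, 4.2, 32.0)
```

Here the unfavorable total of $27,840 splits into $12,000 from higher volume, $6,600 from more hours per patient day, and $9,240 from a higher hourly rate, so a manager can see at a glance which driver to investigate.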

3. Gravity interpretation of dipping faults using the variance analysis method

A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)

4. Variance Analysis of Genus Ipomoea based on Morphological Characters

DWI PRIYANTO; SURATMAN; AHMAD DWI SETYAWAN

2000-01-01

The objective of this research was to determine the variability of morphological characters of the genus Ipomoea, including coefficients of variance and phylogenetic relationships. The genus Ipomoea was identified as consisting of four species, i.e. Ipomoea crassicaulis Rob., Ipomoea aquatica Forsk., Ipomoea reptans Poir. and Ipomoea leari. The four species were collected from around the lake on the campus of Sebelas Maret University, Surakarta. Comparison of species variability was based on the varia...

5. Things that make us different: analysis of variance in the use of time

Jorge González-Chapela

2010-01-01

The bounded character of time-use data poses a challenge to the analysis of variance based on classical linear models. This paper investigates a computationally simple variance decomposition technique suitable for these data. As a by-product of the analysis, a measure of fit for systems of time-demand equations that possesses several useful properties is proposed.

6. Analysis and application of minimum variance discrete linear system identification

Kotob, S.; Kaufman, H.

1977-01-01

An on-line minimum variance (MV) parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise (AMN). The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

7. Analysis and application of minimum variance discrete time system identification

Kotob, S.; Kaufman, H.

1976-01-01

An on-line minimum variance parameter identifier was developed which embodies both accuracy and computational efficiency. The new formulation resulted in a linear estimation problem with both additive and multiplicative noise. The resulting filter is shown to utilize both the covariance of the parameter vector itself and the covariance of the error in identification. It is proven that the identification filter is mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

8. Spectral variance of aeroacoustic data

Rao, K. V.; Preisser, J. S.

1981-01-01

An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.

9. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

Attali, Yigal

2010-01-01

Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

10. Wavelet multiscale analysis of a power system load variance

Avdakovic, Samir; Nuhanovic, Amir; Kusljugic, Mirza

2013-01-01

The wavelet transform (WT) has been a very attractive mathematical area for a little more than 15 years of research into electrical engineering applications. This is mainly due to its advantages over other signal processing and analysis techniques, reflected in its time-frequency analysis, and so it has an important application in the processing and analysis of time series. In this paper, as an example, the analysis of the hourly load of a real power system over the past few yea...

11. Structure analysis of interstellar clouds: II. Applying the Delta-variance method to interstellar turbulence

Ossenkopf, V; Stutzki, J

2008-01-01

The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. In paper I we proposed essential improvements to the Delta-variance analysis. In this paper we apply the improved Delta-variance analysis to i) a hydrodynamic turbulence simulation with prominent density and velocity structures, ii) an observed intensity map of rho Oph with irregular boundaries and variable uncertainties of the different data points, and iii) a map of the turbulent velocity structure in the Polaris Flare affected by the dependence of the centroid velocity determination on intensity. The tests confirm the extended capabilities of the improved Delta-variance analysis. Prominent spatial scales were accurately identified and artifacts from a variable reliability of the data were removed. The analysis of the hydrodynamic simulations showed that the injection of a turbulent velocity structure produces the most prominent density structures on a sca...

12. Admissibility of Invariant Tests in the General Multivariate Analysis of Variance Problem

Marden, John I.

1983-01-01

Necessary and sufficient conditions for an invariant test to be admissible among invariant tests in the general multivariate analysis of variance problem are presented. It is shown that in many cases the popular tests based on the likelihood ratio matrix are inadmissible. Other tests are shown admissible. Numerical work suggests that the inadmissibility of the likelihood ratio test is not serious. The results are given for the multivariate analysis of variance problem as a special case.

13. Variance heterogeneity analysis for detection of potentially interacting genetic loci: method and its limitations

van Duijn Cornelia

2010-10-01

Background: In the presence of an interaction between a genotype and a certain factor in the determination of a trait's value, the trait's variance is expected to be increased in the group of subjects having this genotype. Thus, a test of heterogeneity of variances can be used to screen for potentially interacting single-nucleotide polymorphisms (SNPs). In this work, we evaluated the statistical properties of variance heterogeneity analysis with respect to the detection of potentially interacting SNPs in the case when the interaction variable is unknown. Results: Through simulations, we investigated type I error for Bartlett's test, Bartlett's test with prior rank transformation of the trait to normality, and Levene's test under different genetic models. Additionally, we derived an analytical expression for power estimation. We showed that Bartlett's test has acceptable type I error in the case of a trait following a normal distribution, whereas Levene's test kept the nominal type I error under all scenarios investigated. For the power of the variance homogeneity test, we showed (as opposed to the power of the direct test, which uses information about the known interacting factor) that, given the same interaction effect, the power can vary widely depending on the non-estimable direct effect of the unobserved interacting variable. Thus, for a given interaction effect, only very wide limits on the power of the variance homogeneity test can be estimated. We also applied Levene's approach to test genome-wide homogeneity of variances of C-reactive protein in the Rotterdam Study population (n = 5959). In this analysis, we replicated previous results of Pare and colleagues (2010) for the SNP rs12753193 (n = 21,799). Conclusions: Screening for differences in variances among genotypes of a SNP is a promising approach, as a number of biologically interesting models may lead to heterogeneity of variances. However, it should be kept in mind that the absence of variance heterogeneity for
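Levene's statistic mentioned in this record is simple enough to compute by hand: it is a one-way ANOVA F applied to absolute deviations from group means. A minimal pure-Python sketch (using group-mean centering; the test is also commonly run with median centering, and the groups below are invented):

```python
from statistics import mean

def levene_W(groups):
    """Levene's statistic: a one-way ANOVA F computed on the absolute
    deviations z_ij = |y_ij - mean_i| from each group's mean."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    z = [[abs(y - mean(g)) for y in g] for g in groups]
    zbar_i = [mean(zi) for zi in z]
    zbar = mean([v for zi in z for v in zi])
    between = sum(len(zi) * (zb - zbar) ** 2 for zi, zb in zip(z, zbar_i))
    within = sum((v - zb) ** 2 for zi, zb in zip(z, zbar_i) for v in zi)
    return (N - k) / (k - 1) * between / within

low_var = [10.1, 9.9, 10.0, 10.2, 9.8]
high_var = [5.0, 15.0, 8.0, 14.0, 6.0]
print(round(levene_W([low_var, high_var]), 3))  # clearly unequal spreads
```

A large W (compared to an F(k-1, N-k) reference distribution) signals heterogeneous variances, the screening signal the abstract exploits.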

14. An application of the analysis of variance of measures repeated in an experiment with heavy metals

A revision of some basic concepts related to the analysis of variance of repeated measures is presented within an ecological context. Topics such as the types of experiments in which the technique is applicable, the hypotheses of interest, and its preference over other traditional techniques such as regression and conventional analysis of variance are discussed. As an example, the technique was successfully applied to an experiment carried out at Cienaga Grande de Santa Marta, Colombia, in which the concentration of cadmium (μg/g) in leaves of the black mangrove Avicennia germinans was measured at several monitoring stations and over several sampling intervals representing seasons

15. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

2011-01-01

of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in...... the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an...... analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be...

16. Analysis of Variance with Summary Statistics in Microsoft® Excel®

Larson, David A.; Hsu, Ko-Cheng

2010-01-01

Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
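The summary-statistics computation this record refers to is straightforward: the between-group and within-group sums of squares can be rebuilt from the group sizes, means, and standard deviations alone, with no raw data. A sketch (not the Excel workbook the authors describe; the three groups are invented):

```python
def anova_from_summary(ns, means, sds):
    """One-way ANOVA F statistic from per-group summary statistics
    (sizes, means, sample standard deviations)."""
    k = len(ns)
    N = sum(ns)
    grand = sum(n * m for n, m in zip(ns, means)) / N
    ss_between = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)
    return ms_between / ms_within

# Three groups of 3 observations with means 2, 3, 4 and sd 1 each.
print(anova_from_summary([3, 3, 3], [2.0, 3.0, 4.0], [1.0, 1.0, 1.0]))  # prints 3.0
```

Here SSB = 6 on 2 df and SSW = 6 on 6 df, so F = 3/1 = 3.0, which can be checked against any ANOVA table.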

17. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

Finch, W. Holmes

2016-01-01

Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

18. Gender variance on campus : a critical analysis of transgender voices

Mintz, Lee M.

2011-01-01

Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn, 2005; Beemyn, Curtis, Davis, & Tubbs, 2005). This study examined the perceptions of transgender inclusion, ways in which leadership structures or entiti...

19. Variance Analysis of Unevenly Spaced Time Series Data

Hackman, Christine; Parker, Thomas E.

1996-01-01

We have investigated the effect of uneven data spacing on the computation of σ_x(τ). Evenly spaced simulated data sets were generated for noise processes ranging from white phase modulation (PM) to random walk frequency modulation (FM). σ_x(τ) was then calculated for each noise type. Data were subsequently removed from each simulated data set using typical two-way satellite time and frequency transfer (TWSTFT) data patterns to create two unevenly spaced sets with average intervals of 2.8 and 3.6 days. σ_x(τ) was then calculated for each sparse data set using two different approaches. First, the missing data points were replaced by linear interpolation and σ_x(τ) was calculated from this now full data set. The second approach ignored the fact that the data were unevenly spaced and calculated σ_x(τ) as if the data were equally spaced with average spacing of 2.8 or 3.6 days. Both approaches have advantages and disadvantages, and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets.
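The first gap-handling approach above (replace missing points by linear interpolation, then treat the series as evenly spaced) can be sketched as follows; the time stamps and step size are illustrative, not TWSTFT data:

```python
def resample_linear(t, x, step):
    """Resample an unevenly spaced series (t, x) onto an even grid by
    linear interpolation between the bracketing observed points."""
    out_t, out_x = [], []
    j = 0
    ti = t[0]
    while ti <= t[-1] + 1e-12:
        while j < len(t) - 2 and t[j + 1] < ti:
            j += 1                      # advance to the bracketing segment
        frac = (ti - t[j]) / (t[j + 1] - t[j])
        out_t.append(ti)
        out_x.append(x[j] + frac * (x[j + 1] - x[j]))
        ti += step
    return out_t, out_x

# Illustrative gap pattern: points at t = 0, 1, 3, 6 filled to unit spacing.
t_even, x_even = resample_linear([0.0, 1.0, 3.0, 6.0], [0.0, 1.0, 3.0, 6.0], 1.0)
```

After regularization, any even-spacing variance statistic can be applied directly, at the cost of the interpolation bias the abstract warns about.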

1. Analysis of variance and functional measurement a practical guide

Weiss, David J

2006-01-01

Chapter I. Introduction; Chapter II. One-way ANOVA; Chapter III. Using the Computer; Chapter IV. Factorial Structure; Chapter V. Two-way ANOVA; Chapter VI. Multi-factor Designs; Chapter VII. Error Purifying Designs; Chapter VIII. Specific Comparisons; Chapter IX. Measurement Issues; Chapter X. Strength of Effect; Chapter XI. Nested Designs; Chapter XII. Missing Data; Chapter XIII. Confounded Designs; Chapter XIV. Introduction to Functional Measurement; Terms from Introductory Statistics; References; Subject Index; Name Index

2. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

Schwartz, Todd A.

2013-01-01

I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

3. A Demonstration of the Analysis of Variance Using Physical Movement and Space

Owen, William J.; Siakaluk, Paul D.

2011-01-01

Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…

4. Smoothing spline analysis of variance approach for global sensitivity analysis of computer codes

The paper investigates a nonparametric regression method based on the smoothing spline analysis of variance (ANOVA) approach to address the problem of global sensitivity analysis (GSA) of complex and computationally demanding computer codes. The two-step algorithm of this method involves an estimation procedure and a variable selection step. The latter can become computationally demanding when dealing with high-dimensional problems. Thus, we proposed a new algorithm based on Landweber iterations. Using the fact that the considered regression method is based on the ANOVA decomposition, we introduced a new direct method for computing sensitivity indices. Numerical tests performed on several analytical examples and on an application from petroleum reservoir engineering showed that the method gives competitive results compared to a more standard Gaussian process approach

5. Application of the analysis of variance for the determination of reinforcement structure homogeneity in MMC

K. Gawdzińska; S. Berczyński; M. Pelczar; J. Grabian

2010-01-01

These authors propose a new definition of homogeneity verified by variance analysis. The analysis covered quantitative variables describing the homogeneity of the reinforcement structure, i.e. surface areas, reinforcement phase diameter and the percentage of reinforcement area contained in a circle within a given region. The examined composite material, consisting of silicon carbide reinforcement particles in an AlSi11 alloy matrix, was made by mechanical mixing.

6. Variance of volume estimators

Janáček, Jiří

Jena: Friedrich-Schiller-Universität, 2007, p. 23. [Workshop on Stochastic Geometry, Stereology and Image Analysis /14./, 23.09.2007-28.09.2007, Neudietendorf]. R&D Projects: GA AV ČR(CZ) IAA100110502. Institutional research plan: CEZ:AV0Z50110509. Keywords: spr2; stereology; volume; variance. Subject RIV: EA - Cell Biology

7. Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)

Steyn, H. S., Jr.; Ellis, S. M.

2009-01-01

When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…

8. A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance

Liu, Xiaofeng Steven

2010-01-01

The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidian space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus "F"…
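The dot-product formulation and the additivity property in this record can be checked numerically. With hypothetical means, equal group sizes, and unit error variance, the noncentralities of a complete set of orthogonal contrasts add up to the omnibus noncentrality:

```python
def contrast_lambda(c, mu, n, sigma2):
    """Noncentrality of a contrast test: the squared dot product of the
    contrast with the group means, divided by sigma^2 * sum(c_i^2 / n_i)."""
    num = sum(ci * mi for ci, mi in zip(c, mu)) ** 2
    den = sigma2 * sum(ci ** 2 / ni for ci, ni in zip(c, n))
    return num / den

# Hypothetical setup: three groups, means 1, 2, 3, n = 10 each, sigma^2 = 1.
mu, n, sigma2 = [1.0, 2.0, 3.0], [10, 10, 10], 1.0
grand = sum(mu) / len(mu)
omnibus = sum(ni * (mi - grand) ** 2 for ni, mi in zip(n, mu)) / sigma2

# An orthogonal pair of contrasts: linear and quadratic trend.
lam_linear = contrast_lambda([1, 0, -1], mu, n, sigma2)
lam_quad = contrast_lambda([1, -2, 1], mu, n, sigma2)
```

With these means the quadratic contrast is null, so the linear contrast carries the entire omnibus noncentrality of 20.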

9. A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists

Warne, Russell T.

2014-01-01

Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not…

10. Publishing Nutrition Research: A Review of Multivariate Techniques Part 2: Analysis of Variance.

Harris, Jeffrey E.; Sheean, Patricia M.; Philip M. Gleason; Barbara Bruemmer; Carol Boushey

2012-01-01

This article is the eighth in a series exploring the importance of research design, statistical analysis, and epidemiology in nutrition and dietetics research. It is the second article in that series focused on multivariate statistical analytical techniques. This review examines the statistical technique of analysis of variance (ANOVA), from its simplest form to multivariate applications. It addresses all these applications and includes hypothetical and real examples from the field of dietetics.

11. Toward an objective evaluation of teacher performance: The use of variance partitioning analysis, VPA.

Eduardo R. Alicias

2005-05-01

Evaluation of teacher performance is usually done with the use of ratings made by students, peers, and principals or supervisors, and at times, self-ratings made by the teachers themselves. The trouble with this practice is that it is obviously subjective, and vulnerable to what Glass and Martinez call the "politics of teacher evaluation," as well as to professional incapacities of the raters. The value-added analysis (VAA) model is one attempt to make evaluation objective and evidence-based. However, the VAA model, especially that of the Tennessee Value Added Assessment System (TVAAS) developed by William Sanders, appears flawed essentially because it posits the untenable assumption that the gain score of students (value added) is attributable only to the teacher(s), ignoring other significant explanators of student achievement like IQ and socio-economic status. Further, the use of the gain score (value added) as a dependent variable appears hobbled with the validity threat called "statistical regression," as well as the problem of isolating the conflated effects of two or more teachers. The proposed variance partitioning analysis (VPA) model seeks to partition the total variance of the dependent variable (post-test student achievement) into various portions representing: first, the effects attributable to the set of teacher factors; second, effects attributable to the set of control variables, the most important of which are the IQ of the student, his pretest score on that particular dependent variable, and some measures of his socio-economic status; and third, the unexplained effects/variance. It is not difficult to see that when the second and third quanta of variance are partitioned out of the total variance of the dependent variable, what remains is that attributable to the teacher. Two measures of teacher effect are hereby proposed: the proportional teacher effect and the direct teacher effect.
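The partitioning idea can be illustrated with incremental R-squared: fit the control variables alone, then add the teacher term, and read the teacher's share off the gain in explained variance. The data below are made up for illustration, and the OLS solver is a bare normal-equations sketch, not the full VPA procedure proposed in the record:

```python
def r_squared(X, y):
    """R^2 of an OLS fit with intercept, via normal equations and
    Gaussian elimination (adequate for a handful of predictors)."""
    rows = [[1.0] + list(x) for x in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):                      # forward elimination
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            for k in range(i, p):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    beta = [0.0] * p                        # back substitution
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][k] * beta[k] for k in range(i + 1, p))) / A[i][i]
    yhat = [sum(bk * rk for bk, rk in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Made-up data: post-test = pretest + a 5-point bump for teacher B.
pretest = [50, 50, 60, 60, 70, 70, 80, 80]
teacher = [0, 1, 0, 1, 0, 1, 0, 1]
post = [p + 5 * t for p, t in zip(pretest, teacher)]

r2_controls = r_squared([[p] for p in pretest], post)
r2_full = r_squared([[p, t] for p, t in zip(pretest, teacher)], post)
teacher_share = r2_full - r2_controls       # variance attributable to teacher
```

In this noise-free toy, the full model explains everything (R^2 = 1) and the teacher term accounts for the 1/21 of the variance the pretest cannot.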

12. Analysis of variance of quantitative parameters bidders offers for public procurement in the chosen sector

Gavorníková, Katarína

2012-01-01

The goal of this work was to find out which determinants influence the variance of the price bids offered by bidders for public procurement, and in what direction, as well as the bidders' behavior during the selection process. This work focused on public procurement for construction works announced by a municipal procurement authority. Regression analysis confirmed the variables estimated price and the ratio of final to estimated price of public procurement as the strongest influences. Increasing estimated price raises ...

13. Application of Rejection Sampling based methodology to variance based parametric sensitivity analysis

For estimating the effect of an uncertain distribution parameter on the variance of the failure probability function (FPF), the map from distribution parameters to the FPF is built and a highly efficient approximation form is extended to solve the parametric variance-based sensitivity index. The parametric variance-based sensitivity index can first be expressed in terms of the moments of the FPF, and the FPF is approximated by a product of univariate functions of the distribution parameters, from which the moments of the FPF can be easily evaluated by Gaussian integration using the values of the FPF at the Gaussian nodes. Thus the primary task of evaluating the parametric variance-based sensitivity is transformed into calculating the FPF at the Gaussian nodes of the univariate functions, for which Monte Carlo (MC), Extended Monte Carlo (EMC) and Rejection Sampling (RS) are employed and compared here. Only one set of samples of the inputs is needed in either EMC or RS. Several numerical and engineering examples are presented to verify the accuracy and efficiency of the proposed approximate methods. Additionally, the results also reveal the virtue of RS, which can be more accurate and less restricted than EMC. - Highlights: • An efficient approximate form is applied for parametric sensitivity analysis. • Gaussian integration techniques are used to compute the moments of the FPF. • Extended Monte Carlo is used to compute the FPF with only one set of samples. • Rejection Sampling is applied to estimate the FPF by reusing the original samples

14. Toward a more robust variance-based global sensitivity analysis of model outputs

Tong, C

2007-10-15

Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
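First-order indices of the kind discussed here can be estimated with a basic Monte Carlo "pick-freeze" scheme. This is a generic sketch of the Sobol'-style estimator, not Tong's adaptive replicated-LHS procedure, and the additive test model and sample size are arbitrary:

```python
import random

def sobol_first_order(f, d, n, seed=0):
    """Pick-freeze Monte Carlo estimate of Sobol' first-order indices:
    S_i = (E[f(A) f(B_i)] - E[f]^2) / Var[f], where sample B_i takes
    column i from A and the remaining columns from an independent B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(a) for a in A]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    S = []
    for i in range(d):
        yABi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        S.append((sum(ya * yb for ya, yb in zip(yA, yABi)) / n - mu ** 2) / var)
    return S

# Additive test model Y = X1 + 2*X2 with X_i ~ U(0,1):
# analytically S1 = 0.2 and S2 = 0.8.
S = sobol_first_order(lambda x: x[0] + 2 * x[1], d=2, n=100000)
```

For a purely additive model the first-order indices sum to one; interaction effects would show up as a shortfall in that sum.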

15. Structure analysis of interstellar clouds: I. Improving the Delta-variance method

Ossenkopf, V; Stutzki, J

2008-01-01

The Delta-variance analysis has proven to be an efficient and accurate method of characterising the power spectrum of interstellar turbulence. The implementation presently in use, however, has several shortcomings. We propose and test an improved Delta-variance algorithm for two-dimensional data sets, which is applicable to maps with variable error bars and which can be quickly computed in Fourier space. We calibrate the spatial resolution of the Delta-variance spectra. The new Delta-variance algorithm is based on an appropriate filtering of the data in Fourier space. It allows us to distinguish the influence of variable noise from the actual small-scale structure in the maps and it helps in dealing with the boundary problem in non-periodic and/or irregularly bounded maps. We try several wavelets and test their spatial sensitivity using artificial maps with well-known structure sizes. It turns out that different wavelets show different strengths with respect to detecting characteristic structures and spectr...
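The idea of filtering a map at a chosen scale can be illustrated with a crude real-space 1-D analogue (a "core" window minus its flanking "annulus"); the actual method in this record uses a properly normalized Fourier-space filter on 2-D maps with error weighting, so this is only a conceptual sketch on invented data:

```python
def delta_variance(data, L):
    """Real-space 1-D Delta-variance sketch: variance, over all positions,
    of the difference between the mean of a length-L 'core' window and
    the mean of its two length-L flanking 'annulus' windows."""
    diffs = []
    for i in range(L, len(data) - 2 * L + 1):
        core = sum(data[i:i + L]) / L
        ring = (sum(data[i - L:i]) + sum(data[i + L:i + 2 * L])) / (2 * L)
        diffs.append(core - ring)
    m = sum(diffs) / len(diffs)
    return sum((d - m) ** 2 for d in diffs) / len(diffs)

# A 'cloud' of size 10 shows up much more strongly at the matching lag.
blob = [0.0] * 40 + [1.0] * 10 + [0.0] * 40
print(delta_variance(blob, 10) > delta_variance(blob, 1))  # prints True
```

Scanning L and plotting the result gives a spectrum whose peak marks the characteristic structure size, which is the diagnostic the Delta-variance method formalizes.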

16. The Efficiency of Split Panel Designs in an Analysis of Variance Model

Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun

2016-01-01

We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of the best linear unbiased estimators of linear parameter combinations. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

18. Spectral Ambiguity of Allan Variance

Greenhall, C. A.

1996-01-01

We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
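For context, the Allan variance at averaging factor m is half the mean squared difference between adjacent m-sample frequency averages. A minimal textbook estimator (a generic sketch on simulated white noise, unrelated to the paper's ambiguity analysis) is:

```python
import random

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging factor m: half the mean squared difference between
    adjacent m-sample averages."""
    k = len(y) // m
    avg = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
    return sum((b - a) ** 2 for a, b in zip(avg, avg[1:])) / (2 * (k - 1))

# For white frequency noise of variance 1, AVAR(m) is about 1/m.
rng = random.Random(42)
y = [rng.gauss(0.0, 1.0) for _ in range(100000)]
```

Because the statistic depends on the data only through finite differences, different spectra can produce identical Allan-variance curves, which is exactly the ambiguity the record describes.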

19. Contrasting regional architectures of schizophrenia and other complex diseases using fast variance components analysis

Loh, Po-Ru; Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; Grove, Jakob; O’Donovan, Michael C; Neale, Benjamin M; Patterson, Nick; Price, Alkes L

2015-01-01

Heritability analyses of genome-wide association study (GWAS) cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here we analyze the genetic architectures of schizophrenia in 49,806 samples from the PGC a...... liabilities. To accomplish these analyses, we developed a fast algorithm for multicomponent, multi-trait variance-components analysis that overcomes prior computational barriers that made such analyses intractable at this scale....

20. Structure analysis of simulated molecular clouds with the Δ-variance

Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

2015-07-01

We employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm-3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth-size relation ranging from 0.4 to 0.7 for the total and H2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H2 density by a factor of 1.5-3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of ˜100 cm-3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

1. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis

Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of ...

2. Comparison of performance between rescaled range analysis and rescaled variance analysis in detecting abrupt dynamic change

何文平; 刘群群; 姜允迪; 卢莹

2015-01-01

In the present paper, a comparison of the performance between moving cutting data-rescaled range analysis (MC-R/S) and moving cutting data-rescaled variance analysis (MC-V/S) is made. The results clearly indicate that the operating efficiency of the MC-R/S algorithm is higher than that of the MC-V/S algorithm. In our numerical test, the computer time consumed by MC-V/S is approximately 25 times that consumed by MC-R/S for an identical window size in artificial data. Except for the difference in operating efficiency, there are no significant differences in performance between MC-R/S and MC-V/S for abrupt dynamic change detection. MC-R/S and MC-V/S both display some degree of anti-noise ability. However, it is important to consider the influence of strong noise on the detection results of MC-R/S and MC-V/S in practical applications.

3. Applying the Generalized Waring model for investigating sources of variance in motor vehicle crash analysis.

Peng, Yichuan; Lord, Dominique; Zou, Yajie

2014-12-01

As one of the major analysis methods, statistical models play an important role in traffic safety analysis. They can be used for a wide variety of purposes, including establishing relationships between variables and understanding the characteristics of a system. The purpose of this paper is to document a new type of model that can help with the latter. This model is based on the Generalized Waring (GW) distribution. The GW model yields more information about the sources of the variance observed in datasets than other traditional models, such as the negative binomial (NB) model. In this regard, the GW model can separate the observed variability into three parts: (1) the randomness, which explains the model's uncertainty; (2) the proneness, which refers to the internal differences between entities or observations; and (3) the liability, which is defined as the variance caused by other external factors that are difficult to identify and have not been included as explanatory variables in the model. The study analyses were accomplished using two observed datasets to explore potential sources of variation. The results show that the GW model can provide meaningful information about sources of variance in crash data and also performs better than the NB model. PMID:25173723

4. THE EFFECTS OF DISAGGREGATED SAVINGS ON ECONOMIC GROWTH IN MALAYSIA - GENERALISED VARIANCE DECOMPOSITION ANALYSIS

Chor Foon Tang; Hooi Hooi Lean

2009-01-01

This study examines how much of the variance in economic growth can be explained by various categories of domestic and foreign savings in Malaysia. The bounds testing approach to cointegration and the generalised forecast error variance decomposition technique were used to achieve the objective of this study. The cointegration test results demonstrate that the relationship between economic growth and savings in Malaysia is stable and coalescing in the long run. The variance decomposition find...

5. Methods and applications of linear models regression and the analysis of variance

Hocking, Ronald R

2013-01-01

Praise for the Second Edition"An essential desktop reference book . . . it should definitely be on your bookshelf." -Technometrics A thoroughly updated book, Methods and Applications of Linear Models: Regression and the Analysis of Variance, Third Edition features innovative approaches to understanding and working with models and theory of linear regression. The Third Edition provides readers with the necessary theoretical concepts, which are presented using intuitive ideas rather than complicated proofs, to describe the inference that is appropriate for the methods being discussed. The book

6. On Mean-Variance Analysis

Yang Li; Pirvu, Traian A

2011-01-01

This paper considers the mean variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.
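
The reduction to a quadratic program can be illustrated with the simplest textbook case. The sketch below is not the paper's delta-gamma method; it is the closed-form minimum-variance solution w ∝ Σ⁻¹1 for two assets, with illustrative covariance numbers.

```python
# Closed-form minimum-variance portfolio for two assets: w proportional to inv(S) @ 1.
# A generic textbook sketch, NOT the paper's delta-gamma method; the
# covariance entries below are illustrative.
def min_variance_weights_2(s11, s22, s12):
    # Apply the inverse of the 2x2 covariance matrix to the ones vector,
    # then normalize so the weights sum to one.
    det = s11 * s22 - s12 * s12
    a = (s22 - s12) / det
    b = (s11 - s12) / det
    return a / (a + b), b / (a + b)

w1, w2 = min_variance_weights_2(0.04, 0.09, 0.01)  # w1 ≈ 0.727, w2 ≈ 0.273
port_var = w1**2 * 0.04 + w2**2 * 0.09 + 2 * w1 * w2 * 0.01
```

On these numbers the resulting portfolio variance is about 0.032, below either asset's own variance, which is the diversification effect the quadratic program exploits.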

7. Structure analysis of simulated molecular clouds with the Delta-variance

Bertram, Erik; Glover, Simon C O

2015-01-01

We employ the Delta-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n_0 = 30, 100 and 300 cm^{-3} that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Delta-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 -> 0) lines. The spectral slopes of the Delta-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth-size relation ranging from 0.4 to 0....

8. Mean-Variance-Instability Portfolio Analysis: A Case of Taiwan's Stock Market

Shawin Lee; Kuo-Ping Chang

1995-01-01

This paper applies Talpaz, Harpaz, and Penson's (THP) (Talpaz, H., A. Harpaz, J. B. Penson, Jr. 1983. Risk and spectral instability in portfolio analysis. Eur. J. Oper. Res. 14 262--269.) mean-variance-instability portfolio selection model to eight selected Taiwan stocks during 1980--89 to demonstrate how instability preference affects the traditional mean-variance frontier. In contrast to THP's finding, the empirical results show that Taiwan's high-frequency stocks have high, not low, varian...

9. ELLIPTICAL SYMMETRY, EXPECTED UTILITY, AND MEAN-VARIANCE ANALYSIS

Carl H. NELSON; Ndjeunga, Jupiter

1997-01-01

Mean-variance analysis in the form of risk programming has a long, productive history in agricultural economics research. And risk programming continues to be used despite well known theoretical results that choices based on mean-variance analysis are not consistent with choices based on expected utility maximization. This paper demonstrates that the multivariate distribution of returns used in risk programming must be elliptically symmetric in order for mean-variance analysis to be consisten...

10. Analysis of variance on thickness and electrical conductivity measurements of carbon nanotube thin films

Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard

2016-09-01

One of the major challenges in controlling the transfer of the electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify the variations in bulk properties when the nanotubes are used as the reinforcement material. In this study, we conducted one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing data collected from both experienced and inexperienced operators, we identified operating details that users might overlook and that resulted in variations, since conductivity measurements of CNT thin films are very sensitive to thickness measurements. In addition, we demonstrated how measurement issues damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity results. Based on this study, we propose a faster, more reliable approach to measuring the thickness of CNT thin films that makes these measurement processes less dependent on operator skill.
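
The one-way ANOVA used here reduces to comparing between-group and within-group variability through an F ratio. A minimal sketch, with made-up thickness readings standing in for the study's measurements:

```python
# Hypothetical thickness readings (micrometres) from three operators;
# the values are illustrative, not taken from the study.
groups = {
    "op_A": [10.2, 10.4, 10.1, 10.5],
    "op_B": [11.0, 10.8, 11.2, 10.9],
    "op_C": [10.3, 10.6, 10.2, 10.4],
}

def one_way_anova(groups):
    data = [x for g in groups.values() for x in g]
    grand_mean = sum(data) / len(data)
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups.values())
    # Within-group sum of squares: spread of observations around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups.values())
    df_between = len(groups) - 1
    df_within = len(data) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

F, df1, df2 = one_way_anova(groups)  # F ≈ 17.9 on these illustrative numbers
```

A large F relative to the F(df1, df2) reference distribution indicates that at least one operator's mean differs, which is the operator-effect question the study asks.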

11. Visualization Method for Finding Critical Care Factors in Variance Analysis

YUI, Shuntaro; BITO, Yoshitaka; OBARA, Kiyohiro; KAMIYAMA, Takuya; SETO, Kumiko; Ban, Hideyuki; HASHIZUME, Akihide; HAGA, Masashi; Oka, Yuji

2006-01-01

We present a novel visualization method for finding care factors in variance analysis. The analysis has two stages: the first enables users to extract a significant variance, and the second enables users to find the critical care factors behind that variance. The analysis was validated using synthetically created inpatient care processes. It was found that the method is efficient in improving clinical pathways.

12. Modeling the Variance of Variance Through a Constant Elasticity of Variance Generalized Autoregressive Conditional Heteroskedasticity Model

Saedi, Mehdi; Wolk, Jared

2012-01-01

This paper compares a standard GARCH model with a Constant Elasticity of Variance GARCH model across three major currency pairs and the S&P 500 index. We discuss the advantages and disadvantages of using a more sophisticated model designed to estimate the variance of variance instead of assuming it to be a linear function of the conditional variance. The current stochastic volatility and GARCH analogues rest upon this linear assumption. We are able to confirm through empirical estimation ...

13. [Discussion of errors and measuring strategies in morphometry using analysis of variance].

Rother, P; Jahn, W; Fitzl, G; Wallmann, T; Walter, U

1986-01-01

Statistical techniques known as the analysis of variance make it possible for the morphologist to plan work in such a way as to obtain quantitative data with the greatest possible economy of effort. This paper explains how to decide how many measurements to make per micrograph, how many micrographs to take per tissue block or organ, and how many organs or individuals are necessary to achieve sufficient precision in the results. The examples are taken from measurements of the volume densities of mitochondria in heart muscle cells and from cell counting in lymph nodes. Finally, we show how to determine sample sizes when the aim is to demonstrate significant differences between mean values. PMID:3569811

14. Analysis of open-loop conical scan pointing error and variance estimators

Alvarez, L. S.

1993-01-01

General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.

15. New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces

Sidick, Erkin

2010-01-01

Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).

16. Mean Variance Optimization of Non-Linear Systems and Worst-case Analysis

Parpas, Panos; Rustem, Berc; Wieland, Volker; Zakovic, Stan

2006-01-01

In this paper, we consider expected value, variance and worst-case optimization of nonlinear models. We present algorithms for computing optimal expected values, and variance, based on iterative Taylor expansions. We establish convergence and consider the relative merits of policies based on expected value optimization and worst-case robustness. The latter is a minimax strategy and ensures optimal cover in view of the worst-case scenario(s), while the former is optimal expected performance in...

17. On spectral methods for variance based sensitivity analysis

Alexanderian, Alen

2013-01-01

Consider a mathematical model with a finite number of random parameters. Variance based sensitivity analysis provides a framework to characterize the contribution of the individual parameters to the total variance of the model response. We consider the spectral methods for variance based sensitivity analysis which utilize representations of square integrable random variables in a generalized polynomial chaos basis. Taking a measure theoretic point of view, we provide a rigorous and at the sam...
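
Independent of the spectral machinery, first-order variance-based (Sobol) indices can also be estimated by plain Monte Carlo. A sketch using the standard pick-freeze estimator on a toy model with known analytic indices (S1 = 0.2, S2 = 0.8); the model, sample size and seed are illustrative assumptions:

```python
import random

# Pick-freeze Monte Carlo estimate of first-order Sobol indices for the
# toy model Y = X1 + 2*X2 with X1, X2 ~ U(0,1) independent.
# Analytically S1 = 1/5 and S2 = 4/5, so the estimates can be checked.
def sobol_first_order(f, dim, n, seed=0):
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(x) for x in A]
    f0 = sum(yA) / n
    var = sum((y - f0) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # AB_i: columns taken from B except column i, which is frozen from A.
        yAB = [f([a[j] if j == i else b[j] for j in range(dim)])
               for a, b in zip(A, B)]
        s_i = (sum(ya * yab for ya, yab in zip(yA, yAB)) / n - f0 ** 2) / var
        indices.append(s_i)
    return indices

s1, s2 = sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2, n=20000)
```

For this additive model the first-order indices should sum to roughly one; any shortfall would indicate interaction effects.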

18. NASTRAN variance analysis and plotting of HBDY elements. [analysis of uncertainties of the computer results as a function of uncertainties in the input data

Harder, R. L.

1974-01-01

The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties of the computed results as a function of uncertainties of the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus one additional solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives and then determines the expected deviations of the output quantities.
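
The scheme described — one nominal solution plus one perturbed solution per uncertain input, differenced to form derivatives — is ordinary first-order variance propagation. A minimal sketch with a made-up scalar model (not NASTRAN quantities):

```python
# First-order variance propagation in the spirit described above: run the
# model at nominal inputs, re-run with each uncertain input perturbed,
# difference to get derivatives, then combine the input variances.
# The model and numbers are illustrative stand-ins.
def propagate_variance(model, nominal, sigmas, h=1e-6):
    y0 = model(nominal)
    var = 0.0
    for i, s in enumerate(sigmas):
        x = list(nominal)
        x[i] += h
        dy_dxi = (model(x) - y0) / h  # forward-difference derivative
        var += (dy_dxi * s) ** 2      # inputs assumed independent
    return y0, var

# Toy "temperature" model: T = q / (k * A) with uncertain q and k, A = 0.5.
model = lambda x: x[0] / (x[1] * 0.5)
T0, varT = propagate_variance(model, [100.0, 4.0], [5.0, 0.2])
```

Here each term (∂T/∂xᵢ · σᵢ)² shows how much of the output variance each uncertain input contributes, which is exactly the sensitivity question the module answers.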

19. Minimum variance imaging based on correlation analysis of Lamb wave signals.

Hua, Jiadong; Lin, Jing; Zeng, Liang; Luo, Zhi

2016-08-01

In Lamb wave imaging, MVDR (minimum variance distortionless response) is a promising approach for the detection and monitoring of large areas with sparse transducer network. Previous studies in MVDR use signal amplitude as the input damage feature, and the imaging performance is closely related to the evaluation accuracy of the scattering characteristic. However, scattering characteristic is highly dependent on damage parameters (e.g. type, orientation and size), which are unknown beforehand. The evaluation error can degrade imaging performance severely. In this study, a more reliable damage feature, LSCC (local signal correlation coefficient), is established to replace signal amplitude. In comparison with signal amplitude, one attractive feature of LSCC is its independence of damage parameters. Therefore, LSCC model in the transducer network could be accurately evaluated, the imaging performance is improved subsequently. Both theoretical analysis and experimental investigation are given to validate the effectiveness of the LSCC-based MVDR algorithm in improving imaging performance. PMID:27155349

20. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review

Malkin, Zinovy

2016-01-01

The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the deviations of frequency standards. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multi-dimensional vectors. For example, three station coordinate time series X, Y, and Z can be combined to analyze 3D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multi-dimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multi-dimensional AVAR (MAVAR), and weighted multi-dimensional AVAR (WMAVAR), were introduced to overcome these ...
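
The classical (unweighted, scalar) AVAR that these modifications extend can be computed in a few lines. A sketch assuming an evenly spaced, equally weighted series; the white-noise input is synthetic:

```python
import random

# Classical non-overlapping Allan variance of an evenly spaced series.
# For white noise of variance v, AVAR(m) falls off roughly as v/m;
# the weighted and multi-dimensional extensions are beyond this sketch.
def allan_variance(y, m):
    # Average the series in adjacent bins of length m ...
    bins = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    # ... then take half the mean squared successive difference.
    diffs = [(b2 - b1) ** 2 for b1, b2 in zip(bins, bins[1:])]
    return 0.5 * sum(diffs) / len(diffs)

rng = random.Random(42)
y = [rng.gauss(0.0, 1.0) for _ in range(20000)]
a1 = allan_variance(y, 1)    # near 1 for unit-variance white noise
a10 = allan_variance(y, 10)  # near 0.1, falling as 1/m
```

Plotting AVAR against the averaging time m on log-log axes gives the slope that identifies the noise type, which is how the tool is used in the time series analyses reviewed here.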

1. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions for improving this method

2. Performance of selected imputation techniques for missing variances in meta-analysis

A common method of handling the problem of missing variances in meta-analysis of continuous responses is imputation. However, the performance of imputation techniques may be influenced by the type of model utilised. In this article, we examine through a simulation study the effects of the technique used to impute the missing SDs and of the type of model used on the overall meta-analysis estimates. The results suggest that imputation should be adopted to estimate the overall effect size, irrespective of the model used. However, the accuracy of the estimates of the corresponding standard error (SE) is influenced by the imputation technique. For estimates based on the fixed effects model, mean imputation provides better estimates than multiple imputation, while estimates based on the random effects model respond more robustly to the type of imputation technique. The results also showed that although imputation is effective in reducing bias in the point estimates, it is more likely to produce coverage probabilities higher than the nominal value.
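
Mean imputation of missing SDs, the simplest of the techniques compared, can be sketched as follows; the studies, effect sizes, and the simplified inverse-variance weight n/SD² are all illustrative assumptions, not taken from the article:

```python
# Mean imputation of missing standard deviations before pooling.
# All numbers are made up for illustration; n/sd**2 is a simplified
# stand-in for a proper inverse-variance weight.
studies = [
    {"mean_diff": 1.2, "n": 30, "sd": 2.1},
    {"mean_diff": 0.8, "n": 25, "sd": None},  # SD not reported
    {"mean_diff": 1.5, "n": 40, "sd": 2.7},
]

# Replace each missing SD with the mean of the reported SDs.
reported = [s["sd"] for s in studies if s["sd"] is not None]
fill = sum(reported) / len(reported)
for s in studies:
    if s["sd"] is None:
        s["sd"] = fill

# Fixed-effect pooled estimate with the simplified weights.
weights = [s["n"] / s["sd"] ** 2 for s in studies]
pooled = sum(w * s["mean_diff"] for w, s in zip(weights, studies)) / sum(weights)
```

The point estimate changes little under imputation, but, as the article notes, the standard error (and hence coverage) is where the choice of technique matters.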

3. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

2012-01-01

Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.

4. FORTRAN IV Program for One-Way Analysis of Variance with A Priori or A Posteriori Mean Comparisons

Fordyce, Michael W.

1977-01-01

A flexible Fortran program for computing a one-way analysis of variance is described. Requiring minimal core space, the program provides a variety of useful group statistics, all summary statistics for the analysis, and all mean comparisons for a priori or a posteriori testing. (Author/JKS)

5. Analysis of Quantitative Traits in Two Long-Term Randomly Mated Soybean Populations I. Genetic Variances

The genetic effects of long term random mating and natural selection aided by genetic male sterility were evaluated in two soybean [Glycine max (L.) Merr.] populations: RSII and RSIII. Population means, variances, and heritabilities were estimated to determine the effects of 26 generations of random...

6. The term structure of variance swap rates and optimal variance swap investments

Egloff, Daniel; Leippold, Markus; Liuren WU

2010-01-01

This paper performs specification analysis on the term structure of variance swap rates on the S&P 500 index and studies the optimal investment decision on the variance swaps and the stock index. The analysis identifies two stochastic variance risk factors, which govern the short and long end of the variance swap term structure variation, respectively. The highly negative estimate for the market price of variance risk makes it optimal for an investor to take short positions in a short-term va...

7. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

Jan, Show-Li; Shieh, Gwowen

2014-01-01

The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
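
The Welch procedure underlying the proposed sample-size method replaces the pooled variance with per-group variances and the Welch-Satterthwaite degrees of freedom. A two-group sketch with illustrative data:

```python
import math

# Welch's t statistic and Welch-Satterthwaite degrees of freedom for a
# two-group comparison without assuming equal variances; the data below
# are illustrative.
def welch_t(x, y):
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    # Welch-Satterthwaite approximation to the degrees of freedom.
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

t, df = welch_t([5.1, 4.9, 5.3, 5.0], [6.2, 6.8, 5.9, 7.1, 6.5])
```

Because df depends on the (unequal) group variances, the sample sizes needed for a given contrast precision must be solved per group, which is the allocation problem this study addresses.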

8. Adjusting stream-sediment geochemical maps in the Austrian Bohemian Massif by analysis of variance

Davis, J.C.; Hausberger, G.; Schermann, O.; Bohling, G.

1995-01-01

The Austrian portion of the Bohemian Massif is a Precambrian terrane composed mostly of highly metamorphosed rocks intruded by a series of granitoids that are petrographically similar. Rocks are exposed poorly and the subtle variations in rock type are difficult to map in the field. A detailed geochemical survey of stream sediments in this region has been conducted and included as part of the Geochemischer Atlas der Republik Österreich, and the variations in stream sediment composition may help refine the geological interpretation. In an earlier study, multivariate analysis of variance (MANOVA) was applied to the stream-sediment data in order to minimize unwanted sampling variation and emphasize relationships between stream sediments and rock types in sample catchment areas. The estimated coefficients were used successfully to correct for the sampling effects throughout most of the region, but also introduced an overcorrection in some areas that seems to result from consistent but subtle differences in composition of specific rock types. By expanding the model to include an additional factor reflecting the presence of a major tectonic unit, the Rohrbach block, the overcorrection is removed. This iterative process simultaneously refines both the geochemical map by removing extraneous variation and the geological map by suggesting a more detailed classification of rock types. © 1995 International Association for Mathematical Geology.

9. Odor measurements according to EN 13725: A statistical analysis of variance components

Klarenbeek, Johannes V.; Ogink, Nico W. M.; van der Voet, Hilko

2014-04-01

In Europe, dynamic olfactometry, as described by the European standard EN 13725, has become the preferred method for evaluating odor emissions emanating from industrial and agricultural sources. Key elements of this standard are the quality criteria for trueness and precision (repeatability). Both are linked to standard values of n-butanol in nitrogen. It is assumed in this standard that whenever a laboratory complies with the overall sensory quality criteria for n-butanol, the quality level is transferable to other, environmental, odors. Although olfactometry is well established, little has been done to investigate interlaboratory variance (reproducibility). Therefore, the objective of this study was to estimate the reproducibility of odor laboratories complying with EN 13725 as well as to investigate the transferability of n-butanol quality criteria to other odorants. Based upon the statistical analysis of 412 odor measurements on 33 sources, distributed over 10 proficiency tests, it was established that laboratory, panel and panel session are components of variance that differ significantly between n-butanol and other odorants (α = 0.05). This finding does not support the transferability of the quality criteria, as determined on n-butanol, to other odorants and as such is a cause for reconsideration of the present single reference odorant as laid down in EN 13725. For non-butanol odorants, the repeatability standard deviation (sr) and reproducibility standard deviation (sR) were calculated to be 0.108 and 0.282 respectively (log base-10). The latter implies that the difference between two consecutive single measurements, performed on the same testing material by two or more laboratories under reproducibility conditions, will not be larger than a factor of 6.3 in 95% of cases. As far as n-butanol is concerned, it was found that the present repeatability standard deviation (sr = 0.108) compares favorably to that of EN 13725 (sr = 0.172). It is therefore

10. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

Malkin, Zinovy

2016-04-01

The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the deviations of frequency standards. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series. PMID:26540681

11. Analysis and application of minimum variance discrete time system identification. [for adaptive control system design

Kotob, S.; Kaufman, H.

1976-01-01

An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

12. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

Luh, Wei-Ming; Guo, Jiin-Huarng

2016-01-01

This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

13. APRIORI: A FORTRAN IV Computer Program to Select the Most Powerful A Priori Comparison Method in an Analysis of Variance.

Conard, Elizabeth H.; Lutz, J. Gary

1979-01-01

A program is described which selects the most powerful among four methods for conducting a priori comparisons in an analysis of variance: orthogonal contrasts, Scheffe's method, Dunn's method, and Dunnett's test. The program supplies the critical t ratio and the per-comparison Type I error risk for each of the relevant methods. (Author/JKS)

14. Separation of base allele and sampling term effects gives new insights in variance component QTL analysis

Carlborg Örjan; Rönnegård Lars

2007-01-01

Abstract Background Variance component (VC) models are commonly used for Quantitative Trait Loci (QTL) mapping in outbred populations. Here, the QTL effect is given as a random effect and a critical part of the model is the relationship between the phenotypic values and the random effect. In the traditional VC model, each individual has a unique QTL effect and the relationship between these random effects is given as a covariance structure (known as the identity-by-descent (IBD) matrix). Resu...

15. Flood damage maps: ranking sources of uncertainty with variance-based sensitivity analysis

Saint-Geours, N.; Grelot, F.; Bailly, J.-S.; Lavergne, C.

2012-04-01

In order to increase the reliability of flood damage assessment, we need to question the uncertainty associated with the whole flood risk modeling chain. Using a case study on the basin of the Orb River, France, we demonstrate how variance-based sensitivity analysis can be used to quantify uncertainty in flood damage maps at different spatial scales and to identify the sources of uncertainty which should be reduced first. Flood risk mapping is recognized as an effective tool in flood risk management and the elaboration of flood risk maps is now required for all major river basins in the European Union (European directive 2007/60/EC). Flood risk maps can be based on the computation of the Mean Annual Damages indicator (MAD). In this approach, potential damages due to different flood events are estimated for each individual stake over the study area, then averaged over time - using the return period of each flood event - and finally mapped. The issue of uncertainty associated with these flood damage maps should be carefully scrutinized, as they are used to inform the relevant stakeholders or to design flood mitigation measures. Maps of the MAD indicator are based on the combination of hydrological, hydraulic, geographic and economic modeling efforts: as a result, numerous sources of uncertainty arise in their elaboration. Many recent studies describe these various sources of uncertainty (Koivumäki 2010, Bales 2009). Some authors propagate these uncertainties through the flood risk modeling chain and estimate confidence bounds around the resulting flood damage estimates (de Moel 2010). It would now be of great interest to go a step further and to identify which sources of uncertainty account for most of the variability in Mean Annual Damages estimates. We demonstrate the use of variance-based sensitivity analysis to rank sources of uncertainty in flood damage mapping and to quantify their influence on the accuracy of flood damage estimates. We use a quasi

16. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA)

Rakesh; Satish Kumar

2015-01-01

The aim of the present study is to determine the most significant input parameters, such as welding current, arc voltage and root gap, during Metal Inert Gas (MIG) welding of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of the weld specimens are investigated in this study. The selected three input parameters were varied at three levels. Accordingly, nine experiments were performed based on the L9 orthogonal array of Taguchi’s methodology, which c...

17. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

Alston, D. W.

1981-01-01

The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to delete the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.

18. CAIXA: a catalogue of AGN in the XMM-Newton archive. III. Excess variance analysis

Ponti, G.; Papadakis, I.; Bianchi, S.; Guainazzi, M.; Matt, G.; P. Uttley(Astronomical Institute Anton Pannekoek, University of Amsterdam, The Netherlands); Bonilla, N.F.

2012-01-01

Context. We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio quiet, X-ray un-obscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray variability on time scales of less than a day. Aims. Recently it has been suggested that the same engine might be at work in the core of every black hole (BH) accreting object. In this hypothe...

19. Algebraic analysis approach for multibody problems 2. Variance of velocity changes

The algebraic model (ALG) proposed by the authors has sufficiently high accuracy in calculating the motion of a test particle with all the field particles at rest. When all the field particles are moving, however, the ALG has poor predictive ability for the motion of a test particle initially at rest. Nonetheless, the ALG approximation gives good results for statistical quantities, such as the variance of velocity changes or the scattering cross section, for a sufficiently large number of Monte Carlo trials. (author)

20. Empirical Analysis of Affine vs. Nonaffine Variance Specifications in Jump-Diffusion Models for Equity Indices

Seeger, N.J.; Rodrigues, P.J.M.; Ignatieva, K.

2015-01-01

This paper investigates several crucial issues that arise when modeling equity returns with stochastic variance. (i) Does the model need to include jumps even when using a nonaffine variance specification? We find that jump models clearly outperform pure stochastic volatility models. (ii) How do affine variance specifications perform when compared to nonaffine models in a jump diffusion setup? We find that nonaffine specifications outperform affine models, even after including jumps.

1. Variance Risk Premiums and Predictive Power of Alternative Forward Variances in the Corn Market

Zhiguang Wang; Scott W. Fausti; Qasmi, Bashir A.

2010-01-01

We propose a fear index for corn using the variance swap rate synthesized from out-of-the-money call and put options as a measure of implied variance. Previous studies estimate implied variance based on the Black (1976) model or forecast variance using GARCH models. Our implied variance approach, based on the variance swap rate, is model independent. We compute the daily 60-day variance risk premiums based on the difference between the realized variance and implied variance for the period from 19...
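The model-free implied variance described here (a variance swap rate synthesized from OTM options) can be sketched as a discretized VIX-style sum. All strikes, prices, and rates below are hypothetical, and the simple midpoint strike-spacing rule is an assumption of this sketch, not the authors' exact procedure.

```python
import math

def model_free_implied_variance(strikes, otm_prices, forward, r, T):
    """Discretized model-free implied variance: a VIX-style sum over
    out-of-the-money option prices (strikes sorted ascending)."""
    k0 = max(k for k in strikes if k <= forward)  # strike at/below forward
    total = 0.0
    for i, k in enumerate(strikes):
        if i == 0:
            dk = strikes[1] - strikes[0]
        elif i == len(strikes) - 1:
            dk = strikes[-1] - strikes[-2]
        else:
            dk = (strikes[i + 1] - strikes[i - 1]) / 2.0
        total += dk / k ** 2 * math.exp(r * T) * otm_prices[i]
    return (2.0 * total - (forward / k0 - 1.0) ** 2) / T

# hypothetical 60-day option chain, not real corn market data
strikes = [80, 90, 100, 110, 120]
otm_prices = [0.5, 2.0, 6.0, 2.5, 0.8]   # OTM puts below F, calls above
sigma2 = model_free_implied_variance(strikes, otm_prices, forward=100.0,
                                     r=0.0, T=60 / 365)
```

`sigma2` is an annualized variance; its square root is the implied volatility, and its difference from subsequently realized variance is the variance risk premium the record describes.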

2. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

Renard Jean-Paul

2005-06-01

Abstract Background Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility to evaluate multiple sources of variation in an experiment. Results AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, and (3) to propose two models for a gene's variance estimation and to perform a correction for multiple comparisons. Conclusion AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.

3. Longitudinal Analysis of Residual Feed Intake in Mink using Random Regression with Heterogeneous Residual Variance

Shirali, Mahmoud; Nielsen, Vivi Hunnicke; Møller, Steen Henrik;

Heritability of residual feed intake (RFI) increased from low to high over the growing period in male and female mink. The lowest heritability for RFI (male: 0.04 ± 0.01 standard deviation (SD); female: 0.05 ± 0.01 SD) was in early growth and the highest heritability (male: 0.33 ± 0.02 SD; female: 0.34 ± 0.02 SD) was achieved at the late growth stages. The genetic correlation between different growth stages for RFI showed a high association (0.91 to 0.98) between early and late growing periods. However, phenotypic correlations were lower, from 0.29 to 0.50. The residual variances were substantially higher at the end compared to the early growing period, suggesting that heterogeneous residual variance should be considered for analyzing feed efficiency data in mink. This study suggests random regression methods are suitable for analyzing feed efficiency and that genetic selection for RFI in mink is...

4. Explaining the Variance of Price Dividend Ratios

Cochrane, John H.

1989-01-01

This paper presents a bound on the variance of the price-dividend ratio and a decomposition of the variance of the price-dividend ratio into components that reflect variation in expected future discount rates and variation in expected future dividend growth. Unobserved discount rates needed to make the variance bound and variance decomposition hold are characterized, and the variance bound and variance decomposition are tested for several discount rate models, including the consumption based ...

5. FMRI group analysis combining effect estimates and their variances

Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Michael S Beauchamp; Cox, Robert W.

2011-01-01

Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an a...
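A generic way to combine per-subject effect estimates together with their variances, instead of discarding the within-subject variances, is precision (inverse-variance) weighting. The sketch below illustrates only that general idea, not the specific mixed-effects procedure this paper proposes; all numbers are invented.

```python
def inverse_variance_weighted_mean(effects, variances):
    """Combine effect estimates by weighting each with its precision
    (1 / variance); also return the variance of the combined estimate."""
    w = [1.0 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, 1.0 / sum(w)

effects = [2.0, 1.5, 2.5, 1.8]      # invented per-subject effect estimates
variances = [0.5, 1.0, 0.25, 0.5]   # invented within-subject variances
est, var = inverse_variance_weighted_mean(effects, variances)
```

Subjects with noisier estimates contribute less, which is exactly the information a condensed one-number-per-voxel analysis throws away.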

6. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)
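The MVP calculation behind such return-risk profiles reduces to the expected return w'μ and variance w'Σw of a weighted portfolio. A minimal sketch follows; the return and covariance figures are invented, not the paper's CHP estimates.

```python
def portfolio_return_and_risk(weights, means, cov):
    """Expected portfolio return w'mu and portfolio variance w'Sigma w."""
    n = len(weights)
    mu = sum(weights[i] * means[i] for i in range(n))
    var = sum(weights[i] * cov[i][j] * weights[j]
              for i in range(n) for j in range(n))
    return mu, var

# invented annual return means and covariance for two CHP investments
means = [0.08, 0.06]
cov = [[0.04, 0.01],
       [0.01, 0.02]]
mu, var = portfolio_return_and_risk([0.5, 0.5], means, cov)
```

With imperfectly correlated returns, the 50/50 portfolio's variance (0.02) falls below that of the riskier asset alone (0.04): the diversification effect the paper examines across countries and support schemes.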

7. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

2008-01-01

Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general...

8. On variance estimate for covariate adjustment by propensity score analysis.

Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

2016-09-10

Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effect estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. PMID:26999553
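The empirical bootstrap idea mentioned above can be sketched generically: resample the data with replacement, recompute the estimator on each replicate, and take the variance of the replicated estimates. This is a minimal illustration with an invented dataset and a plain mean, not the authors' R function or their two-stage PS procedure.

```python
import random

def bootstrap_variance(estimator, data, n_boot=2000, seed=0):
    """Empirical bootstrap variance of an estimator: variance of the
    estimator applied to resamples drawn with replacement."""
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(estimator(sample))
    m = sum(reps) / n_boot
    return sum((r - m) ** 2 for r in reps) / (n_boot - 1)

def sample_mean(xs):
    return sum(xs) / len(xs)

data = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3]  # invented observations
var_hat = bootstrap_variance(sample_mean, data)
```

For the sample mean the bootstrap estimate should sit near the analytic value s²/n; in real PS analyses the estimator passed in would be the treatment effect estimate from the full adjustment procedure.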

9. The benefit of regional diversification of cogeneration investments in Europe: A mean-variance portfolio analysis

The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. - Research highlights: →Preconditions for CHP investments differ significantly between the EU member states. →Regional diversification of CHP investments can reduce the total portfolio risk. →Risk reduction depends on the chosen CHP technology.

10. Positive estimation of the between-group variance component in one-way anova and meta-analysis

Hartung, Joachim; Makambi, Kepher H.

2000-01-01

Positive estimators of the between-group (between-study) variance are proposed. Explicit variance formulae for the estimators are given and approximate confidence intervals for the between-group variance are constructed, as our proposal to a long outstanding problem. By Monte Carlo simulation, the bias and standard deviation of the proposed estimators are compared with the truncated versions of the maximum likelihood (ML) estimator, restricted maximum likelihood (REML) estimator and a (late...
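For context, the standard method-of-moments (DerSimonian-Laird) estimator of the between-study variance τ² is truncated at zero, which is the boundary behaviour positive estimators are designed to avoid. A sketch with invented study effects and within-study variances; this particular dataset happens to truncate to zero, illustrating the problem.

```python
def dl_between_study_variance(effects, variances):
    """DerSimonian-Laird moment estimator of tau^2, truncated at zero
    (the truncation that positive estimators are designed to avoid)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Q statistic
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - (len(effects) - 1)) / c)

effects = [0.3, 0.1, 0.5, 0.2, 0.4]          # invented study effects
variances = [0.02, 0.03, 0.025, 0.04, 0.03]  # invented within-study variances
tau2 = dl_between_study_variance(effects, variances)
```

Here Q falls below its degrees of freedom, so the estimate is exactly zero even though the true between-study heterogeneity may be positive; strictly positive estimators sidestep this boundary.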

11. Comments on the statistical analysis of excess variance in the COBE differential microwave radiometer maps

Wright, E. L.; Smoot, G. F.; Kogut, A.; Hinshaw, G.; Tenorio, L.; Lineweaver, C.; Bennett, C. L.; Lubin, P. M.

1994-01-01

Cosmic anisotropy produces an excess variance σ²_sky in the ΔT maps produced by the Differential Microwave Radiometer (DMR) on the Cosmic Background Explorer (COBE) that is over and above the instrument noise. After smoothing to an effective resolution of 10°, this excess, σ_sky(10°), provides an estimate for the amplitude of the primordial density perturbation power spectrum with a cosmic uncertainty of only 12%. We employ detailed Monte Carlo techniques to express the amplitude derived from this statistic in terms of the universal root mean square (rms) quadrupole amplitude, ⟨Q²_RMS⟩^(1/2). The effects of monopole and dipole subtraction and the non-Gaussian shape of the DMR beam cause the derived ⟨Q²_RMS⟩^(1/2) to be 5%-10% larger than would be derived using simplified analytic approximations. We also investigate the properties of two other map statistics: the actual quadrupole and the Boughn-Cottingham statistic. Both the σ_sky(10°) statistic and the Boughn-Cottingham statistic are consistent with the ⟨Q²_RMS⟩^(1/2) = 17 ± 5 μK reported by Smoot et al. (1992) and Wright et al. (1992).

12. Spectral and chromatographic fingerprinting with analysis of variance-principal component analysis (ANOVA-PCA): a useful tool for differentiating botanicals and characterizing sources of variance

Objectives: Spectral fingerprints, acquired by direct injection (no separation) mass spectrometry (DI-MS) or liquid chromatography with UV detection (HPLC), in combination with ANOVA-PCA, were used to differentiate 15 powders of botanical materials. Materials and Methods: Powders of 15 botanical mat...

13. The Variance of Language in Different Contexts

Shen Yining

2012-01-01

Language can be quite different (here referring to meaning) in different contexts. There are three categories of context: the culture, the situation, and the co-text. In this article, we analyze the variance of language in each of these three aspects. This article is written for the purpose of helping people better understand the meaning of a language in a specific context.

14. SU-E-T-41: Analysis of GI Dose Variability Due to Intrafraction Setup Variance

Purpose: Proton SBRT (stereotactic body radiation therapy) can be an effective modality for treatment of gastrointestinal tumors, but is limited in practice due to sensitivity with respect to variation in the RPL (radiological path length). Small, intrafractional shifts in patient anatomy can lead to significant changes in the dose distribution. This study describes a tool designed to visualize uncertainties in radiological depth in patient CTs and aid in treatment plan design. Methods: This project utilizes the Shadie toolkit, a GPU-based framework that allows for real-time interactive calculations for volume visualization. Current SBRT simulation practice consists of a serial CT acquisition for the assessment of inter- and intra-fractional motion utilizing patient specific immobilization systems. Shadie was used to visualize potential uncertainties, including RPL variance and changes in gastric content. Input for this procedure consisted of two patient CT sets, contours of the desired organ, and a pre-calculated dose. In this study, we performed rigid registrations between sets of 4DCTs obtained from a patient with varying setup conditions. Custom visualizations are written by the user in Shadie, permitting one to create color-coded displays derived from a calculation along each ray. Results: Serial CT data acquired on subsequent days was analyzed for variation in RPL and gastric content. Specific shaders were created to visualize clinically relevant features, including RPL (radiological path length) integrated up to organs of interest. Using pre-calculated dose distributions and utilizing segmentation masks as additional input allowed us to further refine the display output from Shadie and create tools suitable for clinical usage. Conclusion: We have demonstrated a method to visualize potential uncertainty for intrafractional proton radiotherapy. We believe this software could prove a useful tool to guide those looking to design treatment plans least...

15. MNEs and Industrial Structure in Host Countries:A Mean Variance Analysis of Ireland’s Manufacturing Sector

Colm Kearney; Frank Barry

2005-01-01

We use mean-variance analysis to demonstrate the importance of a hitherto neglected benefit of enticing MNEs to locate in small and medium-sized countries. During the 25 years from 1974 to 1999, over 1000 foreign MNEs have located in Ireland, and they have raised their share of all manufacturing jobs in the country from one-third to one-half. The foreign MNEs tend to operate in high-technology sectors, and they grow faster with greater volatility than the traditional low-technology indigenous...

16. An analysis of the factors generating the variance between the budgeted and actual operating results of the Naval Aviation Depot at North Island, California

Curran, Thomas; Schimpff, Joshua J.

2008-01-01

For six of the past eight years, naval aviation depot-level maintenance activities have encountered operating losses that were not anticipated in the Navy Working Capital Fund (NWCF) budgets. These unanticipated losses resulted in increases or surcharges to the stabilized rates as an offset. This project conducts a variance analysis to uncover possible causes of the unanticipated losses. The variance analysis between budgeted (projected) and actual financial results was performed on fina...

17. Regression Computer Programs for Setwise Regression and Three Related Analysis of Variance Techniques.

Williams, John D.; Lindem, Alfred C.

Four computer programs using the general purpose multiple linear regression program have been developed. Setwise regression analysis is a stepwise procedure for sets of variables; there will be as many steps as there are sets. Covarmlt allows a solution to the analysis of covariance design with multiple covariates. A third program has three…

18. Application of SPSS Software in Multivariate Analysis of Variance

Gong Jiang; Shi Peichun; Li Chunyan

2012-01-01

Taking a two-factor completely randomized design with replication as an example, the detailed process of analysis of variance in SPSS software is elaborated, including data input, analysis of the sources of variation, the results of the analysis of variance, and tests of significance. Finally, precautions for performing analysis of variance with SPSS are given, providing a reference for scientific research workers using SPSS for analysis of variance.

19. MEASURING DRIVERS' EFFECT IN A COST MODEL BY MEANS OF ANALYSIS OF VARIANCE

Maria Elena Nenni

2013-01-01

In this study the author goes through the analysis of a cost model developed for Integrated Logistic Support (ILS) activities. By means of ANOVA, the impact of and interaction among cost drivers is evaluated. The predominant importance of organizational factors compared to technical ones is clearly demonstrated. Moreover, the paper provides researchers and practitioners with useful information to improve the cost model as well as for budgeting and financial planning of ILS activities.

20. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA)

Rakesh

2015-11-01

The aim of the present study is to determine the most significant input parameters, namely welding current, arc voltage, and root gap, during Metal Inert Gas (MIG) welding of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of the weld specimen are investigated in this study. The three selected input parameters were varied at three levels. Accordingly, nine experiments were performed based on the L9 orthogonal array of Taguchi's methodology, which consists of three input parameters. Root gap has the greatest effect on tensile strength, followed by welding current and arc voltage. Arc voltage has the greatest effect on hardness, followed by root gap and welding current. The weld metal consists of fine grains of ferrite and pearlite.
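The ANOVA significance ranking in such a Taguchi study rests on comparing between-level and within-level variation. A minimal single-factor sketch with invented tensile-strength values at three hypothetical root-gap levels (not the paper's measured data):

```python
def one_way_anova_f(groups):
    """F statistic for a single-factor ANOVA: between-group mean square
    divided by within-group mean square."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical tensile strengths (MPa) at three root-gap levels
groups = [[410, 420, 415], [440, 450, 445], [470, 480, 475]]
f_stat = one_way_anova_f(groups)
```

An F value large relative to the F(k-1, n-k) critical value flags the factor as significant; ranking the F (or percent-contribution) values across factors gives the "greatest effect" ordering reported in the abstract.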

1. Cyclostationary analysis with logarithmic variance stabilisation

Borghesani, Pietro; Shahriar, Md Rifat

2016-03-01

Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
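The SES construction described above (envelope via the analytic signal, then a spectrum of the squared envelope) can be illustrated on a synthetic amplitude-modulated signal. The pure-Python DFT below exists only to keep the sketch self-contained, and the carrier and modulation frequencies are invented; this shows the plain SES, not the paper's new variance-stabilized spectrum.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * math.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]

def square_envelope_spectrum(x):
    """SES: magnitude spectrum of the squared envelope |analytic signal|^2."""
    n = len(x)
    X = dft(x)
    # analytic signal: keep DC and Nyquist, double positive frequencies
    H = [X[0]] + [2 * X[f] for f in range(1, n // 2)] + [X[n // 2]] \
        + [0] * (n // 2 - 1)
    env2 = [abs(z) ** 2 for z in idft(H)]
    return [abs(c) for c in dft(env2)]

# amplitude-modulated carrier: a CS2 component at modulation bin 8
n = 256
x = [(1 + 0.8 * math.cos(2 * math.pi * 8 * t / n))
     * math.cos(2 * math.pi * 40 * t / n) for t in range(n)]
ses = square_envelope_spectrum(x)
peak_bin = max(range(1, n // 2), key=lambda f: ses[f])
```

The SES peak lands at the modulation frequency (bin 8), the CS2 signature that envelope-based fault indicators look for; the paper's contribution concerns how the statistical variability of such peaks is handled.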

2. Variance associated with subject velocity and trial repetition during force platform gait analysis in a heterogeneous population of clinically normal dogs.

Hans, Eric C; Zwarthoed, Berdien; Seliski, Joseph; Nemke, Brett; Muir, Peter

2014-12-01

Factors that contribute to variance in ground reaction forces (GRF) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population of clinically normal dogs, it was hypothesized that the dog subject effect would account for the majority of variance in peak vertical force (PVF) and vertical impulse (VI) at a trotting gait, and that narrow velocity ranges would be associated with less variance. Data from 20 normal dogs were obtained. Each dog was trotted across a force platform at its habitual velocity, with controlled acceleration (±0.5 m/s²). Variance effects from 12 trotting velocity ranges were examined using repeated-measures analysis-of-covariance. Significance was set at P ... trotting dogs. This concept is important for clinical trial design. PMID:25457264

3. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data

Greve, Douglas N; Svarer, Claus; Fisher, Patrick M;

2014-01-01

Exploratory (i.e., voxelwise) spatial methods are commonly used in neuroimaging to identify areas that show an effect when a region-of-interest (ROI) analysis cannot be performed because no strong a priori anatomical hypothesis exists. However, noise at a single voxel is much higher than noise in an ROI, making noise management critical to successful exploratory analysis. This work explores how preprocessing choices affect the bias and variability of voxelwise kinetic modeling analysis of brain positron emission tomography (PET) data. These choices include the use of volume- or cortical surface-based smoothing, level of smoothing, use of voxelwise partial volume correction (PVC), and PVC masking threshold. PVC was implemented using the Muller-Gartner method with the masking out of voxels with low gray matter (GM) partial volume fraction. Dynamic PET scans of an antagonist serotonin-4 receptor...

4. Analysis of NDVI variance across landscapes and seasons allows assessment of degradation and resilience to shocks in Mediterranean dry ecosystems

Liniger, Hanspeter; Jucker Riva, Matteo; Schwilch, Gudrun

2016-04-01

Mapping and assessment of desertification is a primary basis for effective management of dryland ecosystems. Vegetation cover and biomass density are key elements for the ecological functioning of dry ecosystems, and at the same time an effective indicator of desertification, land degradation and sustainable land management. The Normalized Difference Vegetation Index (NDVI) is widely used to estimate vegetation density and cover. However, the reflectance of vegetation, and thus the NDVI values, are influenced by several factors such as type of canopy, type of land use and seasonality. For example, low NDVI values could be associated with a degraded forest, a healthy forest under dry climatic conditions, an area used as pasture, or an area managed to reduce the fuel load. We propose a simple method to analyse the variance of the NDVI signal considering the main factors that shape the vegetation. This variance analysis enables us to detect and categorize degradation much more precisely than simple NDVI analysis. The methodology comprises identifying homogeneous landscape areas in terms of aspect, slope, land use and disturbance regime (if relevant). Secondly, the NDVI is calculated from Landsat multispectral images and the vegetation potential for each landscape is determined based on the percentile (highest 10% value). Thirdly, the difference between the NDVI value of each pixel and the potential is used to establish degradation categories. Through this methodology, we are able to identify realistic objectives for restoration, allowing a targeted choice of management options for degraded areas. For example, afforestation would only be done in areas that show potential for forest growth. Moreover, we can measure the effectiveness of management practices in terms of vegetation growth across different landscapes and conditions. Additionally, the same methodology can be applied to a time series of multispectral images, allowing detection and quantification of...

5. VARIANCE ANALYSIS OF WOOL WOVEN FABRICS TENSILE STRENGTH USING ANCOVA MODEL

VÎLCU Adrian; HRISTIAN Liliana; BORDEIANU Demetra Lăcrămioara

2014-01-01

The paper has conducted a study on the variation of tensile strength for four woven fabrics made from wool type yarns depending on fiber composition, warp and weft yarns tensile strength and technological density using an ANCOVA regression model. In instances where surveyed groups may have a known history of responding to questions differently, rather than using the traditional sharing method to address those differences, analysis of covariance (ANCOVA) can be employed. ANCOVA shows the corre...

6. Models with Time-varying Mean and Variance: A Robust Analysis of U.S. Industrial Production

Charles S. Bos; Koopman, Siem Jan

2010-01-01

Many seasonal macroeconomic time series are subject to changes in their means and variances over a long time horizon. In this paper we propose a general treatment for the modelling of time-varying features in economic time series. We show that time series models with mean and variance functions depending on dynamic stochastic processes can be sufficiently robust against changes in their dynamic properties. We further show that the implementation of the treatment is relatively straightforward....

7. Discrete and continuous time dynamic mean-variance analysis

Reiss, Ariane

1999-01-01

Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

8. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

Boccard, Julien; Rudaz, Serge

2016-05-12

Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. PMID:27114219
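The ANOVA-based partitioning such multiblock methods start from splits the data matrix into a grand-mean part, per-factor effect matrices, and residuals. A single-factor sketch with an invented 4×2 matrix follows; the OPLS modelling step applied afterwards is not shown.

```python
def anova_partition(X, factor):
    """Split each column of X into grand mean + factor effect + residual:
    the submatrices that multivariate methods then model jointly."""
    n, p = len(X), len(X[0])
    grand = [sum(row[j] for row in X) / n for j in range(p)]
    level_mean = {}
    for lv in set(factor):
        idx = [i for i in range(n) if factor[i] == lv]
        level_mean[lv] = [sum(X[i][j] for i in idx) / len(idx)
                          for j in range(p)]
    effect = [[level_mean[factor[i]][j] - grand[j] for j in range(p)]
              for i in range(n)]
    resid = [[X[i][j] - grand[j] - effect[i][j] for j in range(p)]
             for i in range(n)]
    return grand, effect, resid

X = [[1.0, 2.0], [3.0, 4.0], [5.0, 8.0], [7.0, 10.0]]  # 4 samples, 2 variables
factor = ["a", "a", "b", "b"]                          # one fixed effect
grand, effect, resid = anova_partition(X, factor)
```

By construction each observation is exactly the sum of the three parts, and each effect column sums to zero, which is what makes the sums of squares of the submatrices an additive decomposition of the total variation.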

9. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

2008-06-01

Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655
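The Gibbs-sampling idea can be shown on a deliberately tiny model: a normal likelihood with unknown mean and variance, alternating draws from each full conditional. This toy (flat prior on the mean, invented data) is far simpler than the additive-plus-dominance animal model of the paper, and the hybrid single-site/blocked scheme is not reproduced.

```python
import math
import random

def gibbs_normal(data, n_iter=3000, seed=1):
    """Toy Gibbs sampler: alternate mu | s2 and s2 | mu draws for a
    normal model (flat prior on mu, reference-style prior on s2)."""
    rng = random.Random(seed)
    n = len(data)
    xbar = sum(data) / n
    s2 = 1.0                       # arbitrary starting value
    draws = []
    for _ in range(n_iter):
        # full conditional: mu | s2, data ~ Normal(xbar, s2 / n)
        mu = rng.gauss(xbar, math.sqrt(s2 / n))
        # full conditional: s2 | mu, data ~ scaled inverse chi-square(n)
        ss = sum((x - mu) ** 2 for x in data)
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n))
        s2 = ss / chi2
        draws.append((mu, s2))
    return draws

data = [4.1, 5.3, 6.2, 4.8, 5.5, 5.0, 6.1, 4.4, 5.9, 5.2]  # invented
draws = gibbs_normal(data)
```

After discarding a burn-in, the (mu, s2) draws approximate the joint posterior; posterior means and credible intervals for the variance components are then read directly off the chain, as in the paper's analysis.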

10. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

2008-01-01

Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To balance computational time against good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single-site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower than the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

11. VARIANCE ANALYSIS OF WOOL WOVEN FABRICS TENSILE STRENGTH USING ANCOVA MODEL

VÎLCU Adrian

2014-05-01

Full Text Available The paper presents a study of the variation in tensile strength of four woven fabrics made from wool-type yarns, depending on fiber composition, warp and weft yarn tensile strength, and technological density, using an ANCOVA regression model. In instances where surveyed groups may have a known history of responding to questions differently, rather than using the traditional sharing method to address those differences, analysis of covariance (ANCOVA) can be employed. ANCOVA shows the correlation between a dependent variable and the covariate independent variables and removes from the dependent variable the variability that can be accounted for by the covariates. The independent and dependent variable structures for multiple regression, factorial ANOVA and ANCOVA tests are similar. ANCOVA is differentiated from the other two in that it is used when the researcher wants to neutralize the effect of a continuous independent variable in the experiment. The researcher may simply not be interested in the effect of a given independent variable when performing a study. Another situation where ANCOVA should be applied is when an independent variable has a strong correlation with the dependent variable but does not interact with other independent variables in predicting the dependent variable's value. ANCOVA is used to neutralize the effect of the more powerful, non-interacting variable. Without this intervention measure, the effects of interacting independent variables can be clouded

12. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20-40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of
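The variance-based ranking described above (S between 0 and 1, estimated from 10^4 to 10^6 random model runs) can be sketched with a standard pick-freeze Monte Carlo estimator. This is a generic illustration, not the authors' implementation; the toy model and sample size are assumptions chosen only so the estimates can be checked against known values.

```python
import numpy as np

# Sketch of variance-based (Sobol) sensitivity analysis using the
# pick-freeze Monte Carlo estimator. The toy model below stands in for
# the RBE/EQD2 biological models of the abstract.
def first_order_sobol(f, sampler, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    A = sampler(rng, n)                 # n x d matrix of input samples
    B = sampler(rng, n)                 # independent second sample
    fA, fB = f(A), f(B)
    var_total = fA.var()
    S = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        ABi = B.copy()
        ABi[:, i] = A[:, i]             # "freeze" input i at the A values
        S[i] = np.mean(fA * (f(ABi) - fB)) / var_total
    return S

# Toy model y = 2*x0 + x1 with x0, x1 ~ U(0, 1):
# the analytic first-order indices are S0 = 4/5 and S1 = 1/5.
S = first_order_sobol(lambda X: 2 * X[:, 0] + X[:, 1],
                      lambda rng, n: rng.random((n, 2)))
```

For this additive toy model the estimator recovers the analytic indices to within Monte Carlo error.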

13. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight.

Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David

2016-01-01

Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss. PMID:27491895
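Meta-analyses of variability such as this one typically work on a log variance ratio (lnVR) effect size. A minimal sketch, assuming the commonly used small-sample-corrected form of lnVR; the exact estimator used by the authors should be checked against the paper.

```python
import math

# Sketch of the log variance ratio (lnVR) effect size for comparing
# variability between a treatment and a control group, with the usual
# small-sample bias correction and its sampling variance.
def ln_vr(sd_t, n_t, sd_c, n_c):
    estimate = (math.log(sd_t / sd_c)
                + 1.0 / (2 * (n_t - 1)) - 1.0 / (2 * (n_c - 1)))
    sampling_variance = 1.0 / (2 * (n_t - 1)) + 1.0 / (2 * (n_c - 1))
    return estimate, sampling_variance

# Treatment group twice as variable as control, equal sample sizes:
# the bias corrections cancel and lnVR = ln(2).
estimate, sampling_variance = ln_vr(sd_t=4.0, n_t=25, sd_c=2.0, n_c=25)
```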

14. Budget Variance Analysis of a Departmentwide Implementation of a PACS at a Major Academic Medical Center

Reddy, Arra Suresh; Loh, Shaun; Kane, Robert A.

2006-01-01

In this study, the costs and cost savings associated with departmentwide implementation of a picture archiving and communication system (PACS) as compared to the projected budget at the time of inception were evaluated. An average of $214,460 was saved each year with a total savings of $1,072,300 from 1999 to 2003, which is significantly less than the $2,943,750 projected savings. This discrepancy can be attributed to four different factors: (1) overexpenditures, (2) insufficient cost savings...

15. The Benefit of Regional Diversification of Cogeneration Investments in Europe: A Mean-Variance Portfolio Analysis

Westner, Günther; Madlener, Reinhard

2009-01-01

The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standar...

16. Budget variance analysis of a departmentwide implementation of a PACS at a major academic medical center.

Reddy, Arra Suresh; Loh, Shaun; Kane, Robert A

2006-01-01

In this study, the costs and cost savings associated with departmentwide implementation of a picture archiving and communication system (PACS) as compared to the projected budget at the time of inception were evaluated. An average of $214,460 was saved each year with a total savings of $1,072,300 from 1999 to 2003, which is significantly less than the $2,943,750 projected savings. This discrepancy can be attributed to four different factors: (1) overexpenditures, (2) insufficient cost savings, (3) unanticipated costs, and (4) project management issues. Although the implementation of PACS leads to cost savings, actual savings will be much lower than expected unless extraordinary care is taken when devising the budget. PMID:16946989

17. Regional sensitivity analysis using revised mean and variance ratio functions

The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating the model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure
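The single-sample Monte Carlo idea behind mean and variance ratio functions can be sketched by conditioning one set of samples on the reduced input range. The model, ranges, and function name below are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

# Sketch: estimate, from a single set of Monte Carlo samples, how the
# output mean and variance change when the range of one input is reduced.
def ratio_functions(X, y, i, lo, hi):
    keep = (X[:, i] >= lo) & (X[:, i] <= hi)   # samples in the reduced range
    mean_ratio = y[keep].mean() / y.mean()
    variance_ratio = y[keep].var() / y.var()
    return mean_ratio, variance_ratio

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200_000, 2))
y = 3.0 + X[:, 0] + X[:, 1] ** 2
# Restricting x0 to [-0.1, 0.1] barely moves the mean but removes almost
# all of x0's share of the output variance.
mean_ratio, variance_ratio = ratio_functions(X, y, 0, -0.1, 0.1)
```

Analytically the mean ratio is 1 and the variance ratio is about 0.22 for this toy model, since x0 contributes 1/3 of about 0.42 total variance.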

18. Two-dimensional finite-element temperature variance analysis

Heuser, J. S.

1972-01-01

The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.

19. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals of less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

20. Variance associated with the use of relative velocity for force platform gait analysis in a heterogeneous population of clinically normal dogs.

Volstad, Nicola; Nemke, Brett; Muir, Peter

2016-01-01

Factors that contribute to variance in ground reaction forces (GRFs) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population, it may be preferable to minimize data variance and efficiently perform force platform gait analysis by evaluating each individual dog at its preferred velocity, such that dogs are studied at a similar relative velocity (V*). Data from 27 normal dogs were obtained, including withers and shoulder height. Each dog was trotted across a force platform at its preferred velocity, with controlled acceleration (±0.5 m/s^2). V* ranges were created for withers and shoulder height. Variance effects from 12 trotting velocity ranges and associated V* ranges were examined using repeated-measures analysis of covariance. Mean bodyweight was 24.4 ± 7.4 kg. Individual dog, velocity, and V* significantly influenced GRF (P < 0.001). Trial number significantly influenced thoracic limb peak vertical force (PVF) (P < 0.001). Limb effects were not significant. The magnitude of variance effects was greatest for the dog effect. Withers height V* was associated with small GRF variance. Narrow velocity ranges typically captured a smaller percentage of trials and were not consistently associated with lower variance. The withers height V* range of 0.6-1.05 captured the largest proportion of trials (95.9 ± 5.9%) with no significant effects on PVF and vertical impulse. The use of individual velocity ranges derived from a withers height V* range of 0.6-1.05 will account for population heterogeneity while minimizing exacerbation of lameness in clinical trials studying lame dogs by efficient capture of valid trials. PMID:26631945
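The V* screening described above can be sketched as a simple validity filter. The definition below assumes the Froude-style dimensionless normalisation v / sqrt(g * h), which is consistent with the 0.6-1.05 window reported in the abstract, but the authors' exact definition of V* should be checked against the paper.

```python
import math

# Sketch of relative-velocity normalisation by withers height h.
# Assumption: V* = v / sqrt(g * h), a Froude-number-style quantity.
G = 9.81  # gravitational acceleration, m/s^2

def relative_velocity(v_m_s, withers_height_m):
    return v_m_s / math.sqrt(G * withers_height_m)

def trial_is_valid(v_m_s, withers_height_m, lo=0.6, hi=1.05):
    return lo <= relative_velocity(v_m_s, withers_height_m) <= hi

# A 0.6 m tall dog trotting at 2 m/s falls inside the window,
# while the same dog at 4 m/s does not.
ok = trial_is_valid(2.0, 0.6)
too_fast = trial_is_valid(4.0, 0.6)
```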

1. Common, Specific, and Error Variance Components of Factor Models

Raffalovich, Lawrence E.; Bohrnstedt, George W.

1987-01-01

In the classic factor-analysis model, the total variance of an item is decomposed into common, specific, and random error components. Since with cross-sectional data it is not possible to estimate the specific variance component, specific and random error variance are summed to the item's uniqueness. This procedure imposes a downward bias to item reliability estimates, however, and results in correlated item uniqueness in longitudinal models. In this article, we describe a method for estimati...

2. Aspects of First Year Statistics Students' Reasoning When Performing Intuitive Analysis of Variance: Effects of Within- and Between-Group Variability

Trumpower, David L.

2015-01-01

Making inferences about population differences based on samples of data, that is, performing intuitive analysis of variance (IANOVA), is common in everyday life. However, the intuitive reasoning of individuals when making such inferences (even following statistics instruction) often differs from the normative logic of formal statistics. The…

3. Analysis of Variance in Vocabulary Learning Strategies Theory and Practice: A Case Study in Libya

Salma H M Khalifa

2016-06-01

Full Text Available The present study is an outcome of a concern for the teaching of English as a foreign language (EFL) in Libyan schools. Learning a foreign language is invariably linked to learners building a good repertoire of vocabulary in the target language, which takes us to the theory and practice of imparting training in vocabulary learning strategies (VLSs) to learners. The researcher observed that there exists a divergence between theoretical knowledge of VLSs and practical training of learners in using the strategies in EFL classes in Libyan schools. To empirically examine the situation, a survey was conducted with secondary school English teachers. The study discusses the results of the survey. The results show that teachers of English in secondary schools in Libya are either not aware of various vocabulary learning strategies, or, if they are, they do not impart training in all VLSs, as they do not realize that a judicious use of all VLSs is required to achieve good results in language learning. Though the study was conducted on a small scale, the results are highly encouraging. Keywords: vocabulary learning strategies, vocabulary learning theory, teaching of vocabulary learning strategies

4. A variance analysis of the capacity displaced by wind energy in Europe

Giebel, Gregor

2007-01-01

Wind energy generation distributed all over Europe is less variable than generation from a single region. To analyse the benefits of distributed generation, the whole electrical generation system of Europe has been modelled including varying penetrations of wind power. The model is chronologically simulating the scheduling of the European power plants to cover the demand at every hour of the year. The wind power generation was modelled using wind speed measurements from 60 meteorological stations, for 1 year. The distributed wind power also displaces fossil-fuelled capacity. However, every assessment … detail into a longer-term context. The results are that wind energy can contribute more than 20% of the European demand without significant changes in the system and can replace conventional capacity worth about 10% of the installed wind power capacity. The long-term reference shows that the analysed…

5. Model selection and analysis tools in response surface modeling of the process mean and variance

Griffiths, Kristi L.

1995-01-01

Product improvement is a serious issue facing industry today. While response surface methods have been developed that address the process mean involved in improving the product, little research has been done on process variability. Lack of quality in a product can be attributed to its inconsistency in performance, thereby highlighting the need for a methodology that addresses process variability. The key to working with the process variability comes in the hand...

6. Influence of Family Structure on Variance Decomposition

Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained genetic variance. However, in Holstein cattle, a group of genes that explained close to none of the genetic variance could also have a high likelihood ratio. This is still a good separation of signal and noise, but instead of capturing the genetic signal in the marker set being tested, we are instead capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups.

7. FMRI group analysis combining effect estimates and their variances.

Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W

2012-03-01

Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach
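The core ingredient MEMA adds over conventional group analysis, carrying each subject's effect estimate together with its variance, can be illustrated with plain inverse-variance weighting. This is a sketch of the principle, not the MEMA estimator itself.

```python
import numpy as np

# Sketch: combine per-subject effect estimates using their estimated
# variances (inverse-variance weights) instead of treating every
# subject's estimate as equally precise.
def inverse_variance_combine(estimates, variances):
    w = 1.0 / np.asarray(variances, dtype=float)
    combined = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
    return combined, float(1.0 / np.sum(w))   # combined effect and its variance

# A precise subject (variance 1/3) pulls the group estimate toward its
# value more strongly than an imprecise one (variance 1).
combined, combined_variance = inverse_variance_combine([2.0, 4.0], [1.0, 1.0 / 3.0])
```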

8. A two-dimensional analysis of variance for the simultaneous quantitative determination of estradiol and its 4-14C isotopically labelled analogue with selected ion monitoring

Estradiol-17β and [4-14C]estradiol-17β in total mass amounts of 0, 10 and 20 pg per analysis, in the presence of 1000 pg [2H8]estradiol as an internal standard, were simultaneously and quantitatively determined by gas chromatography-mass spectrometry using the selected ion monitoring technique. The experiment was designed to investigate the two-dimensional error structure, which facilitates the study and comparison of the variances, and showed that in this concentration range the variance due to sample preparation is smaller than that due to the device. The deviations due to the device were shown to have a two-dimensional normal distribution. (author)

9. Simultaneous optimal estimates of fixed effects and variance components in the mixed model

WU Mixia; WANG Songgui

2004-01-01

For a general linear mixed model with two variance components, a set of simple conditions is obtained under which (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of the variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) exact confidence intervals for the fixed effects and uniformly optimal unbiased tests on the variance components are given; and (iii) the exact probability expression for the ANOVA estimates of variance components taking negative values is obtained.
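For the balanced one-way random-effects special case, the ANOVA (method-of-moments) variance-component estimates referred to in point (i), including their ability to take negative values as in point (iii), can be sketched as follows; the simulated data are illustrative only.

```python
import numpy as np

# Sketch of classical ANOVA (method-of-moments) variance-component
# estimates for the balanced one-way random-effects model
# y_ij = mu + a_i + e_ij, the simplest case of the mixed model above.
def anova_variance_components(y):
    a, n = y.shape                       # a groups, n observations per group
    group_means = y.mean(axis=1)
    msa = n * ((group_means - y.mean()) ** 2).sum() / (a - 1)
    mse = ((y - group_means[:, None]) ** 2).sum() / (a * (n - 1))
    sigma2_a = (msa - mse) / n           # can be negative, as noted in (iii)
    sigma2_e = mse
    return sigma2_a, sigma2_e

rng = np.random.default_rng(7)
groups, n = 500, 10
group_effects = rng.normal(0.0, 2.0, size=(groups, 1))             # sigma_a^2 = 4
y = 5.0 + group_effects + rng.normal(0.0, 1.0, size=(groups, n))   # sigma_e^2 = 1
sigma2_a, sigma2_e = anova_variance_components(y)
```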

10. Power generation mixes evaluation applying the mean-variance theory. Analysis of the choices for Japanese energy policy

Optimal Japanese power generation mixes in 2030, for both economic efficiency and energy security (less cost variance risk), are evaluated by applying the mean-variance portfolio theory. Technical assumptions, including remaining generation capacity out of the present generation mix, future load duration curve, and Research and Development risks for some renewable energy technologies in 2030, are taken into consideration as either the constraints or parameters for the evaluation. Efficiency frontiers, which consist of the optimal generation mixes for several future scenarios, are identified, taking not only power balance but also capacity balance into account, and are compared with three power generation mixes submitted by the Japanese government as 'the choices for energy and environment'. (author)
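The mean-variance arithmetic underlying such an efficiency frontier is the standard portfolio computation: the expected cost of a generation mix is a weighted mean, and the cost variance is a quadratic form in the technology shares. The cost figures and covariances below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Sketch of mean-variance evaluation of a two-technology generation mix.
def portfolio_mean_variance(weights, mean_costs, cost_covariance):
    w = np.asarray(weights, dtype=float)
    mu = np.asarray(mean_costs, dtype=float)
    cov = np.asarray(cost_covariance, dtype=float)
    return float(w @ mu), float(w @ cov @ w)

mean_costs = [60.0, 90.0]            # illustrative expected cost per MWh
cost_covariance = [[400.0, -50.0],   # fuel-price risk dominates the first
                   [-50.0, 25.0]]    # technology; mild negative correlation
mix_mean, mix_variance = portfolio_mean_variance([0.5, 0.5],
                                                 mean_costs, cost_covariance)
```

Sweeping the weights and keeping the mixes with the lowest variance at each expected cost traces out the efficiency frontier discussed in the abstract.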

11. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

Menard, Richard; Chang, Lang-Ping

1998-01-01

A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the

12. Variance decomposition of apolipoproteins and lipids in Danish twins

Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A; Kyvik, Kirsten O

2007-01-01

OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent the twin studies have been used in bivariate or multivariate analysis to elucidate common genetic factors to two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect…

13. Electrocardiogram signal variance analysis in the diagnosis of coronary artery disease--a comparison with exercise stress test in an angiographically documented high prevalence population.

Nowak, J; Hagerman, I; Ylén, M; Nyquist, O; Sylvén, C

1993-09-01

Variance electrocardiography (variance ECG) is a new resting procedure for detection of coronary artery disease (CAD). The method measures variability in the electrical expression of the depolarization phase induced by this disease. The time-domain analysis is performed on 220 cardiac cycles using high-fidelity ECG signals from 24 leads, and the phase-locked temporal electrical heterogeneity is expressed as a nondimensional CAD index (CAD-I) with values of 0-150. This study compares the diagnostic efficiency of variance ECG and the exercise stress test in a high-prevalence population. A total of 199 symptomatic patients evaluated with coronary angiography were subjected to variance ECG and an exercise test on a bicycle ergometer as a continuous ramp. The discriminant accuracy of the two methods was assessed employing receiver operating characteristic curves constructed by successive consideration of several CAD-I cutpoint values and various threshold criteria based on ST-segment depression exclusively or in combination with exertional chest pain. Of these patients, 175 with CAD (≥50% luminal stenosis in one or more major epicardial arteries) presented a mean CAD-I of 88 ± 22, compared with 70 ± 21 in 24 nonaffected patients. At a CAD-I cutpoint of ≥70, compared with ST-segment depression ≥1 mm combined with exertional chest pain, the overall sensitivity of variance ECG was significantly higher (p < 0.01) than that of the exercise test (79 vs. 48%). When combined, the two methods identified 93% of coronary angiography positive cases. Variance ECG is an efficient diagnostic method which compares favorably with the exercise test for detection of CAD in a high-prevalence population. PMID:8242912

14. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods

15. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

Milias-Argeitis, Andreas; Lygeros, John; Khammash, Mustafa

2014-07-01

We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.

16. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean-variance approach

Kitzing, Lena

2014-01-01

Different support instruments for renewable energy expose investors differently to market risks. This has implications for the attractiveness of investment. We use mean-variance portfolio analysis to identify the risk implications of two support instruments: feed-in tariffs and feed-in premiums. Using cash flow analysis, Monte Carlo simulations and mean-variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same attractiveness for investment, because they expose investors to less market risk. These risk implications should be considered when designing policy schemes.

17. The variance of the adjusted Rand index.

Steinley, Douglas; Brusco, Michael J; Hubert, Lawrence

2016-06-01

For 30 years, the adjusted Rand index has been the preferred method for comparing 2 partitions (e.g., clusterings) of a set of observations. Although the index is widely used, little is known about its variability. Herein, the variance of the adjusted Rand index (Hubert & Arabie, 1985) is provided and its properties are explored. It is shown that a normal approximation is appropriate across a wide range of sample sizes and varying numbers of clusters. Further, it is shown that confidence intervals based on the normal distribution have desirable levels of coverage and accuracy. Finally, the first power analysis evaluating the ability to detect differences between 2 different adjusted Rand indices is provided. (PsycINFO Database Record) PMID:26881693
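For reference, the adjusted Rand index itself is a closed-form function of the contingency table between the two partitions. A minimal sketch of the Hubert and Arabie (1985) formula (function name is ours):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(u, v):
    """Adjusted Rand index between two partitions given as label sequences."""
    n = len(u)
    nij = Counter(zip(u, v))                    # contingency table counts
    a, b = Counter(u), Counter(v)
    sum_ij = sum(comb(c, 2) for c in nij.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)       # chance-expected index
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions score 1 regardless of how the labels are named, and the index is 0 in expectation under random labelling, which is exactly the correction the "adjusted" in the name refers to.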

18. An effective approximation for variance-based global sensitivity analysis

The paper presents a fairly efficient approximation for the computation of variance-based sensitivity measures associated with a general, n-dimensional function of random variables. The proposed approach is based on a multiplicative version of the dimensional reduction method (M-DRM), in which a given complex function is approximated by a product of low-dimensional functions. Together with Gaussian quadrature, the use of M-DRM significantly reduces the computational effort associated with global sensitivity analysis. An important and practical benefit of the M-DRM is the algebraic simplicity and closed-form nature of the sensitivity coefficient formulas. Several examples are presented to show that the M-DRM method is as accurate as results obtained from simulations and other approximations reported in the literature.

19. 42 CFR 456.522 - Content of request for variance.

2010-10-01

... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section..., and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from Time Requirements § 456.522 Content of request for variance. The agency's request for a variance must include—...

20. Inhomogeneity-induced variance of cosmological parameters

Wiegand, Alexander

2011-01-01

Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. So, how can local measurements (at the 100 Mpc scale) be used to determine global cosmological parameters (defined at the 10 Gpc scale)? We use Buchert's averaging formalism and determine a set of locally averaged cosmological parameters in the context of the flat Lambda cold dark matter model. We calculate their ensemble means (i.e. their global values) and variances (i.e. their cosmic variances). We apply our results to typical survey geometries and focus on the study of the effects of local fluctuations of the curvature parameter. By this means we show that, in the linear regime, cosmological backreaction and averaging can be reformulated as the issue of cosmic variance. The cosmic variance is found to be largest for the curvature parameter, and we discuss some of its consequences. We further propose to use the observed variance of cosmological parameters to measure the growth factor.

1. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean–variance approach

Different support instruments for renewable energy expose investors differently to market risks. This has implications for the attractiveness of investment. We use mean–variance portfolio analysis to identify the risk implications of two support instruments: feed-in tariffs and feed-in premiums. Using cash flow analysis, Monte Carlo simulations and mean–variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same attractiveness for investment, because they expose investors to less market risk. These risk implications should be considered when designing policy schemes. - Highlights: • Mean–variance portfolio approach to analyse risk implications of policy instruments. • We show that feed-in tariffs require lower support levels than feed-in premiums. • This systematic effect stems from the lower exposure of investors to market risk. • We created a stochastic model for an exemplary offshore wind park in West Denmark. • We quantify risk-return, Sharpe Ratios and differences in required support levels.
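To illustrate the mechanism the record describes, the following sketch simulates one-period revenues under a fixed feed-in tariff versus a market-price-plus-premium scheme. All distributions, price levels and the premium are illustrative assumptions of ours, not the paper's calibration for the West Denmark case:

```python
import random
from statistics import mean, variance

def simulate_revenues(n=10000, seed=7):
    """Monte Carlo revenues for a hypothetical wind park under two support schemes.

    Feed-in tariff: fixed 60/MWh. Feed-in premium: market price + 20/MWh.
    Price and production distributions are illustrative assumptions only.
    """
    rng = random.Random(seed)
    fit, premium = [], []
    for _ in range(n):
        price = rng.gauss(40.0, 10.0)        # assumed market price, EUR/MWh
        production = rng.gauss(100.0, 10.0)  # assumed output, MWh
        fit.append(60.0 * production)        # tariff shields revenue from price risk
        premium.append((price + 20.0) * production)
    return fit, premium
```

With the support levels chosen so that expected revenues coincide, the premium scheme shows a markedly larger revenue variance, which is the intuition behind the paper's finding that tariffs can deliver the same investment attractiveness at a lower support level.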

2. A Monte Carlo Study of Seven Homogeneity of Variance Tests

Howard B. Lee

2010-01-01

Problem statement: The decision by SPSS (now PASW) to use the unmodified Levene test to test homogeneity of variance was questioned. It was compared to six other tests. In total, seven homogeneity of variance tests used in Analysis of Variance (ANOVA) were compared on robustness and power using Monte Carlo studies. The homogeneity of variance tests were (1) Levene, (2) modified Levene, (3) Z-variance, (4) Overall-Woodward Modified Z-variance, (5) O'Brien, (6) Samiuddin Cube Root and (7) F-Max. Approach: Each test was subjected to Monte Carlo analysis across differently shaped distributions: (1) normal, (2) platykurtic, (3) leptokurtic, (4) moderately skewed and (5) highly skewed. The Levene test is the one used in all of the latest versions of SPSS. Results: The results from these studies showed that the Levene test is neither the best nor the worst in terms of robustness and power. However, the modified Levene test showed very good robustness when compared to the other tests but lower power than other tests. The Samiuddin test is at its best in terms of robustness and power when the distribution is normal. The results of this study showed the strengths and weaknesses of the seven tests. Conclusion/Recommendations: No single test outperformed the others in terms of robustness and power. The authors recommend that kurtosis and skewness indices be presented in statistical computer program packages such as SPSS to guide the data analyst in choosing which test would provide the highest robustness and power.
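The first two tests in the comparison are easy to state: Levene's W is a one-way ANOVA carried out on absolute deviations from each group's centre, and the modified (Brown–Forsythe) variant simply replaces the mean with the median. A minimal sketch (function name is ours):

```python
from statistics import mean, median

def levene_w(groups, center=mean):
    """Levene's W statistic; center=median gives the modified (Brown-Forsythe) test."""
    z = []
    for g in groups:
        c = center(g)
        z.append([abs(y - c) for y in g])      # absolute deviations from group centre
    k = len(groups)
    n = sum(len(zi) for zi in z)
    zbar_i = [mean(zi) for zi in z]            # per-group mean deviation
    zbar = sum(v for zi in z for v in zi) / n  # grand mean deviation
    num = (n - k) * sum(len(zi) * (zb - zbar) ** 2 for zi, zb in zip(z, zbar_i))
    den = (k - 1) * sum((v - zb) ** 2 for zi, zb in zip(z, zbar_i) for v in zi)
    return num / den
```

Under the null of equal variances, W is approximately F-distributed with (k − 1, N − k) degrees of freedom, so the statistic is compared against an F critical value just like an ordinary ANOVA.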

3. Genomic prediction of breeding values using previously estimated SNP variances

Calus, M.P.L.; Schrooten, C.; Veerkamp, R.F.

2014-01-01

Background Genomic prediction requires estimation of variances of effects of single nucleotide polymorphisms (SNPs), which is computationally demanding, and uses these variances for prediction. We have developed models with separate estimation of SNP variances, which can be applied infrequently, and

4. Inhomogeneity-induced variance of cosmological parameters

Wiegand, A.; Schwarz, D. J.

2012-02-01

Context. Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. This raises the question of how local measurements (at the ~10² Mpc scale) can be used to determine the global cosmological parameters (defined at the ~10⁴ Mpc scale). Aims: We connect the questions of cosmological backreaction, cosmic averaging and the estimation of cosmological parameters and show how they relate to the problem of cosmic variance. Methods: We used Buchert's averaging formalism and determined a set of locally averaged cosmological parameters in the context of the flat Λ cold dark matter model. We calculated their ensemble means (i.e. their global value) and variances (i.e. their cosmic variance). We applied our results to typical survey geometries and focused on the study of the effects of local fluctuations of the curvature parameter. Results: We show that in the context of standard cosmology at large scales (larger than the homogeneity scale and in the linear regime), the question of cosmological backreaction and averaging can be reformulated as the question of cosmic variance. The cosmic variance is found to be highest in the curvature parameter. We propose to use the observed variance of cosmological parameters to measure the growth factor. Conclusions: Cosmological backreaction and averaging are real effects that have been measured already for a long time, e.g. by the fluctuations of the matter density contrast averaged over spheres of a certain radius. Backreaction and averaging effects from scales in the linear regime, as considered in this work, are shown to be important for the precise measurement of cosmological parameters.

5. Decomposition of variance for spatial Cox processes

Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

2013-01-01

Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

6. The relation of the Allan- and Delta-variance to the continuous wavelet transform

Zielinsky, M.; Stutzki, J.

1999-01-01

This paper is understood as a supplement to the paper by [Stutzki et al, 1998], where we have shown the usefulness of the Allan-variance and its higher dimensional generalization, the Delta-variance, for the characterization of molecular cloud structures. In this study we present the connection between the Allan- and Delta-variance and a more popular structure analysis tool: the wavelet transform. We show that the Allan- and Delta-variances are the variances of wavelet transform coefficients.
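The (non-overlapping) Allan variance underlying this analysis is simply half the mean squared difference of successive block averages of the signal. A minimal sketch:

```python
def allan_variance(x, m):
    """Non-overlapping Allan variance of sequence x at averaging length m.

    sigma^2(m) = 0.5 * mean of (ybar_{i+1} - ybar_i)^2 over adjacent block means.
    """
    n = len(x) // m
    means = [sum(x[i * m:(i + 1) * m]) / m for i in range(n)]   # block averages
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n - 1)]
    return 0.5 * sum(diffs) / len(diffs)
```

Computing this over a range of averaging lengths m and plotting it log-log is what reveals the power-law noise structure; the Delta-variance generalizes the same differencing idea to higher-dimensional maps, which is where the wavelet connection described in the record comes in.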

7. Evaluation of the oscillatory interference model of grid cell firing through analysis and measured period variance of some biological oscillators.

Eric A Zilli

2009-11-01

Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators is expected to maintain a stable grid for approximately t = 5μ³/(4πσ²) seconds, where μ is the mean period of an oscillator in seconds and σ² its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit-level effects which might reduce oscillator variability. Further implications for grid cell models are discussed.
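The stability criterion quoted in the abstract, t = 5μ³/(4πσ²), is directly computable from measured oscillator statistics; a small helper (name is ours):

```python
import math

def grid_stability_time(mu, sigma2):
    """Expected time (s) a pair of oscillators maintains a stable grid,
    per the abstract's criterion t = 5*mu^3 / (4*pi*sigma^2).

    mu: mean oscillator period in seconds; sigma2: period variance in s^2.
    """
    return 5.0 * mu ** 3 / (4.0 * math.pi * sigma2)
```

The cubic dependence on the mean period and inverse dependence on the period variance is why the slow, variable biological oscillators examined in the record fall so far short of the minutes-long stability seen in grid cell recordings.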

8. ROBUST ESTIMATION OF VARIANCE COMPONENTS MODEL

1999-01-01

Classical least squares estimation consists of minimizing the sum of the squared residuals of observation. Many authors have produced more robust versions of this estimation by replacing the square by something else, such as the absolute value. These approaches have been generalized, and their robust estimations and influence functions of variance components have been presented. The results may have wide practical and theoretical value.

9. LOCAL MEDIAN ESTIMATION OF VARIANCE FUNCTION

杨瑛

2004-01-01

This paper considers local median estimation in fixed design regression problems. The proposed method is employed to estimate the median function and the variance function of a heteroscedastic regression model. Strong convergence rates of the proposed estimators are obtained. Simulation results are given to show the performance of the proposed methods.

10. Lorenz Dominance and the Variance of Logarithms.

Ok, Efe A.; Foster, James

1997-01-01

The variance of logarithms is a widely used inequality measure which is well known to disagree with the Lorenz criterion. Up to now, the extent and likelihood of this inconsistency were thought to be vanishingly small. We find that this view is mistaken: the extent of the disagreement can be extremely large; the likelihood is far from negligible.

11. Linear transformations of variance/covariance matrices

Parois, P.J.A.; Lutz, M.

2011-01-01

Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance matrix.
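The transformation rule in question is the standard propagation Σ' = J Σ Jᵀ for a linear map with matrix J; the standard uncertainty of each new parameter is then the square root of the corresponding diagonal entry of Σ'. A dependency-free sketch:

```python
def transform_covariance(J, Sigma):
    """Propagate a covariance matrix through the linear map y = J x.

    J: n x m transformation matrix, Sigma: m x m covariance of x.
    Returns the n x n covariance of y, i.e. J Sigma J^T.
    """
    n, m = len(J), len(Sigma)
    # JS = J * Sigma  (n x m)
    JS = [[sum(J[i][k] * Sigma[k][j] for k in range(m)) for j in range(m)]
          for i in range(n)]
    # result = JS * J^T  (n x n)
    return [[sum(JS[i][k] * J[j][k] for k in range(m)) for j in range(n)]
            for i in range(n)]
```

Note that propagating only the diagonal (the variances) is wrong whenever the original parameters are correlated, which is exactly why the full variance/covariance matrix is needed.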

12. Decomposition of variance for spatial Cox processes

Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

13. Decomposition of variance for spatial Cox processes

Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with...

14. A multi-variance analysis in the time domain

Walter, Todd

1993-01-01

Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.

15. Longitudinal analysis of residual feed intake and BW in mink using random regression with heterogeneous residual variance.

Shirali, M; Nielsen, V H; Møller, S H; Jensen, J

2015-10-01

The aim of this study was to determine the genetic background of longitudinal residual feed intake (RFI) and BW gain in farmed mink using random regression methods considering heterogeneous residual variances. The individual BW was measured every 3 weeks from 63 to 210 days of age for 2139 male+female pairs of juvenile mink during the growing-furring period. Cumulative feed intake was calculated six times at 3-week intervals based on daily feed consumption between weighings from 105 to 210 days of age. Genetic parameters for RFI and BW gain in males and females were obtained using univariate random regression with Legendre polynomials containing an animal genetic effect and a permanent environmental effect of litter, along with heterogeneous residual variances. Heritability estimates for RFI increased with age from 0.18 (0.03, posterior standard deviation (PSD)) at 105 days of age to 0.49 (0.03, PSD) and 0.46 (0.03, PSD) at 210 days of age in male and female mink, respectively. The heritability estimates for BW gain increased with age and had a moderate to high range for males (0.33 (0.02, PSD) to 0.84 (0.02, PSD)) and females (0.35 (0.03, PSD) to 0.85 (0.02, PSD)). RFI estimates during the growing period (105 to 126 days of age) showed high positive genetic correlations with the pelting RFI (210 days of age) in males (0.86 to 0.97) and females (0.92 to 0.98). However, phenotypic correlations were lower, from 0.47 to 0.76 in males and 0.61 to 0.75 in females. Furthermore, BW records in the growing period (63 to 126 days of age) had moderate (males: 0.39, females: 0.53) to high (males: 0.87, females: 0.94) genetic correlations with pelting BW (210 days of age). The results of the current study showed that RFI and BW in mink are highly heritable, especially in the late furring period, suggesting potential for large genetic gains for these traits. The genetic correlations suggested that substantial genetic gain can be obtained by considering only the RFI estimate and BW at pelting.

16. Deriving dispersional and scaled windowed variance analyses using the correlation function of discrete fractional Gaussian noise

RAYMOND, GARY M.; Bassingthwaighte, James B.

1999-01-01

Methods for estimating the fractal dimension, D, or the related Hurst coefficient, H, for a one-dimensional fractal series include Hurst’s method of rescaled range analysis, spectral analysis, dispersional analysis, and scaled windowed variance analysis (which is related to detrended fluctuation analysis). Dispersional analysis estimates H by using the variance of the grouped means of discrete fractional Gaussian noise series (DfGn). Scaled windowed variance analysis estimates H using the mea...
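Dispersional analysis can be sketched in a few lines: for a fractional Gaussian noise, the standard deviation of m-point group means scales as m^(H−1), so H is one plus the slope of log SD against log m. A rough illustration of that estimator (not the authors' derivation via the DfGn correlation function); for white noise the true value is H = 0.5:

```python
import math

def dispersional_H(x, group_sizes):
    """Estimate the Hurst coefficient H by dispersional analysis:
    SD of m-point group means scales as m**(H - 1)."""
    pts = []
    for m in group_sizes:
        n = len(x) // m
        means = [sum(x[i * m:(i + 1) * m]) / m for i in range(n)]
        mu = sum(means) / n
        sd = math.sqrt(sum((v - mu) ** 2 for v in means) / (n - 1))
        pts.append((math.log(m), math.log(sd)))
    # least-squares slope of log(SD) vs log(m)
    mx = sum(p for p, _ in pts) / len(pts)
    my = sum(q for _, q in pts) / len(pts)
    slope = (sum((p - mx) * (q - my) for p, q in pts)
             / sum((p - mx) ** 2 for p, _ in pts))
    return slope + 1.0
```

Group sizes should stay well below the series length so that each SD estimate is based on many groups; the smallest group counts dominate the estimator's variance.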

17. The Theory of Variances in Equilibrium Reconstruction

Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

2008-01-14

The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

18. The Theory of Variances in Equilibrium Reconstruction

The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

19. Impact of Damping Uncertainty on SEA Model Response Variance

Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

2010-01-01

Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

20. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

Parolini, Giuditta

2015-01-01

During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them. PMID:25311906

1. 42 CFR 456.525 - Request for renewal of variance.

2010-10-01

... 42 Public Health 4 2010-10-01 2010-10-01 false Request for renewal of variance. 456.525 Section..., and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from Time Requirements § 456.525 Request for renewal of variance. (a) The agency must submit a request for renewal of...

2. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

Shanshan Gu; Jianye Liu; Qinghua Zeng; Shaojun Feng; Pin Lv

2015-01-01

To solve the problem that dynamic Allan variance (DAVAR) with fixed length of window cannot meet the identification accuracy requirement of fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristic of FOG signal, a fuzzy controller with the inputs of the first and second derivatives of FOG signal is designed to estimate the window length of the DAVAR. Then th...

3. Use of statistical methods of variance analysis in the qualification of the suppliers of nuclear fuel components

The purpose of quality assurance of the suppliers of materials used in the fabrication of nuclear fuel is to determine, as accurately as possible, the capacity of a supplier to provide a product meeting certain pre-specified requirements. The outcome of an assessment obviously depends on the extent to which the requirements are met. If the matter rests there, however, part of the information available remains unused, for the relative influence of the various factors capable of affecting fabrication quality is not considered. This kind of problem can be dealt with effectively by statistical experiment planning. After briefly recapitulating the definitions and basic principles governing the statistical use of experiment plans, the author gives two examples taken from the fabrication of PWR fuels: (1) A new UO2 pellet fabrication operation involves certain peculiarities whose influence on the final quality of the product needs to be known. The problems raised by the use of shaft-type batch-furnaces are examined by organizing experiment plan tests. The conclusions drawn from the statistical analysis reveal certain sensitive points in the fabrication operation (especially furnace homogeneity). Special precautions can be taken on the basis of these conclusions. (2) Fuel assembly frames are made by an automatic seaming machine. The assessment of this equipment involves a systematic study of expansion dimensions by another type of experiment plan. The results of the statistical analysis show that it is the seaming tools which affect the dimensions most. The most critical tools are identified. Special inspection measures can be taken to check their wear. The author shows the importance of this type of study, which, going beyond mere checking for conformity, provides a better knowledge of and enables one to specify the critical fabrication factors. It is obvious that, with such better knowledge, one can guarantee the final quality of the product more effectively. (author)

4. A univariate analysis of variance design for multiple-choice feeding-preference experiments: A hypothetical example with fruit-eating birds

Larrinaga, Asier R.

2010-01-01

I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design, for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays each containing a known weight of artificial fruits (red, blue, black, or green) are introduced into the cage, while four equivalent trays are left outside the cage, to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.

5. Variance analysis and linear contracts in agencies with distorted performance measures

Budde, Jörg

2007-01-01

This paper investigates the role of variance analysis procedures in aligning objectives under the condition of distorted performance measurement. A risk-neutral agency with linear contracts is analyzed, whereby the agent receives post-contract, pre-decision information on his productivity. If the performance measure is informative with respect to the agent's marginal product concerning the principal's objective, variance investigation can alleviate effort misallocation. These results carry ...

6. The derivative based variance sensitivity analysis for the distribution parameters and its computation

The output variance is an important measure for the performance of a structural system, and it is always influenced by the distribution parameters of inputs. In order to identify the influential distribution parameters and make it clear that how those distribution parameters influence the output variance, this work presents the derivative based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes the derivative based main and total sensitivity indices. By transforming the derivatives of various orders variance contributions into the form of expectation via kernel function, the proposed main and total sensitivity indices can be seen as the “by-product” of Sobol′s variance based sensitivity analysis without any additional output evaluation. Since Sobol′s variance based sensitivity indices have been computed efficiently by the sparse grid integration method, this work also employs the sparse grid integration method to compute the derivative based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method
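The Sobol′ first-order index on which the proposed derivative-based indices piggyback can itself be estimated by a generic pick-freeze Monte Carlo scheme (this is a standard textbook estimator, not the sparse grid integration method used in the record):

```python
import random

def sobol_first_order(f, n_vars, i, n=50000, seed=1):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i
    of f(x) for independent U(0,1) inputs x."""
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(n_vars)]
        xf = [rng.random() for _ in range(n_vars)]
        xf[i] = x[i]                  # freeze coordinate i, resample all others
        ya.append(f(x))
        yb.append(f(xf))
    m_a = sum(ya) / n
    m_b = sum(yb) / n
    var = sum(v * v for v in ya) / n - m_a * m_a
    cov = sum(a * b for a, b in zip(ya, yb)) / n - m_a * m_b
    return cov / var                  # Cov(Y, Y_i) / Var(Y)
```

For the additive test function Y = 3X₁ + X₂ the exact indices are S₁ = 0.9 and S₂ = 0.1, so the estimator can be sanity-checked against known values.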

7. Using a variance-based sensitivity analysis for analyzing the relation between measurements and unknown parameters of a physical model

Zhao, J.; Tiede, C.

2011-05-01

An implementation of uncertainty analysis (UA) and quantitative global sensitivity analysis (SA) is applied to the non-linear inversion of gravity changes and three-dimensional displacement data which were measured in an active volcanic area. A didactic example is included to illustrate the computational procedure. The main emphasis is placed on the extended Fourier amplitude sensitivity test (E-FAST). This method produces the total sensitivity indices (TSIs), so that all interactions between the unknown input parameters are taken into account. The possible correlations between the output and the input parameters can be evaluated by uncertainty analysis. Uncertainty analysis results indicate the general fit between the physical model and the measurements. Results of the sensitivity analysis show quite different sensitivities for the measured changes as they relate to the unknown parameters of a physical model for an elastic-gravitational source. Assuming a fixed number of executions, thirty different seeds are observed to determine the stability of this method.

8. Pricing perpetual American options under multiscale stochastic elasticity of variance

Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow-scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model, where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk.

9. Variance reduction methods for simulation of densities on Wiener space

Kohatsu, Arturo; Pettersson, Roger

2002-01-01

We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.

10. Age and Gender Differences Associated with Family Communication and Materialism among Young Urban Adult Consumers in Malaysia: A One-Way Analysis of Variance (ANOVA

Eric V. Bindah

2012-11-01

The main purpose of this study is to examine the differences in age and gender among the various types of family communication patterns that take place at home among young adult consumers. It is also an attempt to examine whether there are differences in age and gender in the development of materialistic values in Malaysia. This paper briefly conceptualizes the family communication processes based on existing literature to illustrate the association between family communication patterns and materialism. This study takes place in Malaysia, a country in Southeast Asia embracing a multi-ethnic and multi-cultural society. Preliminary statistical procedures were employed to examine possible significant group differences in family communication and materialism based on age group and gender among Malaysian consumers. A one-way analysis of variance was utilised to determine the significant differences in terms of age and gender with respect to their responses on the various measures. When there were significant differences, post hoc tests (Scheffé) were used to determine the particular groups which differed significantly within a significant overall one-way analysis of variance. The implications, significance and limitations of the study are discussed as a concluding remark.
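The one-way ANOVA F statistic used in such group comparisons is the ratio of the between-group to the within-group mean square. A minimal sketch:

```python
from statistics import mean

def one_way_f(groups):
    """Classical one-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The statistic is referred to an F distribution with (k − 1, N − k) degrees of freedom; a significant overall F is the precondition for the post hoc pairwise comparisons (such as Scheffé's) mentioned in the record.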

11. The value of travel time variance

Fosgerau, Mogens; Engelson, Leonid

2010-01-01

This paper considers the value of travel time variability under scheduling preferences that are de�fined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

12. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

Monte Carlo (MC) power method based on the fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and the fission reaction rate estimates. Because of the biases, the apparent variances of keff and the fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al's covariance estimation method enable one to estimate the approximate real variances of keff and the fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two-bias predicting methods instead of the apparent variances provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of individual pins of the system. (authors)

13. Assessment of heterogeneity of residual variances using changepoint techniques

Toro Miguel A

2000-07-01

Full Text Available Abstract Several studies using test-day models show clear heterogeneity of residual variance along lactation. A changepoint technique to account for this heterogeneity is proposed. The data set included 100 744 test-day records of 10 869 Holstein-Friesian cows from northern Spain. A three-stage hierarchical model using the Wood lactation function was employed. Two unknown changepoints at times T1 and T2, (0 T1 T2 tmax, with continuity of residual variance at these points, were assumed. Also, a nonlinear relationship between residual variance and the number of days of milking t was postulated. The residual variance at a time t( in the lactation phase i was modeled as: for (i = 1, 2, 3, where λι is a phase-specific parameter. A Bayesian analysis using Gibbs sampling and the Metropolis-Hastings algorithm for marginalization was implemented. After a burn-in of 20 000 iterations, 40 000 samples were drawn to estimate posterior features. The posterior modes of T1, T2, λ1, λ2, λ3, , , were 53.2 and 248.2 days; 0.575, -0.406, 0.797 and 0.702, 34.63 and 0.0455 kg2, respectively. The residual variance predicted using these point estimates were 2.64, 6.88, 3.59 and 4.35 kg2 at days of milking 10, 53, 248 and 305, respectively. This technique requires less restrictive assumptions and the model has fewer parameters than other methods proposed to account for the heterogeneity of residual variance during lactation.

14. Further results on variances of local stereological estimators

Pawlas, Zbynek; Jensen, Eva B. Vedel

2006-01-01

In the present paper the statistical properties of local stereological estimators of particle volume are studied. It is shown that the variance of the estimators can be decomposed into the variance due to the local stereological estimation procedure and the variance due to the variability in the...... particle population. It turns out that these two variance components can be estimated separately, from sectional data. We present further results on the variances that can be used to determine the variance by numerical integration for particular choices of particle shapes....

15. Identification of Analytical Factors Affecting Complex Proteomics Profiles Acquired in a Factorial Design Study with Analysis of Variance: Simultaneous Component Analysis.

Mitra, Vikram; Govorukhina, Natalia; Zwanenburg, Gooitzen; Hoefsloot, Huub; Westra, Inge; Smilde, Age; Reijmers, Theo; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer; Horvatovich, Péter

2016-04-19

Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances which in turn influence the outcome of the associated statistical analysis and validation. It is therefore important to determine which factors influence the abundance of peptides in a complex proteomics experiment and to identify those peptides that are most influenced by these factors. In the current study we analyzed depleted human serum samples to evaluate experimental factors that may influence the resulting peptide profile such as the residence time in the autosampler at 4 °C, stopping or not stopping the trypsin digestion with acid, the type of blood collection tube, different hemolysis levels, differences in clotting times, the number of freeze-thaw cycles, and different trypsin/protein ratios. To this end we used a two-level fractional factorial design of resolution IV (2(IV)(7-3)). The design required analysis of 16 samples in which the main effects were not confounded by two-factor interactions. Data preprocessing using the Threshold Avoiding Proteomics Pipeline (Suits, F.; Hoekman, B.; Rosenling, T.; Bischoff, R.; Horvatovich, P. Anal. Chem. 2011, 83, 7786-7794, ref 1) produced a data-matrix containing quantitative information on 2,559 peaks. The intensity of the peaks was log-transformed, and peaks having intensities of a low t-test significance (p-value > 0.05) and a low absolute fold ratio (<2) between the two levels of each factor were removed. The remaining peaks were subjected to analysis of variance (ANOVA)-simultaneous component analysis (ASCA). Permutation tests were used to identify which of the preanalytical factors influenced the abundance of the measured peptides most significantly. The most important preanalytical factors affecting peptide intensity were (1) the hemolysis level

16. Decomposition of variance in terms of conditional means

Alessandro Figà Talamanca

2013-05-01

Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answer to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parent's educational level, field of study, etc..

17. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); School of Advanced International Studies on Nuclear, Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: fisio2@fisiol.uniba.it; Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)

2009-08-15

It is known that R-R time series calculated from a recorded ECG, are strongly correlated to sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis dominates still to day the data analysis efforts of such data ignoring that FFT is valid under some crucial restrictions that results largely violated in R-R time series data as linearity and stationarity. In order to go over such approach, we introduce a new method, called CZF. It is based on variogram analysis. It is aimed from a profound link with Recurrence Quantification Analysis that is a basic tool for investigation of non linear and non stationary time series. Therefore, a relevant feature of the method is that it finally may be applied also in cases of non linear and non stationary time series analysis. In addition, the method enables also to analyze the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to direct experimental cases of normal subjects, patients with hypertension before and after therapy and in children under some different conditions of experimentation.

18. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

It is known that R-R time series calculated from a recorded ECG, are strongly correlated to sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis dominates still to day the data analysis efforts of such data ignoring that FFT is valid under some crucial restrictions that results largely violated in R-R time series data as linearity and stationarity. In order to go over such approach, we introduce a new method, called CZF. It is based on variogram analysis. It is aimed from a profound link with Recurrence Quantification Analysis that is a basic tool for investigation of non linear and non stationary time series. Therefore, a relevant feature of the method is that it finally may be applied also in cases of non linear and non stationary time series analysis. In addition, the method enables also to analyze the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to direct experimental cases of normal subjects, patients with hypertension before and after therapy and in children under some different conditions of experimentation.

19. Technical note: An improved estimate of uncertainty for source contribution from effective variance Chemical Mass Balance (EV-CMB) analysis

Shi, Guo-Liang; Zhou, Xiao-Yu; Feng, Yin-Chang; Tian, Ying-Ze; Liu, Gui-Rong; Zheng, Mei; Zhou, Yang; Zhang, Yuan-Hang

2015-01-01

The CMB (Chemical Mass Balance) 8.2 model released by the USEPA is a commonly used receptor model that can determine estimated source contributions and their uncertainties (called default uncertainty). In this study, we propose an improved CMB uncertainty for the modeled contributions (called EV-LS uncertainty) by adding the difference between the modeled and measured values for ambient species concentrations to the default CMB uncertainty, based on the effective variance least squares (EV-LS) solution. This correction reconciles the uncertainty estimates for EV and OLS regression. To verify the formula for the EV-LS CMB uncertainty, the same ambient datasets were analyzed using the equation we developed for EV-LS CMB uncertainty and a standard statistical package, SPSS 16.0. The same results were obtained by both ways indicate that the equation for EV-LS CMB uncertainty proposed here is acceptable. In addition, four ambient datasets were studies by CMB 8.2 and the source contributions as well as the associated uncertainties were obtained accordingly.

20. Realized range-based estimation of integrated variance

Christensen, Kim; Podolskij, Mark

We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with realized range-based variance - a statistic that replaces every squared return of realized variance with a normalized squared range. If the entire sample path of the process is available...... variance. Our findings suggest that the empirical path of quadratic variation is also estimated better with the realized range-based variance....

1. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

2010-04-01

... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901... Suspension or Termination of Enrollment § 901.40 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the evidence adduced in support of the pleading,...

2. The use of analysis of variance to evaluate the influence of two factors - clay and radionuclide- in the sorption coefficients of Freundlich

A large number of waste radioactive disposal operators use engineered barrier system in near surface and deep repository for the protection of humans and of the environmental from the potential hazards associated with this kind of waste. Clays are often considered as buffer and backfilled materials in the multi barrier concept of both high-level and low/intermediate-level radioactive waste repository. Several studies showed that this material present high sorption and exchange cationic capacity, but is important to evaluate if the sorption coefficients is influenced by kind of clay and/or by kind of radionuclide. Therefore, the objective of this research was to evaluate if this influence exist, considering clay and radionuclide like two factors and the sorption coefficients like response, determined by Freundlich Model, through of application of a statistical analysis known like Analysis of Variance for a two-factor model with one observation per cell. In this design of this experiment were analyzed four different clays (two bentonites, one kaolinite and one vermiculite) and two radionuclides (cesium and strontium). The statistical test for nonadditivity showed that there is no evidence of interaction between this two factors and only the kind of clay have a significant effect in the sorption coefficient. (author)

3. 40 CFR 142.43 - Disposition of a variance request.

2010-07-01

... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Disposition of a variance request. 142... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.43 Disposition of a variance request. (a) If...

4. 40 CFR 142.42 - Consideration of a variance request.

2010-07-01

... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Consideration of a variance request... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.42 Consideration of a variance request. (a)...

5. 31 CFR 8.59 - Proof; variance; amendment of pleadings.

2010-07-01

... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE BUREAU OF ALCOHOL, TOBACCO AND FIREARMS Disciplinary Proceedings § 8.59 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading, the...

6. The value of travel time variance

Fosgerau, Mogens; Engelson, Leonid

2011-01-01

This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that...... does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on...... parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time....

7. 31 CFR 10.67 - Proof; variance; amendment of pleadings.

2010-07-01

... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE INTERNAL REVENUE SERVICE Rules Applicable to Disciplinary Proceedings § 10.67 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in pleadings and the...

8. The Effect of Selection on the Phenotypic Variance

Shnol, E.E.; Kondrashov, A S

1993-01-01

We consider the within-generation changes of phenotypic variance caused by selection w(x) which acts on a quantitative trait x. If before selection the trait has Gaussian distribution, its variance decreases if the second derivative of the logarithm of w(x) is negative for all x, while if it is positive for all x, the variance increases.

9. Semiparametric bounds of mean and variance for exotic options

2009-01-01

Finding semiparametric bounds for option prices is a widely studied pricing technique.We obtain closed-form semiparametric bounds of the mean and variance for the pay-off of two exotic(Collar and Gap) call options given mean and variance information on the underlying asset price.Mathematically,we extended domination technique by quadratic functions to bound mean and variances.

10. Semiparametric bounds of mean and variance for exotic options

LIU GuoQing; LI V.Wenbo

2009-01-01

Finding semiparametric bounds for option prices is a widely studied pricing technique. We obtain closed-form semiparametric bounds of the mean and variance for the pay-off of two exotic (Collar and Gap) call options given mean and variance information on the underlying asset price. Mathematically, we extended domination technique by quadratic functions to bound mean and variances.

11. A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Mo del

Hou Ying-li; Liu Guo-xin; Jiang Chun-lan

2015-01-01

In this paper, we focus on a constant elasticity of variance (CEV) model and want to find its optimal strategies for a mean-variance problem under two con-strained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and variable change method, the optimal strategies with the Lagrange multiplier are obtained. Final, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the eﬃcient strategies and eﬃcient frontier) are derived explicitly.

12. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

Abrahamsen, Trine Julie; Hansen, Lars Kai

2011-01-01

Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions......: First, we propose a computationally less intensive approximate leave-one-out estimator, secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in k......PCA. As for PCA our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen....

13. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

2015-12-01

Land Surface Models (LSM) coupled with River Routing schemes (RRM), are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key component of GCM as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRM, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study.A complementary alternative to in-situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable to calibrate RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces in between 78°S and 78°N and also direct observation of river geomorphological parameters such as width ans slope.Yet, before assimilating such kind of data, it is needed to analyze RRM temporal sensitivity to time-constant parameters. This study presents such analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contribution from each parameter. Doing a time-dependent analysis allows then to identify to which parameters modeled water level and discharge are the most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while

14. Empirical Performance of the Constant Elasticity Variance Option Pricing Model

Ren-Raw Chen; Cheng-Few Lee; Han-Hsing Lee

2009-01-01

In this essay, we empirically test the Constant–Elasticity-of-Variance (CEV) option pricing model by Cox (1975, 1996) and Cox and Ross (1976), and compare the performances of the CEV and alternative option pricing models, mainly the stochastic volatility model, in terms of European option pricing and cost-accuracy based analysis of their numerical procedures.In European-style option pricing, we have tested the empirical pricing performance of the CEV model and compared the results with those ...

15. Analysis of latent variance reduction methods in phase space Monte Carlo calculations for 6, 10 and 18 MV photons by using MCNP code

In this study, azimuthal particle redistribution (APR), and azimuthal particle rotational splitting (APRS) methods are implemented in MCNPX2.4 source code. First of all, the efficiency of these methods was compared to two tallying methods. The APRS is more efficient than the APR method in track length estimator tallies. However in the energy deposition tally, both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. The APRS relative efficiency contours were obtained. These obtained contours reveal that by increasing the photon energies, the contours depth and the surrounding areas were further increased. The relative efficiency contours indicated that the variance reduction factor is position and energy dependent. The out of field voxels relative efficiency contours showed that latent variance reduction methods increased the Monte Carlo (MC) simulation efficiency in the out of field voxels. The APR and APRS average variance reduction factors had differences less than 0.6% for splitting number of 1000. -- Highlights: ► The efficiency of APR and APRS methods was compared to two tallying methods. ► The APRS is more efficient than the APR method in track length estimator tallies. ► In the energy deposition tally, both methods have nearly the same efficiency. ► Variance reduction factors of these methods are position and energy dependent.

16. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

2010-01-01

Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…

17. Are we underestimating the genetic variances of dimorphic traits?

Wolak, ME; Roff, DA; Fairbairn, DJ

2015-01-01

© 2014 The Authors. Populations often contain discrete classes or morphs (e.g., sexual dimorphisms, wing dimorphisms, trophic dimorphisms) characterized by distinct patterns of trait expression. In quantitative genetic analyses, the different morphs can be considered as different environments within which traits are expressed. Genetic variances and covariances can then be estimated independently for each morph or in a combined analysis. In the latter case, morphs can be considered as separate...

18. Variance analysis of the Monte Carlo perturbation source method in inhomogeneous linear particle transport problems. Derivation of formulae

The perturbation source method is used in the Monte Carlo method in calculating small effects in a particle field. It offers primising possibilities for introducing positive correlation between subtracting estimates even in the cases where other methods fail, in the case of geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicity a certain class of transport games and different procedures for generating the so-called perturbation particles are considered

19. A randomization-based perspective of analysis of variance: a test statistic robust to treatment effect heterogeneity

Ding, Peng; Dasgupta, Tirthankar

2016-01-01

Fisher randomization tests for Neyman's null hypothesis of no average treatment effects are considered in a finite population setting associated with completely randomized experiments with more than two treatments. The consequences of using the F statistic to conduct such a test are examined both theoretically and computationally, and it is argued that under treatment effect heterogeneity, use of the F statistic can severely inflate the type I error of the Fisher randomization test. An altern...

20. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

Shieh, Gwowen; Jan, Show-Li

2013-01-01

The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However,…

1. A load factor based mean-variance analysis for fuel diversification

Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

2009-03-15

Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1) 77-91). However the standard mean-variance methodology, does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model in providing useful insights. (author)

2. Monte-Carlo analysis of rarefied-gas diffusion including variance reduction using the theory of Markov random walks

Perlmutter, M.

1973-01-01

Molecular diffusion through a rarefied gas is analyzed by using the theory of Markov random walks. The Markov walk is simulated on the computer by using random numbers to find the new states from the appropriate transition probabilities. As the sample molecule during its random walk passes a scoring position, which is a location at which the macroscopic diffusing flow variables such as molecular flux and molecular density are desired, an appropriate payoff is scored. The payoff is a function of the sample molecule velocity. For example, in obtaining the molecular flux across a scoring position, the random walk payoff is the net number of times the scoring position has been crossed in the positive direction. Similarly, when the molecular density is required, the payoff is the sum of the inverse velocity of the sample molecule passing the scoring position. The macroscopic diffusing flow variables are then found from the expected payoff of the random walks.

3. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar)

Sonesson, Anna K.; Ødegård, Jørgen; Rönnegård, Lars

2013-01-01

BACKGROUND: Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. RESULTS: Analysis of bo...

4. Uncertainty analysis for 3D geological modeling using the Kriging variance

Choi, Yosoon; Choi, Younjung; Park, Sebeom; Um, Jeong-Gi

2014-05-01

The credible estimation of geological properties is critical in many geosciences fields including the geotechnical engineering, environmental engineering, mining engineering and petroleum engineering. Many interpolation techniques have been developed to estimate the geological properties from limited sampling data such as borehole logs. The Kriging is an interpolation technique that gives the best linear unbiased prediction of the intermediate values. It also provides the Kriging variance which quantifies the uncertainty of the kriging estimates. This study provides a new method to analyze the uncertainty in 3D geological modeling using the Kriging variance. The cut-off values determined by the Kriging variance were used to effectively visualize the 3D geological models with different confidence levels. This presentation describes the method for uncertainty analysis and a case study which evaluates the amount of recoverable resources by considering the uncertainty.

5. Application of an iterative methodology for cross-section and variance/covariance data adjustment to the analysis of fast spectrum systems accounting for non-linearity

until convergence is reached for the analytical values and their uncertainties. An important result of the study is that the asymptotic analytical values of the integral parameters are closer to the experimental values than the standard first-adjustment results. Moreover, the asymptotic analytical values appear largely independent of the specific a priori variance/covariance data used in the analysis, namely COMMARA-2.0 or BOLNA, despite the different a priori analytical values obtained with JEFF-3.1 and ENDF/B-VI.8 data, respectively. The asymptotic uncertainties obtained on the basis of the two libraries are also similar.

6. Multivariate Analysis of Variance: Finding significant growth in mice with craniofacial dysmorphology caused by the Crouzon mutation

Thorup, Signe Strann; Ólafsdóttir, Hildur; Darvann, Tron Andre;

2010-01-01

Crouzon syndrome is characterized by growth disturbances caused by premature fusion of the cranial growth zones. A mouse model with mutation Fgfr2C342Y, equivalent to the most common Crouzon syndrome mutation (henceforth called the Crouzon mouse model), has a phenotype showing many parallels to t...

7. Variance component score test for time-course gene set analysis of longitudinal RNA-seq data

Agniel, Denis; Hejblum, Boris P.

2016-01-01

As gene expression measurement technology is shifting from microarrays to sequencing, the statistical tools available for analyzing corresponding data require substantial modifications since RNA-seq data are measured as counts. Recently, it has been proposed to tackle the count nature of these data by modeling log-count reads per million as continuous variables, using nonparametric regression to account for their inherent heteroscedasticity. Adopting such a framework, we propose tcgsaseq, a p...

8. A study of heterogeneity of environmental variance for slaughter weight in pigs

Ibánez-Escriche, N; Varona, L; Sorensen, D; Noguera, J L

2008-01-01

This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both the additive genetic effects on the mean and those on environmental variance have an important influence upon the future economic performance of selected individuals...

9. Estimation of the Conditional Variance in Paired Experiments

Abadie, Alberto; Guido W. IMBENS

2008-01-01

In paired randomized experiments, units are grouped in pairs, often based on covariate information, with random assignment within the pairs. Average treatment effects are then estimated by averaging the within-pair differences in outcomes. Typically, the variance of the average-treatment-effect estimator is estimated using the sample variance of the within-pair differences. However, conditional on the covariates, the variance of the average-treatment-effect estimator may be substantially smaller...
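The conventional estimator described above (sample variance of the within-pair differences, divided by the number of pairs) can be sketched as follows; the outcome data here are hypothetical:

```python
import numpy as np

def paired_ate(y_treat, y_control):
    """Average treatment effect from a paired experiment and the conventional
    variance estimate based on within-pair outcome differences."""
    d = y_treat - y_control        # within-pair differences
    n = d.size                     # number of pairs
    ate = d.mean()                 # average treatment effect
    var_hat = d.var(ddof=1) / n    # sample variance of differences / n
    return ate, var_hat

# hypothetical outcomes: one treated and one control unit per pair
y_t = np.array([5.0, 7.0, 6.5, 8.0])
y_c = np.array([4.0, 6.5, 5.0, 7.5])
ate, var_hat = paired_ate(y_t, y_c)
```

The abstract's point is that, conditional on covariates, the true variance can be smaller than this conventional estimate; the sketch shows only the conventional estimator.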

10. Analysis of speech-related variance in rapid event-related fMRI using a time-aware acquisition system.

Mehta, S; Grabowski, T J; Razavi, M; Eaton, B; Bolinger, L

2006-02-15

Speech production introduces signal changes in fMRI data that can mimic or mask the task-induced BOLD response. Rapid event-related designs with variable ISIs address these concerns by minimizing the correlation of task and speech-related signal changes without sacrificing efficiency; however, the increase in residual variance due to speech still decreases statistical power and must be explicitly addressed primarily through post-processing techniques. We investigated the timing, magnitude, and location of speech-related variance in an overt picture naming fMRI study with a rapid event-related design, using a data acquisition system that time-stamped image acquisitions, speech, and a pneumatic belt signal on the same clock. Using a spectral subtraction algorithm to remove scanner gradient noise from recorded speech, we related the timing of speech, stimulus presentation, chest wall movement, and image acquisition. We explored the relationship of an extended speech event time course and respiration on signal variance by performing a series of voxelwise regression analyses. Our results demonstrate that these effects are spatially heterogeneous, but their anatomic locations converge across subjects. Affected locations included basal areas (orbitofrontal, mesial temporal, brainstem), areas adjacent to CSF spaces, and lateral frontal areas. If left unmodeled, speech-related variance can result in regional detection bias that affects some areas critically implicated in language function. The results establish the feasibility of detecting and mitigating speech-related variance in rapid event-related fMRI experiments with single word utterances. They further demonstrate the utility of precise timing information about speech and respiration for this purpose. PMID:16412665
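As a rough illustration of the voxelwise-regression idea (not the study's actual model), a nuisance regressor such as a speech-event or respiration time course can be projected out of a voxel time series by ordinary least squares:

```python
import numpy as np

def regress_out(y, nuisance):
    """Remove variance explained by a nuisance regressor via OLS,
    returning the residual time series (intercept included)."""
    X = np.column_stack([np.ones_like(nuisance), nuisance])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# a voxel time series perfectly explained by the nuisance regressor
speech = np.arange(5.0)                 # hypothetical speech-related regressor
voxel = 2.0 + 3.0 * speech
residual = regress_out(voxel, speech)   # near zero everywhere
```

In practice one such regression is run per voxel, and the task model is fit alongside the nuisance terms rather than after them; this sketch only shows the projection step.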