WorldWideScience

Sample records for analysis of variance

  1. Naive Analysis of Variance

    Science.gov (United States)

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  2. Nominal analysis of "variance".

    Science.gov (United States)

    Weiss, David J

    2009-08-01

    Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
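
    The "analogue to variance" for nominal responses can be illustrated with a small sketch. This shows only the pairwise match/mismatch idea, not the full NANOVA table or its resampling-based N ratios, and the response labels are invented:

```python
from itertools import combinations

def mismatch_proportion(responses):
    # Fraction of response pairs that do not match: the nominal
    # analogue of variance described in the abstract (illustrative sketch).
    pairs = list(combinations(responses, 2))
    return sum(a != b for a, b in pairs) / len(pairs)

homogeneous = ["yes", "yes", "yes", "no"]
heterogeneous = ["yes", "no", "maybe", "no"]
print(mismatch_proportion(homogeneous))    # 0.5: responses mostly agree
print(mismatch_proportion(heterogeneous))  # ~0.83: responses mostly vary
```

    In NANOVA these mismatch proportions are attributed to design sources, and significance is assessed by resampling rather than by an F distribution.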

  3. Fixed effects analysis of variance

    CERN Document Server

    Fisher, Lloyd; Birnbaum, Z W; Lukacs, E

    1978-01-01

    Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis…

  4. Analysis of Variance: Variably Complex

    Science.gov (United States)

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
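
    A minimal sketch of the comparison described above, using SciPy's one-way ANOVA; the three groups and their measurements are invented for illustration:

```python
# One-way ANOVA: do at least two of the group means differ?
from scipy import stats

group_a = [23.1, 24.5, 22.8, 25.0, 23.9]
group_b = [26.2, 27.1, 25.8, 26.9, 27.4]
group_c = [23.5, 24.0, 22.9, 24.8, 23.3]

# H0: all three group means are equal.  A small p-value says at least
# two of the means differ; follow-up tests are needed to say which.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```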

  5. Warped functional analysis of variance.

    Science.gov (United States)

    Gervini, Daniel; Carter, Patrick A

    2014-09-01

    This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.

  6. Generalized analysis of molecular variance.

    Directory of Open Access Journals (Sweden)

    Caroline M Nievergelt

    2007-04-01

    Full Text Available Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by…

  7. Analysis of variance for model output

    NARCIS (Netherlands)

    Jansen, M.J.W.

    1999-01-01

    A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of variance…

  8. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.

  9. Formative Use of Intuitive Analysis of Variance

    Science.gov (United States)

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

  10. Directional variance analysis of annual rings

    Science.gov (United States)

    Kumpulainen, P.; Marjanen, K.

    2010-07-01

    Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products with higher market value than is possible today. One of the key factors in increasing market value is to provide better measurements, yielding more information to support decisions made later in the product chain. Strength and stiffness are important properties of wood; they are related to the mean annual ring width and its deviation. These indicators can be estimated from images taken of the log ends by two-dimensional power spectrum analysis. Spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log-end variance analysis based on the Radon transform is proposed. The directions and positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log-end analysis; it is usable in other two-dimensional random signal and texture analysis tasks.

  11. Analysis of variance of microarray data.

    Science.gov (United States)

    Ayroles, Julien F; Gibson, Greg

    2006-01-01

    Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed modeling and the sequence of steps involved in fitting gene-specific models, and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available. PMID:16939792

  12. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

    Science.gov (United States)

    Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...

  13. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, and Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
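
    The three-step procedure described above (critical F value, noncentrality parameter, noncentral F tail probability) can be sketched with SciPy. The degrees of freedom and noncentrality value are illustrative assumptions, not taken from the paper; in a real MANOVA they would come from the design and the chosen test statistic:

```python
# Power via the F approximation: illustrative values only.
from scipy import stats

alpha = 0.05
dfn, dfd = 4, 40   # numerator / denominator degrees of freedom (assumed)
nc = 12.0          # noncentrality parameter, a function of effect size (assumed)

f_crit = stats.f.ppf(1 - alpha, dfn, dfd)    # step 1: critical F value
power = stats.ncf.sf(f_crit, dfn, dfd, nc)   # step 3: P(noncentral F > f_crit)
print(f"critical F = {f_crit:.3f}, power = {power:.3f}")
```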

  14. Functional analysis of variance for association studies.

    Science.gov (United States)

    Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing

    2014-01-01

    While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity. PMID:25244256

  15. Analysis of variance of designed chromatographic data sets: The analysis of variance-target projection approach.

    Science.gov (United States)

    Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata

    2015-07-31

    Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.

  16. Variance analysis. Part II, The use of computers.

    Science.gov (United States)

    Finkler, S A

    1991-09-01

    This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788

  17. An Analysis of Variance Framework for Matrix Sampling.

    Science.gov (United States)

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  18. Wavelet Variance Analysis of EEG Based on Window Function

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yuan-zhuang; YOU Rong-yi

    2014-01-01

    A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energies of epileptic EEGs are more discrete than those of normal EEGs, and that the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.

  19. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  20. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design…

  1. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    Science.gov (United States)

    Wang, Tao

    2016-01-01

    An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.

  2. Budget variance analysis using RVUs.

    Science.gov (United States)

    Berlin, M F; Budzynski, M R

    1998-01-01

    This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand the actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247

  3. Intuitive Analysis of Variance-- A Formative Assessment Approach

    Science.gov (United States)

    Trumpower, David

    2013-01-01

    This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)

  4. Analysis of variance in spectroscopic imaging data from human tissues.

    Science.gov (United States)

    Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit

    2012-01-17

    The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise, or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology that need improvement. The study provides a framework for the development of protocols for clinical translation and provides guidelines for designing statistically valid studies in the spectroscopic analysis of tissue.
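
    As a hedged illustration of "estimating the portions of explained variation", the proportion of variance attributable to a grouping factor (eta-squared) can be computed from one-way ANOVA sums of squares. The groups below are invented stand-ins for a spectral metric measured across tissue classes:

```python
import numpy as np

# Three illustrative groups (e.g., one spectral metric per tissue class).
groups = [np.array([1.0, 1.2, 0.9, 1.1]),
          np.array([1.8, 2.0, 1.9, 2.1]),
          np.array([1.1, 1.0, 1.3, 1.2])]

all_values = np.concatenate(groups)
grand_mean = all_values.mean()

# Total variation, and the part attributable to group membership.
ss_total = ((all_values - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

eta_squared = ss_between / ss_total   # proportion of explained variation
print(f"eta^2 = {eta_squared:.3f}")
```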

  5. Analysis of Variance in the Modern Design of Experiments

    Science.gov (United States)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.

  6. A guide to SPSS for analysis of variance

    CERN Document Server

    Levine, Gustav

    2013-01-01

    This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules concerning…

  7. Analysis of variance tables based on experimental structure.

    Science.gov (United States)

    Brien, C J

    1983-03-01

    A stepwise procedure for obtaining the experimental structure for a particular experiment is presented together with rules for deriving the analysis-of-variance table from that structure. The procedure involves the division of the factors into groups and is essentially a generalization of the method of Nelder (1965, Proceedings of the Royal Society, Series A 283, 147-162; 1965, Proceedings of the Royal Society, Series A 283, 163-178), to what are termed 'multi-tiered' experiments. The proposed method is illustrated for a wine-tasting experiment. PMID:6871362

  8. Correct use of repeated measures analysis of variance.

    Science.gov (United States)

    Park, Eunsik; Cho, Meehye; Ki, Chang-Seok

    2009-02-01

    In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
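
    The variance partitioning behind repeated measures ANOVA can be sketched by hand. This is a generic textbook computation with invented data (rows as subjects, columns as repeated conditions), not the SPSS workflow used in the paper:

```python
import numpy as np

# Rows = subjects, columns = repeated conditions (invented data).
data = np.array([[5.0, 6.1, 7.2],
                 [4.8, 5.9, 7.0],
                 [5.5, 6.4, 7.8],
                 [5.1, 6.0, 7.1]])
n, k = data.shape

grand = data.mean()
ss_total = ((data - grand) ** 2).sum()
# Between-subjects variation is removed from the error term,
# which is what gives the repeated measures design its power.
ss_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum()
ss_conditions = n * ((data.mean(axis=0) - grand) ** 2).sum()
ss_error = ss_total - ss_subjects - ss_conditions

ms_conditions = ss_conditions / (k - 1)
ms_error = ss_error / ((n - 1) * (k - 1))
f_stat = ms_conditions / ms_error
print(f"F({k - 1}, {(n - 1) * (k - 1)}) = {f_stat:.2f}")
```

    Note that this sketch assumes sphericity; in practice a correction (e.g., Greenhouse-Geisser) or a mixed-model approach is often needed.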

  9. Analysis of variance of an underdetermined geodetic displacement problem

    Energy Technology Data Exchange (ETDEWEB)

    Darby, D.

    1982-06-01

    It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.

  10. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  11. Batch variation between branchial cell cultures: An analysis of variance

    DEFF Research Database (Denmark)

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

    We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed...... and introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when...... the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results, when we do not know a priori that something went wrong. The ANOVA is a very useful...

  12. Variance analysis. Part I, Extending flexible budget variance analysis to acuity.

    Science.gov (United States)

    Finkler, S A

    1991-01-01

    The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
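
    A hedged sketch of the price, quantity, and volume variances the author describes. The variable names, figures, and sign conventions are illustrative assumptions (textbooks differ in how the components are defined), but the three components shown here sum exactly to the total actual-minus-budget difference:

```python
# Flexible-budget variance decomposition (illustrative figures).
budget_volume = 100          # budgeted patient days
actual_volume = 110          # actual patient days
budget_hours_per_day = 4.0   # budgeted nursing hours per patient day
actual_hours_per_day = 4.3   # actual nursing hours per patient day
budget_rate = 30.0           # budgeted $ per nursing hour
actual_rate = 32.0           # actual $ per nursing hour

# Volume variance: cost change due purely to serving more patient days.
volume_var = (actual_volume - budget_volume) * budget_hours_per_day * budget_rate
# Quantity variance: more hours used per patient day than budgeted.
quantity_var = (actual_hours_per_day - budget_hours_per_day) * actual_volume * budget_rate
# Price variance: a higher hourly rate on the hours actually used.
price_var = (actual_rate - budget_rate) * actual_hours_per_day * actual_volume

# The decomposition telescopes to the total difference.
actual_cost = actual_rate * actual_hours_per_day * actual_volume
budget_cost = budget_rate * budget_hours_per_day * budget_volume
total_var = volume_var + quantity_var + price_var
assert abs(total_var - (actual_cost - budget_cost)) < 1e-6
print(volume_var, quantity_var, price_var)
```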

  13. Analysis of variance (ANOVA) models in lower extremity wounds.

    Science.gov (United States)

    Reed, James F

    2003-06-01

    Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control and the 2 treatments against each other using 3 Student t tests. If we were to compare 4 treatment groups, then we would need to use 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so will the likelihood of finding a difference between some pair of groups simply by chance when no real difference exists, which is by definition a Type I error. If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14. As the number of t tests increases, the experiment-wise error rate increases rather rapidly. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
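
    The experiment-wise error rate arithmetic in the abstract can be reproduced directly, assuming independent tests:

```python
# P(at least one false positive) = 1 - (1 - alpha)**k for k independent tests.
alpha = 0.05

for k in [3, 6, 10]:
    ewer = 1 - (1 - alpha) ** k
    print(f"{k} tests: experiment-wise error rate = {ewer:.2f}")
# For k = 3 this gives 1 - 0.95**3 = 0.14, the figure quoted above.
```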

  14. Analysis of variance in neuroreceptor ligand imaging studies.

    Science.gov (United States)

    Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P

    2011-01-01

    Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far for cases in which there are more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also revisit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is more sensitive than the conventional f-test while still controlling for type I error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in exploratory PET studies.

  15. Spectral variance of aeroacoustic data

    Science.gov (United States)

    Rao, K. V.; Preisser, J. S.

    1981-01-01

    An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.

  16. Structure analysis of interstellar clouds: II. Applying the Delta-variance method to interstellar turbulence

    CERN Document Server

    Ossenkopf, V; Stutzki, J

    2008-01-01

    The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. In paper I we proposed essential improvements to the Delta-variance analysis. In this paper we apply the improved Delta-variance analysis to i) a hydrodynamic turbulence simulation with prominent density and velocity structures, ii) an observed intensity map of rho Oph with irregular boundaries and variable uncertainties of the different data points, and iii) a map of the turbulent velocity structure in the Polaris Flare affected by the intensity dependence on the centroid velocity determination. The tests confirm the extended capabilities of the improved Delta-variance analysis. Prominent spatial scales were accurately identified and artifacts from a variable reliability of the data were removed. The analysis of the hydrodynamic simulations showed that the injection of a turbulent velocity structure creates the most prominent density structures are produced on a sca...

  17. Variance heterogeneity analysis for detection of potentially interacting genetic loci: method and its limitations

    Directory of Open Access Journals (Sweden)

    van Duijn Cornelia

    2010-10-01

    Full Text Available Abstract Background In the presence of an interaction between a genotype and a certain factor in the determination of a trait's value, it is expected that the trait's variance is increased in the group of subjects having this genotype. Thus, a test of heterogeneity of variances can be used to screen for potentially interacting single-nucleotide polymorphisms (SNPs). In this work, we evaluated the statistical properties of variance heterogeneity analysis with respect to the detection of potentially interacting SNPs in the case when the interaction variable is unknown. Results Through simulations, we investigated the type I error for Bartlett's test, Bartlett's test with prior rank transformation of a trait to normality, and Levene's test for different genetic models. Additionally, we derived an analytical expression for power estimation. We showed that Bartlett's test has acceptable type I error in the case of a trait following a normal distribution, whereas Levene's test kept the nominal type I error under all scenarios investigated. For the power of the variance homogeneity test, we showed (as opposed to the power of the direct test, which uses information about the known interacting factor) that, given the same interaction effect, the power can vary widely depending on the non-estimable direct effect of the unobserved interacting variable. Thus, for a given interaction effect, only very wide limits on the power of the variance homogeneity test can be estimated. We also applied Levene's approach to test genome-wide homogeneity of variances of C-reactive protein in the Rotterdam Study population (n = 5959). In this analysis, we replicate previous results of Pare and colleagues (2010) for the SNP rs12753193 (n = 21,799). Conclusions Screening for differences in variances among genotypes of a SNP is a promising approach, as a number of biologically interesting models may lead to heterogeneity of variances. However, it should be kept in mind that the absence of variance heterogeneity for
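Both screening tests discussed in this record are available in SciPy. The sketch below (genotype groups and effect sizes are invented for illustration) reproduces the qualitative setup of a variance-heterogeneity screen, with one genotype group given an inflated variance to mimic a hidden interaction:

```python
import numpy as np
from scipy import stats

# Hypothetical trait values grouped by SNP genotype (AA, Aa, aa).
rng = np.random.default_rng(42)
aa_major = rng.normal(0.0, 1.0, 200)
het      = rng.normal(0.0, 1.0, 200)
aa_minor = rng.normal(0.0, 2.0, 200)   # doubled SD: hidden interaction

# Bartlett's test: valid when the trait is normally distributed.
b_stat, b_p = stats.bartlett(aa_major, het, aa_minor)
# Levene's test: robust alternative that keeps the nominal type I error.
l_stat, l_p = stats.levene(aa_major, het, aa_minor)
print(f"Bartlett p = {b_p:.3g}, Levene p = {l_p:.3g}")
```

With a genuine variance difference of this size, both tests reject homogeneity decisively.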

  18. University student understanding of cancer: analysis of ethnic group variances.

    Science.gov (United States)

    Estaville, Lawrence; Trad, Megan; Martinez, Gloria

    2012-06-01

    Traditional university and college students ages 18-24 are traversing an important period in their lives in which behavioral intervention is critical in reducing their risk of cancer in later years. The study's purpose was to determine the perceptions and level of knowledge about cancer of white, Hispanic, and black university students (n=958). Sources of student information about cancer were also identified. The survey results showed that all students know very little about cancer and that their perceptions of it are negative, with many students thinking that cancer and death are synonymous. We also discovered that university students do not discuss cancer often in their classrooms or with their family or friends. Moreover, university students are unlikely to perform monthly or even yearly self-examinations for breast or testicular cancers; black students have the lowest rate of self-examinations. PMID:22477236

  19. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    Science.gov (United States)

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
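A simple moment plug-in for the variance of the sample variance illustrates the quantity the authors estimate. Note this uses the general (non-normal) formula rather than the unbiased h-statistic estimators they derive; the function and data below are ours, not the paper's.

```python
import numpy as np

def var_of_sample_variance(x):
    """Plug-in estimate of Var(s^2) for an i.i.d. sample, using the
    general formula Var(s^2) = (mu4 - (n-3)/(n-1) * sigma^4) / n,
    which does not assume normality. This is a biased plug-in, not
    the unbiased h-statistic estimator discussed in the abstract."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m = x.mean()
    mu4 = np.mean((x - m) ** 4)    # fourth central moment (plug-in)
    s2 = x.var(ddof=1)             # unbiased sample variance
    return (mu4 - (n - 3) / (n - 1) * s2 ** 2) / n

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, 500)
print(var_of_sample_variance(sample))
```

For a normal sample with unit variance the true value is 2/(n-1), about 0.004 here, so the printed estimate should be of that order.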

  20. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed...... in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis...... in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected...

  1. Gender variance on campus : a critical analysis of transgender voices

    OpenAIRE

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn, 2005; Beemyn, Curtis, Davis, & Tubbs, 2005). This study examined the perceptions of transgender inclusion, ways in which leadership structures or entiti...

  2. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…

  3. Gender Variance on Campus: A Critical Analysis of Transgender Voices

    Science.gov (United States)

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn,…

  4. Variance Analysis of Unevenly Spaced Time Series Data

    Science.gov (United States)

    Hackman, Christine; Parker, Thomas E.

    1996-01-01

    We have investigated the effect of uneven data spacing on the computation of delta (sub chi)(gamma). Evenly spaced simulated data sets were generated for noise processes ranging from white phase modulation (PM) to random walk frequency modulation (FM). Delta(sub chi)(gamma) was then calculated for each noise type. Data were subsequently removed from each simulated data set using typical two-way satellite time and frequency transfer (TWSTFT) data patterns to create two unevenly spaced sets with average intervals of 2.8 and 3.6 days. Delta(sub chi)(gamma) was then calculated for each sparse data set using two different approaches. First the missing data points were replaced by linear interpolation and delta (sub chi)(gamma) calculated from this now full data set. The second approach ignored the fact that the data were unevenly spaced and calculated delta(sub chi)(gamma) as if the data were equally spaced with average spacing of 2.8 or 3.6 days. Both approaches have advantages and disadvantages, and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets.
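The gap-filling approach compared in this record can be sketched for a generic Allan-variance computation. This is a hedged illustration: the statistic, sampling pattern, and deletion scheme below are simplified stand-ins, not the authors' TWSTFT processing.

```python
import numpy as np

def allan_variance(phase, tau0, m):
    """Overlapping Allan variance from phase data (seconds) sampled
    every tau0 seconds, at averaging factor m."""
    x = np.asarray(phase, dtype=float)
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # second differences
    return np.mean(d ** 2) / (2.0 * (m * tau0) ** 2)

# Simulated white-PM phase data on an even grid (hypothetical values).
rng = np.random.default_rng(7)
t = np.arange(1000.0)
x_full = rng.normal(0.0, 1e-9, t.size)

# Mimic uneven sampling by deleting a third of the points, then
# restore an even grid by linear interpolation (the first approach
# described in the abstract).
keep = np.sort(rng.choice(t.size, size=2 * t.size // 3, replace=False))
x_interp = np.interp(t, t[keep], x_full[keep])

avar_even = allan_variance(x_full, 1.0, 10)
avar_gap = allan_variance(x_interp, 1.0, 10)
print(avar_even, avar_gap)
```

Comparing the two printed values shows how interpolation biases the result for a given noise type, which is the kind of error the paper's correction techniques address.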

  5. Analysis of variance and functional measurement a practical guide

    CERN Document Server

    Weiss, David J

    2006-01-01

    Chapter I. Introduction; Chapter II. One-way ANOVA; Chapter III. Using the Computer; Chapter IV. Factorial Structure; Chapter V. Two-way ANOVA; Chapter VI. Multi-factor Designs; Chapter VII. Error Purifying Designs; Chapter VIII. Specific Comparisons; Chapter IX. Measurement Issues; Chapter X. Strength of Effect**; Chapter XI. Nested Designs**; Chapter XII. Missing Data**; Chapter XIII. Confounded Designs**; Chapter XIV. Introduction to Functional Measurement**; Terms from Introductory Statistics; References; Subject Index; Name Index

  6. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

    Science.gov (United States)

    Finch, W. Holmes

    2016-01-01

    Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

  7. A Demonstration of the Analysis of Variance Using Physical Movement and Space

    Science.gov (United States)

    Owen, William J.; Siakaluk, Paul D.

    2011-01-01

    Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…

  8. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

    Science.gov (United States)

    Schwartz, Todd A.

    2013-01-01

    I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

  9. Application of the analysis of variance for the determination of reinforcement structure homogeneity in MMC

    OpenAIRE

    K. Gawdzińska; S. Berczyński; M. Pelczar; J. Grabian

    2010-01-01

    These authors propose a new definition of homogeneity verified by variance analysis. The analysis aimed at quantitative variables describing the homogeneity of reinforcement structure, i.e. surface areas, reinforcement phase diameter and percentage of reinforcement area contained in a circle within a given region. The examined composite material consisting of silicon carbide reinforcement particles in AlSi11 alloy matrix was made by mechanical mixing.

  10. A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance

    Science.gov (United States)

    Liu, Xiaofeng Steven

    2010-01-01

    The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidian space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus "F"…

  11. Structure analysis of interstellar clouds - I. Improving the Delta-variance method

    NARCIS (Netherlands)

    Ossenkopf, V.; Krips, M.; Stutzki, J.

    2008-01-01

    Context. The Delta-variance analysis, introduced as a wavelet-based measure for the statistical scaling of structures in astronomical maps, has proven to be an efficient and accurate method of characterising the power spectrum of interstellar turbulence. It has been applied to observed molecular clo

  12. Structure analysis of interstellar clouds - II. Applying the Delta-variance method to interstellar turbulence

    NARCIS (Netherlands)

    Ossenkopf, V.; Krips, M.; Stutzki, J.

    2008-01-01

    Context. The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. It has been applied both to simulations of interstellar turbulence and to observed molecular cloud maps. In Paper I we proposed essential improvem

  13. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    Science.gov (United States)

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.

  14. Toward an objective evaluation of teacher performance: The use of variance partitioning analysis, VPA.

    Directory of Open Access Journals (Sweden)

    Eduardo R. Alicias

    2005-05-01

    Full Text Available Evaluation of teacher performance is usually done with the use of ratings made by students, peers, and principals or supervisors, and at times, self-ratings made by the teachers themselves. The trouble with this practice is that it is obviously subjective, and vulnerable to what Glass and Martinez call the "politics of teacher evaluation," as well as to professional incapacities of the raters. The value-added analysis (VAA) model is one attempt to make evaluation objective and evidence-based. However, the VAA model, especially that of the Tennessee Value Added Assessment System (TVAAS) developed by William Sanders, appears flawed essentially because it posits the untenable assumption that the gain score of students (value added) is attributable solely to the teacher(s), ignoring other significant explanators of student achievement like IQ and socio-economic status. Further, the use of the gain score (value added) as a dependent variable appears hobbled with the validity threat called "statistical regression," as well as the problem of isolating the conflated effects of two or more teachers. The proposed variance partitioning analysis (VPA) model seeks to partition the total variance of the dependent variable (post-test student achievement) into various portions representing: first, the effects attributable to the set of teacher factors; second, the effects attributable to the set of control variables, the most important of which are the IQ of the student, his pretest score on that particular dependent variable, and some measures of his socio-economic status; and third, the unexplained effects/variance. It is not difficult to see that when the second and third quanta of variance are partitioned out of the total variance of the dependent variable, what remains is that attributable to the teacher. Two measures of teacher effect are hereby proposed: the proportional teacher effect and the direct teacher effect.

  15. Publishing nutrition research: a review of multivariate techniques--part 2: analysis of variance.

    Science.gov (United States)

    Harris, Jeffrey E; Sheean, Patricia M; Gleason, Philip M; Bruemmer, Barbara; Boushey, Carol

    2012-01-01

    This article is the eighth in a series exploring the importance of research design, statistical analysis, and epidemiology in nutrition and dietetics research, and the second in a series focused on multivariate statistical analytical techniques. The purpose of this review is to examine the statistical technique, analysis of variance (ANOVA), from its simplest to multivariate applications. Many dietetics practitioners are familiar with basic ANOVA, but less informed of the multivariate applications such as multiway ANOVA, repeated-measures ANOVA, analysis of covariance, multiple ANOVA, and multiple analysis of covariance. The article addresses all these applications and includes hypothetical and real examples from the field of dietetics.

  16. Analysis of variance of quantitative parameters bidders offers for public procurement in the chosen sector

    OpenAIRE

    Gavorníková, Katarína

    2012-01-01

    The goal of this work was to find out which determinants influence the variance of the price bids offered by bidders for public procurement, and in what direction, as well as the bidders' behavior during the selection process. This work focused on public procurement for construction works declared by a municipal procurement authority. Regression analysis confirmed the variables estimated price and the ratio of final to estimated price of the public procurement as the strongest influences. Increasing estimated price raises ...

  17. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data

    DEFF Research Database (Denmark)

    Greve, Douglas N; Svarer, Claus; Fisher, Patrick M;

    2014-01-01

    in a ROI making noise management critical to successful exploratory analysis. This work explores how preprocessing choices affect the bias and variability of voxelwise kinetic modeling analysis of brain positron emission tomography (PET) data. These choices include the use of volume- or cortical surface...... radioligand ([(11)C]SB2307145) were collected on sixteen healthy subjects using a Siemens HRRT PET scanner. Kinetic modeling was used to compute maps of non-displaceable binding potential (BPND) after preprocessing. The results showed a complicated interaction between smoothing, PVC, and masking on BPND...... intersubject variance than when volume smoothing was used. This translates into more than 4 times fewer subjects needed in a group analysis to achieve similarly powered statistical tests. Surface-based smoothing has less bias and variance because it respects cortical geometry by smoothing the PET data only...

  18. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.

  19. Toward a more robust variance-based global sensitivity analysis of model outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
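The first-order (main-effect) sensitivity indices discussed above can be estimated with a plain Monte Carlo pick-freeze scheme. The sketch below uses a Saltelli-style estimator on a toy additive model with a known answer; it is our example, not the report's adaptive procedure.

```python
import numpy as np

def sobol_first_order(model, dim, n, rng):
    """Pick-freeze Monte Carlo estimate of Sobol' first-order indices:
    S_i = E[y_B * (y_ABi - y_A)] / Var(y), where ABi is sample A with
    column i replaced by the corresponding column of sample B."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    yA, yB = model(A), model(B)
    V = np.concatenate([yA, yB]).var()
    S = np.empty(dim)
    for i in range(dim):
        AB = A.copy()
        AB[:, i] = B[:, i]          # resample only input i
        S[i] = np.mean(yB * (model(AB) - yA)) / V
    return S

# Toy additive model with a known answer: Y = X1 + 2*X2, Xi ~ U(0,1),
# so the true indices are S1 = 1/5 and S2 = 4/5.
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = sobol_first_order(model, 2, 100_000, np.random.default_rng(3))
print(S)   # approximately [0.2, 0.8]
```

Because the model is additive, the two first-order indices sum to one; for interacting models the shortfall from one is what the second-order indices in the paper capture.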

  20. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    Science.gov (United States)

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  1. Comparison of performance between rescaled range analysis and rescaled variance analysis in detecting abrupt dynamic change

    Institute of Scientific and Technical Information of China (English)

    何文平; 刘群群; 姜允迪; 卢莹

    2015-01-01

    In the present paper, a comparison of the performance between moving cutting data-rescaled range analysis (MC-R/S) and moving cutting data-rescaled variance analysis (MC-V/S) is made. The results clearly indicate that the operating efficiency of the MC-R/S algorithm is higher than that of the MC-V/S algorithm. In our numerical test, the computer time consumed by MC-V/S is approximately 25 times that by MC-R/S for an identical window size in artificial data. Except for the difference in operating efficiency, there are no significant differences in performance between MC-R/S and MC-V/S for the abrupt dynamic change detection. MC-R/S and MC-V/S both display some degree of anti-noise ability. However, it is important to consider the influences of strong noise on the detection results of MC-R/S and MC-V/S in practical application processes.
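For readers unfamiliar with the two statistics being compared, single-window versions of R/S and V/S can be written in a few lines. This is a sketch of the classical definitions only, not the authors' moving-cutting (MC) algorithm.

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic for one window of a time series."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())        # cumulative deviations
    return (z.max() - z.min()) / x.std(ddof=0)

def rescaled_variance(x):
    """V/S statistic: variance of the cumulative deviations,
    rescaled by N times the series variance."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    return z.var(ddof=0) / (x.size * x.var(ddof=0))

rng = np.random.default_rng(9)
white = rng.normal(0.0, 1.0, 4096)
rs_stat = rescaled_range(white)
vs_stat = rescaled_variance(white)
print(rs_stat, vs_stat)
```

The V/S statistic replaces the range of the cumulative deviations with their variance, which is the main source of the computational cost difference the paper measures.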

  2. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis

    Science.gov (United States)

    Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of ...

  3. Applying the Generalized Waring model for investigating sources of variance in motor vehicle crash analysis.

    Science.gov (United States)

    Peng, Yichuan; Lord, Dominique; Zou, Yajie

    2014-12-01

    As one of the major analysis methods, statistical models play an important role in traffic safety analysis. They can be used for a wide variety of purposes, including establishing relationships between variables and understanding the characteristics of a system. The purpose of this paper is to document a new type of model that can help with the latter. This model is based on the Generalized Waring (GW) distribution. The GW model yields more information about the sources of the variance observed in datasets than other traditional models, such as the negative binomial (NB) model. In this regard, the GW model can separate the observed variability into three parts: (1) the randomness, which explains the model's uncertainty; (2) the proneness, which refers to the internal differences between entities or observations; and (3) the liability, which is defined as the variance caused by other external factors that are difficult to identify and have not been included as explanatory variables in the model. The study analyses were accomplished using two observed datasets to explore potential sources of variation. The results show that the GW model can provide meaningful information about sources of variance in crash data and also performs better than the NB model. PMID:25173723

  4. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis.

    Science.gov (United States)

    Luthria, Devanand L; Lin, Long-Ze; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-11-12

    Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of two cultivars of broccoli, Majestic and Legacy, the first grown with four different levels of Se and the second grown organically and conventionally with two rates of irrigation. Chemical composition differences in the two cultivars and seven treatments produced patterns that were visually and statistically distinguishable using ANOVA-PCA. PCA loadings allowed identification of the molecular and fragment ions that provided the most significant chemical differences. A standardized profiling method for phenolic compounds showed that important discriminating ions were not phenolic compounds. The elution times of the discriminating ions and previous results suggest that they were common sugars and organic acids. ANOVA calculations of the positive and negative ionization MS fingerprints showed that 33% of the variance came from the cultivar, 59% from the growing treatment, and 8% from analytical uncertainty. Although the positive and negative ionization fingerprints differed significantly, there was no difference in the distribution of variance. High variance of individual masses with cultivars or growing treatment was correlated with high PCA loadings. The ANOVA data suggest that only variables with high variance for analytical uncertainty should be deleted. All other variables represent discriminating masses that allow separation of the samples with respect to cultivar and treatment.
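The ANOVA-PCA variance partition used in this study can be sketched for a single factor: replace each observation by its group-mean deviation, split off the residual, and apply PCA (via SVD) to their sum. All data below are invented, and the published method handles multiple factors plus an analytical-uncertainty term.

```python
import numpy as np

def anova_partition(X, labels):
    """Partition the centered data matrix into a between-group
    ('effect') part for one factor and a within-group residual."""
    X = np.asarray(X, dtype=float)
    grand = X.mean(axis=0)
    effect = np.zeros_like(X)
    for g in np.unique(labels):
        rows = labels == g
        effect[rows] = X[rows].mean(axis=0) - grand
    residual = X - grand - effect
    return effect, residual

# Hypothetical fingerprint matrix: 8 samples x 5 "masses", 2 cultivars.
rng = np.random.default_rng(5)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = rng.normal(0.0, 1.0, (8, 5)) + 3.0 * labels[:, None]  # cultivar shift

effect, residual = anova_partition(X, labels)
ss = lambda M: np.sum(M ** 2)
total = ss(X - X.mean(axis=0))
print(f"cultivar: {ss(effect)/total:.0%}, residual: {ss(residual)/total:.0%}")

# ANOVA-PCA then applies PCA (here via SVD) to effect + residual, so
# that scores separate along the factor when its effect exceeds noise.
U, s, Vt = np.linalg.svd(effect + residual, full_matrices=False)
```

The two sums of squares are orthogonal, so the printed percentages add to 100%, mirroring the 33%/59%/8% decomposition reported in the abstract.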

  5. THE EFFECTS OF DISAGGREGATED SAVINGS ON ECONOMIC GROWTH IN MALAYSIA - GENERALISED VARIANCE DECOMPOSITION ANALYSIS

    OpenAIRE

    Chor Foon Tang; Hooi Hooi Lean

    2009-01-01

    This study examines how much of the variance in economic growth can be explained by various categories of domestic and foreign savings in Malaysia. The bounds testing approach to cointegration and the generalised forecast error variance decomposition technique was used to achieve the objective of this study. The cointegration test results demonstrate that the relationship between economic growth and savings in Malaysia are stable and coalescing in the long run. The variance decomposition find...

  6. Methods and applications of linear models regression and the analysis of variance

    CERN Document Server

    Hocking, Ronald R

    2013-01-01

    Praise for the Second Edition"An essential desktop reference book . . . it should definitely be on your bookshelf." -Technometrics A thoroughly updated book, Methods and Applications of Linear Models: Regression and the Analysis of Variance, Third Edition features innovative approaches to understanding and working with models and theory of linear regression. The Third Edition provides readers with the necessary theoretical concepts, which are presented using intuitive ideas rather than complicated proofs, to describe the inference that is appropriate for the methods being discussed. The book

  7. Structure analysis of simulated molecular clouds with the Delta-variance

    CERN Document Server

    Bertram, Erik; Glover, Simon C O

    2015-01-01

    We employ the Delta-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n_0 = 30, 100 and 300 cm^{-3} that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Delta-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 -> 0) lines. The spectral slopes of the Delta-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth-size relation ranging from 0.4 to 0....

  8. Analysis of variance: is there a difference in means and what does it mean?

    Science.gov (United States)

    Kao, Lillian S; Green, Charles E

    2008-01-01

    To critically evaluate the literature and to design valid studies, surgeons require an understanding of basic statistics. Despite the increasing complexity of reported statistical analyses in surgical journals and the decreasing use of inappropriate statistical methods, errors such as in the comparison of multiple groups still persist. This review introduces the statistical issues relating to multiple comparisons, describes the theoretical basis behind analysis of variance (ANOVA), discusses the essential differences between ANOVA and multiple t-tests, and provides an example of the computations and computer programming used in performing ANOVA.
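The distinction the review draws between ANOVA and multiple t-tests can be sketched numerically. The following is a hypothetical illustration, not the authors' computations: three groups drawn from the same population are compared once with a one-way ANOVA, and then with all pairwise t-tests, showing how the family-wise error rate grows with the number of comparisons.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three hypothetical treatment groups drawn from the SAME population,
# so any "significant" pairwise difference would be a false positive.
groups = [rng.normal(loc=50.0, scale=5.0, size=20) for _ in range(3)]

# One-way ANOVA: a single test of H0 "all group means are equal".
f_stat, p_anova = stats.f_oneway(*groups)

# Naive alternative: all pairwise t-tests. With k groups there are
# k*(k-1)/2 comparisons, and the family-wise error rate grows with k.
pairwise_p = [stats.ttest_ind(groups[i], groups[j]).pvalue
              for i in range(3) for j in range(i + 1, 3)]

# Family-wise error rate for m independent tests, each at alpha = 0.05:
m = len(pairwise_p)
fwer = 1 - (1 - 0.05) ** m
print(f"ANOVA p = {p_anova:.3f}; {m} pairwise tests, FWER ~ {fwer:.3f}")
```

With only three groups the inflation is already noticeable (about 0.14 rather than 0.05); with more groups the pairwise approach degrades quickly.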

  9. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model.

    Science.gov (United States)

    Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M

    2015-11-01

    This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46) which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL which can be downloaded from www.epigear.com.
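The IVhet variance structure described above can be sketched as follows. This is a hedged reconstruction based on the abstract (a fixed-effect point estimate whose variance inflates each study variance by a DerSimonian-Laird between-study variance), not the MetaXL implementation; the study data are hypothetical.

```python
import numpy as np

def ivhet(theta, v):
    """Sketch of the IVhet pooled estimate: fixed-effect point estimate
    with a quasi-likelihood variance in which each study's variance is
    inflated by the DerSimonian-Laird tau^2."""
    theta, v = np.asarray(theta, float), np.asarray(v, float)
    w = 1.0 / v                          # inverse-variance (fixed-effect) weights
    pooled = np.sum(w * theta) / np.sum(w)
    # DerSimonian-Laird between-study variance estimate
    q = np.sum(w * (theta - pooled) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c)
    w_norm = w / np.sum(w)
    var_pooled = np.sum(w_norm ** 2 * (v + tau2))
    return pooled, var_pooled

# Hypothetical log-odds-ratios and their variances from five studies
theta = [0.10, -0.20, 0.35, 0.05, -0.10]
v = [0.04, 0.09, 0.25, 0.02, 0.16]
est, var = ivhet(theta, v)
```

Because tau^2 is non-negative, the IVhet variance is never smaller than the plain fixed-effect variance, which is the abstract's point about retaining correct coverage under heterogeneity.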

  10. Identification of mitochondrial proteins of malaria parasite using analysis of variance.

    Science.gov (United States)

    Ding, Hui; Li, Dongmei

    2015-02-01

    As a parasitic protozoan, Plasmodium falciparum (P. falciparum) can cause malaria. The mitochondrial proteins of malaria parasite play important roles in the discovery of anti-malarial drug targets. Thus, accurate identification of mitochondrial proteins of malaria parasite is a key step for understanding their functions and finding potential drug targets. In this work, we developed a sequence-based method to identify the mitochondrial proteins of malaria parasite. At first, we extended adjoining dipeptide composition to g-gap dipeptide composition for discretely formulating the protein sequences. Subsequently, the analysis of variance (ANOVA) combined with incremental feature selection (IFS) was used to pick out the optimal features. Finally, the jackknife cross-validation was used to evaluate the performance of the proposed model. Evaluation results showed that the maximum accuracy of 97.1% could be achieved by using 101 optimal 5-gap dipeptides. The comparison with previous methods demonstrated that our method was accurate and efficient.
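The ANOVA step of the feature-selection pipeline described above amounts to computing a one-way F statistic per feature and ranking features by it. A minimal sketch on hypothetical data (the g-gap dipeptide encoding and the incremental feature selection loop are omitted):

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F statistic per feature (column of X), as used to
    rank features before incremental feature selection. A larger F means
    the feature separates the classes better."""
    X, y = np.asarray(X, float), np.asarray(y)
    classes = np.unique(y)
    n, k = len(y), len(classes)
    grand = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 30)              # two classes, 30 samples each
X = rng.normal(size=(60, 3))           # three hypothetical features
X[y == 1, 0] += 2.0                    # only feature 0 is informative
scores = anova_f_scores(X, y)
ranking = np.argsort(scores)[::-1]     # feature 0 should rank first
```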

  11. Modeling the Variance of Variance Through a Constant Elasticity of Variance Generalized Autoregressive Conditional Heteroskedasticity Model

    OpenAIRE

    Saedi, Mehdi; Wolk, Jared

    2012-01-01

    This paper compares a standard GARCH model with a Constant Elasticity of Variance GARCH model across three major currency pairs and the S&P 500 index. We discuss the advantages and disadvantages of using a more sophisticated model designed to estimate the variance of variance instead of assuming it to be a linear function of the conditional variance. The current stochastic volatility and GARCH analogues rest upon this linear assumption. We are able to confirm through empirical estimation ...

  12. Complex evaluation of moulding sand properties by multi-factor analysis of variance

    Directory of Open Access Journals (Sweden)

    A. Smoliński

    2008-08-01

    The article presents the statistical evaluation of selected properties of moulding sands with additions of various binders. A utilitarian objective of this study was to determine the possibility of using coal dust as an additive to sands to protect castings made in these sands from burn-on defects. Another objective of the study was to investigate the possibility of eliminating the protective coatings in view of the high cost of their application. The investigations were carried out on mixtures based on silica sand with binders, i.e. P26 flocculant, a complex compound of vegetable origin, and Gitar, a waste material formed during the manufacture of hydrogen cyanide, with an addition of coal dust. Applying the multi-factor analysis of variance, the complex effect of the sand chemical composition and of the drying time and temperature on dry compression strength Rcs was analysed

  13. Analysis of variance on thickness and electrical conductivity measurements of carbon nanotube thin films

    Science.gov (United States)

    Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard

    2016-09-01

    One of the major challenges towards controlling the transfer of electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify the variations in bulk properties when the nanotubes are used as the reinforcement material. In this study, we conducted one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing the data collected from both experienced and inexperienced operators, we found some operational details users might overlook that resulted in variations, since conductivity measurements of CNT thin films are very sensitive to thickness measurements. In addition, we demonstrated how issues in measurements damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity measurement results. Based on this study, we proposed a faster, more reliable approach to measure the thickness of CNT thin films that operators can follow to make these measurement processes less dependent on operator skills.

  14. [Discussion of errors and measuring strategies in morphometry using analysis of variance].

    Science.gov (United States)

    Rother, P; Jahn, W; Fitzl, G; Wallmann, T; Walter, U

    1986-01-01

    Statistical techniques known as the analysis of variance make it possible for the morphologist to plan work in such a way as to obtain quantitative data with the greatest possible economy of effort. This paper explains how to decide how many measurements to make per micrograph, how many micrographs to take per tissue block or organ, and how many organs or individuals are needed to obtain results of sufficient precision. The examples furnished have been taken from measuring volume densities of mitochondria in heart muscle cells and from cell counting in lymph nodes. Finally, we show how to determine sample sizes when the aim is to demonstrate significant differences between mean values. PMID:3569811

  15. New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces

    Science.gov (United States)

    Sidick, Erkin

    2010-01-01

    Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).

  16. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review

    CERN Document Server

    Malkin, Zinovy

    2016-01-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the deviations of frequency standards. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with the clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. Besides, some physically connected scalar time series naturally form series of multi-dimensional vectors. For example, three station coordinates time series $X$, $Y$, and $Z$ can be combined to analyze 3D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multi-dimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multi-dimensional AVAR (MAVAR), and weighted multi-dimensional AVAR (WMAVAR), were introduced to overcome these ...

  17. Analysis of variance of communication latencies in anesthesia: comparing means of multiple log-normal distributions.

    Science.gov (United States)

    Ledolter, Johannes; Dexter, Franklin; Epstein, Richard H

    2011-10-01

    Anesthesiologists rely on communication over periods of minutes. The analysis of latencies between when messages are sent and responses obtained is an essential component of practical and regulatory assessment of clinical and managerial decision-support systems. Latency data including times for anesthesia providers to respond to messages have moderate (> n = 20) sample sizes, large coefficients of variation (e.g., 0.60 to 2.50), and heterogeneous coefficients of variation among groups. Highly inaccurate results are obtained both by performing analysis of variance (ANOVA) in the time scale or by performing it in the log scale and then taking the exponential of the result. To overcome these difficulties, one can perform calculation of P values and confidence intervals for mean latencies based on log-normal distributions using generalized pivotal methods. In addition, fixed-effects 2-way ANOVAs can be extended to the comparison of means of log-normal distributions. Pivotal inference does not assume that the coefficients of variation of the studied log-normal distributions are the same, and can be used to assess the proportional effects of 2 factors and their interaction. Latency data can also include a human behavioral component (e.g., complete other activity first), resulting in a bimodal distribution in the log-domain (i.e., a mixture of distributions). An ANOVA can be performed on a homogeneous segment of the data, followed by a single group analysis applied to all or portions of the data using a robust method, insensitive to the probability distribution.
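The generalized pivotal approach mentioned above can be sketched by Monte Carlo simulation of pivotal quantities for the mean of a single log-normal sample. This is an illustrative reconstruction under textbook assumptions, not the authors' procedure; the latency data below are simulated, and the extension to two-way comparisons is omitted.

```python
import numpy as np

def lognormal_mean_ci(x, alpha=0.05, draws=20000, seed=0):
    """Monte Carlo generalized pivotal confidence interval for the mean
    of a log-normal sample (a sketch; exact details vary by author).
    x must contain positive values, e.g. response latencies."""
    rng = np.random.default_rng(seed)
    y = np.log(np.asarray(x, float))
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    z = rng.standard_normal(draws)
    u = rng.chisquare(n - 1, draws)
    g_sigma2 = (n - 1) * s2 / u                 # pivot for sigma^2
    g_mu = ybar - z * np.sqrt(g_sigma2 / n)     # pivot for mu
    g_mean = np.exp(g_mu + g_sigma2 / 2)        # pivot for E[X]
    return np.percentile(g_mean, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(42)
latencies = rng.lognormal(mean=1.0, sigma=0.8, size=40)  # hypothetical data
lo, hi = lognormal_mean_ci(latencies)
```

Note that the interval is for exp(mu + sigma^2/2), the mean on the original time scale, which is why naively exponentiating a log-scale ANOVA result is inaccurate.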

  18. Analysis of variance with unbalanced data: an update for ecology & evolution.

    Science.gov (United States)

    Hector, Andy; von Felten, Stefanie; Schmid, Bernhard

    2010-03-01

    1. Factorial analysis of variance (anova) with unbalanced (non-orthogonal) data is a commonplace but controversial and poorly understood topic in applied statistics. 2. We explain that anova calculates the sum of squares for each term in the model formula sequentially (type I sums of squares) and show how anova tables of adjusted sums of squares are composite tables assembled from multiple sequential analyses. A different anova is performed for each explanatory variable or interaction so that each term is placed last in the model formula in turn and adjusted for the others. 3. The sum of squares for each term in the analysis can be calculated after adjusting only for the main effects of other explanatory variables (type II sums of squares) or, controversially, for both main effects and interactions (type III sums of squares). 4. We summarize the main recent developments and emphasize the shift away from the search for the 'right' anova table in favour of presenting one or more models that best suit the objectives of the analysis.
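The order dependence of type I (sequential) sums of squares with unbalanced data can be demonstrated directly. A minimal sketch, assuming a hypothetical unbalanced two-factor layout: the sum of squares attributed to factor B changes depending on whether factor A entered the model first.

```python
import numpy as np

def ss_explained(y, X):
    """Sum of squares explained by the columns of X (including an
    intercept), computed via least squares: total SS minus residual SS."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return np.sum((y - y.mean()) ** 2) - np.sum(resid ** 2)

rng = np.random.default_rng(3)
# Unbalanced 2x2 layout: unequal cell counts make A and B non-orthogonal
a = np.array([0] * 12 + [1] * 4 + [0] * 3 + [1] * 9)   # factor A
b = np.array([0] * 12 + [0] * 4 + [1] * 3 + [1] * 9)   # factor B
y = 1.0 + 0.8 * a + 0.5 * b + rng.normal(scale=0.5, size=28)

ones = np.ones_like(y)
XA = np.column_stack([ones, a])
XB = np.column_stack([ones, b])
XAB = np.column_stack([ones, a, b])

# Type I (sequential) SS for B depends on whether A entered first:
ssb_after_a = ss_explained(y, XAB) - ss_explained(y, XA)
ssb_alone = ss_explained(y, XB)
order_matters = abs(ssb_after_a - ssb_alone) > 1e-6
```

With balanced (orthogonal) data the two quantities would coincide, which is why the controversy only arises for non-orthogonal designs.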

  19. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we have formulated this method in inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and have derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method

  20. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

    Energy Technology Data Exchange (ETDEWEB)

    Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

    2012-01-01

    Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.

  1. The term structure of variance swap rates and optimal variance swap investments

    OpenAIRE

    Egloff, Daniel; Leippold, Markus; Liuren WU

    2010-01-01

    This paper performs specification analysis on the term structure of variance swap rates on the S&P 500 index and studies the optimal investment decision on the variance swaps and the stock index. The analysis identifies two stochastic variance risk factors, which govern the short and long end of the variance swap term structure variation, respectively. The highly negative estimate for the market price of variance risk makes it optimal for an investor to take short positions in a short-term va...

  2. FORTRAN IV Program for One-Way Analysis of Variance with A Priori or A Posteriori Mean Comparisons

    Science.gov (United States)

    Fordyce, Michael W.

    1977-01-01

    A flexible Fortran program for computing one-way analysis of variance is described. Requiring minimal core space, the program provides a variety of useful group statistics, all summary statistics for the analysis, and all mean comparisons for a priori or a posteriori testing. (Author/JKS)

  3. Analysis of the anomalous scale-dependent behavior of dispersivity using straightforward analytical equations: Flow variance vs. dispersion

    Energy Technology Data Exchange (ETDEWEB)

    Looney, B.B. [E.I. du Pont de Nemours and Co., Aiken, SC (United States). Savannah River Lab.; Scott, M.T. [Clemson Univ., SC (United States)

    1988-12-31

    Recent field and laboratory data have confirmed that apparent dispersivity is a function of the flow distance of the measurement. This scale effect is not consistent with classical advection dispersion modeling often used to describe the transport of solutes in saturated porous media. Many investigators attribute this anomalous behavior to the fact that the spreading of solute is actually the result of the heterogeneity of subsurface materials and the wide distribution of flow paths and velocities available in such systems. An analysis using straightforward analytical equations confirms this hypothesis. An analytical equation based on a flow variance approach matches available field data when a variance description of approximately 0.4 is employed. Also, current field data provide a basis for statistical selection of the variance parameter based on the level of concern related to the resulting calculated concentration. While the advection dispersion approach often yielded reasonable predictions, continued development of statistical and stochastic techniques will provide more defensible and mechanistically descriptive models.

  4. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    Science.gov (United States)

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the frequency standards deviations. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, three station coordinates time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series. PMID:26540681
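The classical AVAR reviewed above is simple to compute: it is half the mean squared difference between consecutive averages over windows of length tau. A minimal sketch on simulated white noise (the weighted and multidimensional variants discussed in the review are omitted); for white noise, AVAR scales as 1/tau.

```python
import numpy as np

def allan_variance(x, tau=1):
    """Classical (non-overlapping) Allan variance at averaging length tau:
    half the mean squared difference of consecutive tau-sample means."""
    x = np.asarray(x, float)
    n = len(x) // tau
    means = x[:n * tau].reshape(n, tau).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(7)
white = rng.standard_normal(4096)      # simulated white-noise time series
avar1 = allan_variance(white, tau=1)
avar4 = allan_variance(white, tau=4)
# For white noise AVAR(tau) ~ sigma^2 / tau, so avar4 should be ~avar1/4:
ratio = avar1 / avar4
```

Departures from the 1/tau slope are what make AVAR useful for diagnosing flicker or random-walk noise in astrometric and geodetic series.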

  5. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  6. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA)

    OpenAIRE

    Rakesh,; Satish Kumar

    2015-01-01

    The aim of present study is to determine the most significant input parameter such as welding current, arc voltage and root gap during the Metal Inert Gas Welding (MIG) of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of weld specimen are investigated in this study. The selected three input parameters were varied at three levels. On the analogy, nine experiments were performed based on L9 orthogonal array of Taguchi’s methodology, which c...

  7. Variance Risk Premiums and Predictive Power of Alternative Forward Variances in the Corn Market

    OpenAIRE

    Zhiguang Wang; Scott W. Fausti; Qasmi, Bashir A.

    2010-01-01

    We propose a fear index for corn using the variance swap rate synthesized from out-of-the-money call and put options as a measure of implied variance. Previous studies estimate implied variance based on the Black (1976) model or forecast variance using GARCH models. Our implied variance approach, based on the variance swap rate, is model independent. We compute the daily 60-day variance risk premiums based on the difference between the realized variance and implied variance for the period from 19...

  8. Explaining the Variance of Price Dividend Ratios

    OpenAIRE

    Cochrane, John H.

    1989-01-01

    This paper presents a bound on the variance of the price-dividend ratio and a decomposition of the variance of the price-dividend ratio into components that reflect variation in expected future discount rates and variation in expected future dividend growth. Unobserved discount rates needed to make the variance bound and variance decomposition hold are characterized, and the variance bound and variance decomposition are tested for several discount rate models, including the consumption based ...

  9. Longitudinal Analysis of Residual Feed Intake in Mink using Random Regression with Heterogeneous Residual Variance

    DEFF Research Database (Denmark)

    Shirali, Mahmoud; Nielsen, Vivi Hunnicke; Møller, Steen Henrik;

    Heritability of residual feed intake (RFI) increased from low to high over the growing period in male and female mink. The lowest heritability for RFI (male: 0.04 ± 0.01 standard deviation (SD); female: 0.05 ± 0.01 SD) was in the early growth stages and the highest (male: 0.33 ± 0.02 SD; female: 0.34 ± 0.02 SD) was achieved at the late growth stages. The genetic correlation between different growth stages for RFI showed a high association (0.91 to 0.98) between early and late growing periods. However, phenotypic correlations were lower, from 0.29 to 0.50. The residual variances were substantially higher at the end compared to the early growing period, suggesting that heterogeneous residual variance should be considered for analyzing feed efficiency data in mink. This study suggests random regression methods are suitable for analyzing feed efficiency and that genetic selection for RFI in mink is...

  10. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

    Directory of Open Access Journals (Sweden)

    Renard Jean-Paul

    2005-06-01

    Abstract Background Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the anova model is the possibility to evaluate multiple sources of variation in an experiment. Results AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, and (3) to propose two models for a gene's variance estimation and to perform a correction for multiple comparisons. Conclusion AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.

  11. FMRI group analysis combining effect estimates and their variances

    OpenAIRE

    Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Michael S Beauchamp; Cox, Robert W.

    2011-01-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an a...

  12. Variational bayesian method of estimating variance components.

    Science.gov (United States)

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

  13. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    OpenAIRE

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general...

  14. On variance estimate for covariate adjustment by propensity score analysis.

    Science.gov (United States)

    Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

    2016-09-10

    Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553

  15. Comments on the statistical analysis of excess variance in the COBE differential microwave radiometer maps

    Science.gov (United States)

    Wright, E. L.; Smoot, G. F.; Kogut, A.; Hinshaw, G.; Tenorio, L.; Lineweaver, C.; Bennett, C. L.; Lubin, P. M.

    1994-01-01

    Cosmic anisotropy produces an excess variance σ²_sky in the ΔT maps produced by the Differential Microwave Radiometer (DMR) on the Cosmic Background Explorer (COBE) that is over and above the instrument noise. After smoothing to an effective resolution of 10 deg, this excess, σ_sky(10 deg), provides an estimate for the amplitude of the primordial density perturbation power spectrum with a cosmic uncertainty of only 12%. We employ detailed Monte Carlo techniques to express the amplitude derived from this statistic in terms of the universal root mean square (rms) quadrupole amplitude, ⟨Q²_rms⟩^(1/2). The effects of monopole and dipole subtraction and the non-Gaussian shape of the DMR beam cause the derived ⟨Q²_rms⟩^(1/2) to be 5%-10% larger than would be derived using simplified analytic approximations. We also investigate the properties of two other map statistics: the actual quadrupole and the Boughn-Cottingham statistic. Both the σ_sky(10 deg) statistic and the Boughn-Cottingham statistic are consistent with the ⟨Q²_rms⟩^(1/2) = 17 ± 5 μK reported by Smoot et al. (1992) and Wright et al. (1992).

  16. Spectral and chromatographic fingerprinting with analysis of variance-principal component analysis (ANOVA-PCA): a useful tool for differentiating botanicals and characterizing sources of variance

    Science.gov (United States)

    Objectives: Spectral fingerprints, acquired by direct injection (no separation) mass spectrometry (DI-MS) or liquid chromatography with UV detection (HPLC), in combination with ANOVA-PCA, were used to differentiate 15 powders of botanical materials. Materials and Methods: Powders of 15 botanical mat...

  17. The Variance of Language in Different Contexts

    Institute of Scientific and Technical Information of China (English)

    申一宁

    2012-01-01

    Language can be quite different in meaning in different contexts. There are three categories of context: the culture, the situation, and the co-text. In this article, we analyse the variance of language in each of these three aspects. The article is written to help people better understand the meaning of language in a specific context.

  18. An introduction to analysis of variance (ANOVA) with special reference to data from clinical experiments in optometry.

    Science.gov (United States)

    Armstrong, R A; Slade, S V; Eperjesi, F

    2000-05-01

    This article is aimed primarily at eye care practitioners who are undertaking advanced clinical research, and who wish to apply analysis of variance (ANOVA) to their data. ANOVA is a data analysis method of great utility and flexibility. This article describes why and how ANOVA was developed, the basic logic which underlies the method and the assumptions that the method makes for it to be validly applied to data from clinical experiments in optometry. The application of the method to the analysis of a simple data set is then described. In addition, the methods available for making planned comparisons between treatment means and for making post hoc tests are evaluated. The problem of determining the number of replicates or patients required in a given experimental situation is also discussed.

  19. MEASURING DRIVERS’ EFFECT IN A COST MODEL BY MEANS OF ANALYSIS OF VARIANCE

    Directory of Open Access Journals (Sweden)

    Maria Elena Nenni

    2013-01-01

    In this study the author analyses a cost model developed for Integrated Logistic Support (ILS) activities. By means of ANOVA, the impact and interaction among cost drivers are evaluated. The predominant importance of organizational factors compared to technical ones is clearly demonstrated. Moreover, the paper provides researchers and practitioners with useful information to improve the cost model as well as for budgeting and financial planning of ILS activities.

  20. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
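
The ANOVA algebra behind this approach can be illustrated with a toy two-level Monte Carlo experiment (a simplified sketch under invented assumptions, not the authors' formulae): the variance of the per-run mean output splits into a between-run component, due to input uncertainty, and a within-run component, due to patient-level noise.

```python
# Simplified sketch of the variance decomposition behind Monte Carlo PSA
# for patient-level models: Var(run mean) = between-run variance
# (parameter uncertainty) + within-run variance / patients per run.
import random

random.seed(0)
N_OUTER, N_INNER = 2000, 50   # illustrative numbers of runs and patients

run_means, within_vars = [], []
for _ in range(N_OUTER):
    theta = random.gauss(0.0, 1.0)                       # sampled model input
    patients = [theta + random.gauss(0.0, 1.0) for _ in range(N_INNER)]
    m = sum(patients) / N_INNER
    run_means.append(m)
    within_vars.append(sum((x - m) ** 2 for x in patients) / (N_INNER - 1))

grand = sum(run_means) / N_OUTER
total_var = sum((m - grand) ** 2 for m in run_means) / (N_OUTER - 1)
mean_within = sum(within_vars) / N_OUTER
# ANOVA-style estimate of the between-run (input-uncertainty) variance:
between_var = total_var - mean_within / N_INNER
```

Here `between_var` estimates the variance attributable to input uncertainty (the quantity PSA cares about) without requiring enormous patient samples in every run; sample-size formulae like those in the paper trade off N_OUTER against N_INNER.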

  1. Cyclostationary analysis with logarithmic variance stabilisation

    Science.gov (United States)

    Borghesani, Pietro; Shahriar, Md Rifat

    2016-03-01

    Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.

  2. An analysis of the factors generating the variance between the budgeted and actual operating results of the Naval Aviation Depot at North Island, California

    OpenAIRE

    Curran, Thomas; Schimpff, Joshua J.

    2008-01-01

    For six of the past eight years, naval aviation depot-level maintenance activities have encountered operating losses that were not anticipated in the Navy Working Capital Fund (NWCF) budgets. These unanticipated losses resulted in increases, or surcharges, to the stabilized rates as an offset. This project conducts a variance analysis to uncover possible causes of the unanticipated losses. The variance analysis between budgeted (projected) and actual financial results was performed on fina...

  3. Determination of Significant Process Parameter in Metal Inert Gas Welding of Mild Steel by using Analysis of Variance (ANOVA)

    Directory of Open Access Journals (Sweden)

    Rakesh

    2015-11-01

    Full Text Available The aim of the present study is to determine the most significant input parameters, such as welding current, arc voltage and root gap, during Metal Inert Gas Welding (MIG) of Mild Steel 1018 grade by Analysis of Variance (ANOVA). The hardness and tensile strength of the weld specimen are investigated in this study. The three selected input parameters were varied at three levels. Accordingly, nine experiments were performed based on the L9 orthogonal array of Taguchi's methodology, which accommodates three input parameters. Root gap has the greatest effect on tensile strength, followed by welding current and arc voltage. Arc voltage has the greatest effect on hardness, followed by root gap and welding current. The weld metal consists of fine grains of ferrite and pearlite.

  4. Application of SPSS Software in Multivariate Analysis of Variance

    Institute of Scientific and Technical Information of China (English)

    龚江; 石培春; 李春燕

    2012-01-01

    An example of a two-factor, completely randomized design with replication was used to illustrate the detailed process of analysis of variance in the SPSS software, including data input, analysis of the sources of variation, the results of the analysis of variance, and the test of significance. Finally, precautions for conducting analysis of variance with SPSS were discussed, providing a reference for scientific researchers performing analysis of variance with SPSS.

  5. Biomarker profiling and reproducibility study of MALDI-MS measurements of Escherichia coli by analysis of variance-principal component analysis.

    Science.gov (United States)

    Chen, Ping; Lu, Yao; Harrington, Peter B

    2008-03-01

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) has proved useful for the characterization of bacteria and the detection of biomarkers. Key challenges for MALDI-MS measurements of bacteria are overcoming the relatively large variability in peak intensities. A soft tool, combining analysis of variance and principal component analysis (ANOVA-PCA) (Harrington, P. D.; Vieira, N. E.; Chen, P.; Espinoza, J.; Nien, J. K.; Romero, R.; Yergey, A. L. Chemom. Intell. Lab. Syst. 2006, 82, 283-293. Harrington, P. D.; Vieira, N. E.; Espinoza, J.; Nien, J. K.; Romero, R.; Yergey, A. L. Anal. Chim. Acta. 2005, 544, 118-127) was applied to investigate the effects of the experimental factors associated with MALDI-MS studies of microorganisms. The variance of the measurements was partitioned with ANOVA and the variance of target factors combined with the residual error was subjected to PCA to provide an easy to understand statistical test. The statistical significance of these factors can be visualized with 95% Hotelling T2 confidence intervals. ANOVA-PCA is useful to facilitate the detection of biomarkers in that it can remove the variance corresponding to other experimental factors from the measurements that might be mistaken for a biomarker. Four strains of Escherichia coli at four different growth ages were used for the study of reproducibility of MALDI-MS measurements. ANOVA-PCA was used to disclose potential biomarker proteins associated with different growth stages.
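
The mechanics of ANOVA-PCA can be illustrated with a small, entirely synthetic sketch (the data, factor layout and effect sizes below are invented; the real study used MALDI-MS spectra of four E. coli strains): the factor-level mean matrix is computed, the residual error is added back to it, and PCA is applied to the sum so that the factor's significance can be judged against the residual variation.

```python
# Hypothetical ANOVA-PCA sketch: partition the data matrix into a
# factor-level mean matrix plus residuals, then apply PCA (via SVD)
# to the factor effect with residuals added back.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "spectra": 12 samples x 20 variables, two strains, six samples each
strain = np.repeat([0, 1], 6)
X = rng.normal(size=(12, 20))
X[strain == 1, :3] += 3.0            # invented strain effect on 3 variables

grand = X.mean(axis=0)
# Factor-level mean matrix: each sample's row replaced by its level mean
strain_means = np.vstack([X[strain == s].mean(axis=0) for s in strain]) - grand
# Residuals (in this one-factor toy; with more factors their mean
# matrices would be subtracted here as well)
residuals = X - grand - strain_means

# PCA of factor effect + residuals
M = strain_means + residuals
U, S, Vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
scores = U * S                        # PC scores; PC1 should separate strains
```

In the full method, factors whose level means separate beyond the residual scatter (visualized with Hotelling T2 confidence ellipses) are deemed significant, and the corresponding loadings point to candidate biomarker peaks.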

  6. Analysis of NDVI variance across landscapes and seasons allows assessment of degradation and resilience to shocks in Mediterranean dry ecosystems

    Science.gov (United States)

    liniger, hanspeter; jucker riva, matteo; schwilch, gudrun

    2016-04-01

    Mapping and assessment of desertification is a primary basis for effective management of dryland ecosystems. Vegetation cover and biomass density are key elements for the ecological functioning of dry ecosystems and, at the same time, effective indicators of desertification, land degradation and sustainable land management. The Normalized Difference Vegetation Index (NDVI) is widely used to estimate vegetation density and cover. However, the reflectance of vegetation, and thus NDVI values, are influenced by several factors such as the type of canopy, the type of land use and seasonality. For example, low NDVI values could be associated with a degraded forest, with a healthy forest under dry climatic conditions, with an area used as pasture, or with an area managed to reduce the fuel load. We propose a simple method to analyse the variance of the NDVI signal considering the main factors that shape the vegetation. This variance analysis enables us to detect and categorize degradation much more precisely than simple NDVI analysis. The methodology comprises, first, identifying homogeneous landscape areas in terms of aspect, slope, land use and disturbance regime (if relevant). Secondly, the NDVI is calculated from Landsat multispectral images and the vegetation potential for each landscape is determined based on a percentile (the highest 10% of values). Thirdly, the difference between the NDVI value of each pixel and the potential is used to establish degradation categories. Through this methodology, we are able to identify realistic objectives for restoration, allowing a targeted choice of management options for degraded areas. For example, afforestation would only be done in areas that show potential for forest growth. Moreover, we can measure the effectiveness of management practices in terms of vegetation growth across different landscapes and conditions. Additionally, the same methodology can be applied to a time series of multispectral images, allowing detection and quantification of
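
The three-step procedure described above can be sketched as follows (a hypothetical illustration: the NDVI values, landscape labels and category thresholds are all invented, and real data would come from Landsat pixels grouped by aspect, slope and land use):

```python
# Hypothetical sketch of the percentile-based degradation assessment:
# within each homogeneous landscape unit, the vegetation potential is
# the top-decile NDVI, and each pixel's deficit relative to that
# potential is binned into degradation categories.
import numpy as np

rng = np.random.default_rng(42)
ndvi = rng.uniform(0.1, 0.8, size=1000)          # per-pixel NDVI values
landscape = rng.integers(0, 3, size=1000)        # homogeneous-unit labels

deficit = np.empty_like(ndvi)
for unit in np.unique(landscape):
    mask = landscape == unit
    potential = np.percentile(ndvi[mask], 90)    # "highest 10%" potential
    deficit[mask] = potential - ndvi[mask]

# Illustrative thresholds: 0 = near potential ... 3 = strongly degraded
categories = np.digitize(deficit, bins=[0.1, 0.3, 0.5])
```

Because the deficit is measured against each unit's own potential, a pixel is only flagged as degraded relative to what comparable land in the same landscape can support, which is the point of the variance analysis.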

  7. VARIANCE ANALYSIS OF WOOL WOVEN FABRICS TENSILE STRENGTH USING ANCOVA MODEL

    OpenAIRE

    VÎLCU Adrian; HRISTIAN Liliana; BORDEIANU Demetra Lăcrămioara

    2014-01-01

    The paper has conducted a study on the variation of tensile strength for four woven fabrics made from wool type yarns depending on fiber composition, warp and weft yarns tensile strength and technological density using ANCOVA regression model. In instances where surveyed groups may have a known history of responding to questions differently, rather than using the traditional sharing method to address those differences, analysis of covariance (ANCOVA) can be employed. ANCOVA shows the corre...

  8. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    Science.gov (United States)

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences remains often a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework, is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. PMID:27114219

  10. VARIANCE ANALYSIS OF WOOL WOVEN FABRICS TENSILE STRENGTH USING ANCOVA MODEL

    Directory of Open Access Journals (Sweden)

    VÎLCU Adrian

    2014-05-01

    Full Text Available The paper has conducted a study on the variation of tensile strength for four woven fabrics made from wool type yarns depending on fiber composition, warp and weft yarns tensile strength and technological density using the ANCOVA regression model. In instances where surveyed groups may have a known history of responding to questions differently, rather than using the traditional sharing method to address those differences, analysis of covariance (ANCOVA) can be employed. ANCOVA shows the correlation between a dependent variable and the covariate independent variables and removes the variability from the dependent variable that can be accounted for by the covariates. The independent and dependent variable structures for Multiple Regression, factorial ANOVA and ANCOVA tests are similar. ANCOVA is differentiated from the other two in that it is used when the researcher wants to neutralize the effect of a continuous independent variable in the experiment. The researcher may simply not be interested in the effect of a given independent variable when performing a study. Another situation where ANCOVA should be applied is when an independent variable has a strong correlation with the dependent variable, but does not interact with other independent variables in predicting the dependent variable's value. ANCOVA is used to neutralize the effect of the more powerful, non-interacting variable. Without this intervention measure, the effects of interacting independent variables can be clouded.

  11. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  13. Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse

    Institute of Scientific and Technical Information of China (English)

    Hao Qin; Xin Sun; Jun Yan; Qi-Ming Hou; Zhong Ren; Kun Zhou

    2016-01-01

    In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.

  14. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight.

    Science.gov (United States)

    Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David

    2016-01-01

    Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss. PMID:27491895
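
Effect sizes for this kind of meta-analysis of variability can be computed per study before pooling; the sketch below uses the bias-corrected log variance ratio (lnVR) in its commonly used form, with invented inputs, as an assumption rather than the authors' exact code:

```python
# Hedged sketch of a variance-based effect size for meta-analysis:
# the bias-corrected log variance ratio (lnVR) between a treatment
# and a control group, with its approximate sampling variance.
import math

def ln_var_ratio(sd_t, n_t, sd_c, n_c):
    """Bias-corrected log ratio of standard deviations and its sampling variance."""
    lnvr = math.log(sd_t / sd_c) + 1.0 / (2 * (n_t - 1)) - 1.0 / (2 * (n_c - 1))
    sampling_var = 1.0 / (2 * (n_t - 1)) + 1.0 / (2 * (n_c - 1))
    return lnvr, sampling_var

# Equal spread and equal sample sizes -> effect size is exactly zero
effect, var = ln_var_ratio(sd_t=4.0, n_t=30, sd_c=4.0, n_c=30)
```

A positive pooled lnVR across studies would indicate the treatment (here, an LC ad libitum diet) produces more variable outcomes than the comparator, which is the pattern the authors report.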

  16. The Benefit of Regional Diversification of Cogeneration Investments in Europe: A Mean-Variance Portfolio Analysis

    OpenAIRE

    Westner, Günther; Madlener, Reinhard

    2009-01-01

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standar...

  17. Budget variance analysis of a departmentwide implementation of a PACS at a major academic medical center.

    Science.gov (United States)

    Reddy, Arra Suresh; Loh, Shaun; Kane, Robert A

    2006-01-01

    In this study, the costs and cost savings associated with departmentwide implementation of a picture archiving and communication system (PACS) as compared to the projected budget at the time of inception were evaluated. An average of $214,460 was saved each year with a total savings of $1,072,300 from 1999 to 2003, which is significantly less than the $2,943,750 projected savings. This discrepancy can be attributed to four different factors: (1) overexpenditures, (2) insufficient cost savings, (3) unanticipated costs, and (4) project management issues. Although the implementation of PACS leads to cost savings, actual savings will be much lower than expected unless extraordinary care is taken when devising the budget. PMID:16946989

  18. Multiplicative correction of subject effect as preprocessing for analysis of variance.

    Science.gov (United States)

    Nemoto, Iku; Abe, Masaya; Kotani, Makoto

    2008-03-01

    The procedure of repeated-measures ANOVA assumes a linear model in which the effects of both subjects and experimental conditions are additive. However, in electroencephalography and magnetoencephalography there may be situations where subject effects should be considered multiplicative in amplitude. We propose a simple method to normalize such data by multiplying each subject's response by a subject-specific constant. This paper derives ANOVA tables for such normalized data. The present simulations show that this method performs ANOVA effectively, including multiple comparisons, provided that the data follow the multiplicative model.
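
The normalization idea can be sketched in a few lines (an illustration with invented numbers, not the authors' procedure): each subject's responses are rescaled so that all subjects share the same overall amplitude before the repeated-measures ANOVA is run.

```python
# Minimal sketch of multiplicative subject correction: multiply each
# subject's responses by a subject-specific constant so that every
# subject's overall level equals the grand mean.
import numpy as np

data = np.array([[2.0, 4.0, 6.0],     # subject 1 (small amplitude)
                 [4.0, 8.0, 12.0],    # subject 2 (twice the amplitude)
                 [1.0, 2.0, 3.0]])    # rows = subjects, cols = conditions

subject_means = data.mean(axis=1)     # each subject's overall level
grand_mean = data.mean()
scale = grand_mean / subject_means    # subject-specific constants
normalized = data * scale[:, None]    # condition pattern is preserved
```

After scaling, the multiplicative subject effect is removed (all row means coincide) while the relative pattern across conditions within each subject is untouched, which is what allows the additive ANOVA model to apply.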

  19. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean-variance approach

    DEFF Research Database (Denmark)

    Kitzing, Lena

    2014-01-01

    Different support instruments for renewable energy expose investors differently to market risks. This has implications on the attractiveness of investment. We use mean-variance portfolio analysis to identify the risk implications of two support instruments: feed-in tariffs and feed-in premiums. Using cash flow analysis, Monte Carlo simulations and mean-variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same...
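
The risk asymmetry between the two instruments can be illustrated with a toy Monte Carlo comparison (all prices, levels and distributions below are invented for illustration and are not from the paper): a tariff fixes the revenue per MWh, while a premium adds a fixed amount to an uncertain market price, so the investor bears the price risk.

```python
# Simplified, hypothetical comparison of feed-in tariff vs feed-in premium:
# under a tariff, revenue per MWh is fixed; under a premium, revenue is the
# uncertain market price plus a fixed top-up.
import random

random.seed(7)
TARIFF = 90.0            # EUR/MWh, fixed remuneration (invented)
PREMIUM = 40.0           # EUR/MWh, paid on top of the market price (invented)

prices = [random.gauss(55.0, 12.0) for _ in range(10_000)]  # price draws
tariff_rev = [TARIFF for _ in prices]
premium_rev = [p + PREMIUM for p in prices]

def mean_std(xs):
    m = sum(xs) / len(xs)
    return m, (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

m_t, s_t = mean_std(tariff_rev)      # zero revenue variance
m_p, s_p = mean_std(premium_rev)     # inherits the price variance
```

Under these invented numbers the tariff yields a certain 90 EUR/MWh while the premium yields about 95 EUR/MWh with a standard deviation of about 12: a higher expected return bought with higher risk, which is exactly the trade-off mean-variance analysis quantifies.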

  20. Two-dimensional finite-element temperature variance analysis

    Science.gov (United States)

    Heuser, J. S.

    1972-01-01

    The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.

  1. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  2. A variance analysis of the capacity displaced by wind energy in Europe

    DEFF Research Database (Denmark)

    Giebel, Gregor

    2007-01-01

    Wind energy generation distributed all over Europe is less variable than generation from a single region. To analyse the benefits of distributed generation, the whole electrical generation system of Europe has been modelled, including varying penetrations of wind power. The model chronologically simulates the scheduling of the European power plants to cover the demand at every hour of the year. The wind power generation was modelled using wind speed measurements from 60 meteorological stations for one year. The distributed wind power also displaces fossil-fuelled capacity. However, every assessment of the displaced capacity (or a capacity credit) by means of a chronological model is highly sensitive to single events. Therefore the wind time series was shifted by integer days against the load time series, and the different results were aggregated. The same set of results is shown for two other options, one...

  3. Analysis of Variance in Vocabulary Learning Strategies Theory and Practice: A Case Study in Libya

    Directory of Open Access Journals (Sweden)

    Salma H M Khalifa

    2016-06-01

    Full Text Available The present study is an outcome of a concern for the teaching of English as a foreign language (EFL in Libyan schools. Learning of a foreign language is invariably linked to learners building a good repertoire of vocabulary of the target language, which takes us to the theory and practice of imparting training in vocabulary learning strategies (VLSs to learners. The researcher observed that there exists a divergence in theoretical knowledge of VLSs and practically training learners in using the strategies in EFL classes in Libyan schools. To empirically examine the situation, a survey was conducted with secondary school English teachers. The study discusses the results of the survey. The results show that teachers of English in secondary school in Libya are either not aware of various vocabulary learning strategies, or if they are, they do not impart training in all VLSs as they do not realize that to achieve good results in language learning, a judicious use of all VLSs is required. Though the study was conducted on a small scale, the results are highly encouraging. Keywords: vocabulary learning strategies, vocabulary learning theory, teaching of vocabulary learning strategies

  4. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained...... genetic variance. However, in Holstein cattle, a group of genes that explained close to none of the genetic variance could also have a high likelihood ratio. This is still a good separation of signal and noise, but instead of capturing the genetic signal in the marker set being tested, we are instead...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  5. FMRI group analysis combining effect estimates and their variances.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W

    2012-03-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the universal neuroimaging file transfer (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach
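The precision weighting that MEMA builds on can be illustrated with a minimal fixed-effects sketch (the function name and data below are hypothetical; the actual MEMA model additionally estimates cross-subject variance and handles outliers): each subject's effect estimate is weighted by the inverse of its within-subject variance.

```python
import numpy as np

def weighted_group_effect(betas, within_vars):
    """Inverse-variance (precision) weighted group mean of per-subject
    effect estimates, with its standard error and a z-like statistic."""
    betas = np.asarray(betas, dtype=float)
    w = 1.0 / np.asarray(within_vars, dtype=float)  # precision weights
    mean = float(np.sum(w * betas) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return mean, se, mean / se

# Subjects measured more precisely pull the group estimate harder:
mean, se, z = weighted_group_effect([1.0, 1.2, 0.8, 2.0],
                                    [0.1, 0.1, 0.1, 1.0])
```

Conventional group analysis would weight all four subjects equally; here the noisy fourth subject contributes little to the group estimate.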

  6. Analysis of variance in determinations of equivalence volume and of the ionic product of water in potentiometric titrations.

    Science.gov (United States)

    Braibanti, A; Bruschi, C; Fisicaro, E; Pasquali, M

    1986-06-01

    Homogeneous sets of data from strong acid-strong base potentiometric titrations in aqueous solution at various constant ionic strengths have been analysed by statistical criteria. The aim is to see whether the error distribution matches that for the equilibrium constants determined by competitive potentiometric methods using the glass electrode. The titration curve can be defined when the estimated equivalence volume VEM, with standard deviation (s.d.) sigma (VEM), the standard potential E(0), with s.d. sigma(E(0)), and the operational ionic product of water K(*)(w) (or E(*)(w) in mV), with s.d. sigma(K(*)(w)) [or sigma(E(*)(w))] are known. A special computer program, BEATRIX, has been written which optimizes the values of VEM, E(0) and K(*)(w) by linearization of the titration curve as a Gran plot. Analysis of variance applied to a set of 11 titrations in 1.0M sodium chloride medium at 298 K has demonstrated that the values of VEM belong to a normal population of points corresponding to individual potential/volume data-pairs (E(i); v(i)) of any titration, whereas the values of pK(*)(w) (or of E(*)(w)) belong to a normal population with members corresponding to individual titrations, which is also the case for the equilibrium constants. The intertitration variation is attributable to the electrochemical component of the system and appears as signal noise distributed over the titrations. The correction for junction-potentials, introduced in a further stage of the program by optimization in a Nernst equation, increases the noise, i.e., sigma(pK(*)(w)). This correction should therefore be avoided whenever it causes an increase of sigma(pK(*)(w)). The influence of the ionic medium has been examined by processing data from acid-base titrations in 0.1M potassium chloride and 0.5M potassium nitrate media. The titrations in potassium chloride medium showed the same behaviour as those in sodium chloride medium, but with an s.d. for pK(*)(w) that was smaller and close to the

  7. Simultaneous optimal estimates of fixed effects and variance components in the mixed model

    Institute of Scientific and Technical Information of China (English)

    WU Mixia; WANG Songgui

    2004-01-01

    For a general linear mixed model with two variance components, a set of simple conditions is obtained, under which, (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking negative value is obtained.
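A minimal sketch of the ANOVA (method-of-moments) estimates referred to in point (i), for the simplest case of a balanced one-way random-effects model y_ij = mu + a_i + e_ij (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def anova_variance_components(groups):
    """ANOVA (method-of-moments) variance component estimates for a
    balanced one-way random-effects model y_ij = mu + a_i + e_ij.

    groups: list of equal-length sequences, one per random group.
    Returns (sigma2_a_hat, sigma2_e_hat)."""
    g = [np.asarray(x, dtype=float) for x in groups]
    k, n = len(g), len(g[0])
    grand = np.concatenate(g).mean()
    msa = n * sum((x.mean() - grand) ** 2 for x in g) / (k - 1)       # between
    mse = sum(((x - x.mean()) ** 2).sum() for x in g) / (k * (n - 1))  # within
    return (msa - mse) / n, mse

sigma2_a, sigma2_e = anova_variance_components([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

As point (iii) of the abstract notes, the between-group estimate (MSA - MSE)/n can come out negative whenever MSA < MSE, which is exactly the probability the paper gives an exact expression for.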

  8. Power generation mixes evaluation applying the mean-variance theory. Analysis of the choices for Japanese energy policy

    International Nuclear Information System (INIS)

    Optimal Japanese power generation mixes in 2030, for both economic efficiency and energy security (less cost variance risk), are evaluated by applying the mean-variance portfolio theory. Technical assumptions, including remaining generation capacity out of the present generation mix, future load duration curve, and Research and Development risks for some renewable energy technologies in 2030, are taken into consideration as either the constraints or parameters for the evaluation. Efficiency frontiers, which consist of the optimal generation mixes for several future scenarios, are identified, taking not only power balance but also capacity balance into account, and are compared with three power generation mixes submitted by the Japanese government as 'the choices for energy and environment'. (author)

  9. Teaching research on the calculation of the F statistic in analysis of variance

    Institute of Scientific and Technical Information of China (English)

    叶鸿烈

    2014-01-01

    Using two examples of analysis of variance drawn from practical work, this paper tries to answer two questions often encountered in the teaching of analysis of variance: (1) what problems can analysis of variance solve; and (2) how to explain the calculation of the F statistic in analysis of variance. By comparing two ways of presenting the material in class, it concludes that teaching the calculation of the F statistic from the perspective of the sample data distribution is the more effective approach.
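The calculation the abstract asks how to explain can be written out directly. A small sketch (with hypothetical data) of the one-way ANOVA F statistic as a ratio of between-group to within-group mean squares:

```python
import numpy as np

def one_way_f(groups):
    """F statistic for one-way ANOVA: between-group mean square divided
    by within-group mean square, with its degrees of freedom."""
    g = [np.asarray(x, dtype=float) for x in groups]
    k = len(g)
    N = sum(len(x) for x in g)
    grand = np.concatenate(g).mean()
    ss_between = sum(len(x) * (x.mean() - grand) ** 2 for x in g)
    ss_within = sum(((x - x.mean()) ** 2).sum() for x in g)
    msb = ss_between / (k - 1)   # variance of group means, scaled by group size
    msw = ss_within / (N - k)    # pooled within-group variance
    return msb / msw, (k - 1, N - k)

f_stat, dof = one_way_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

Viewed through the sample data distributions, as the abstract advocates: F is large when the group means spread out more than the within-group scatter alone would predict.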

  10. The variance of the adjusted Rand index.

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J; Hubert, Lawrence

    2016-06-01

    For 30 years, the adjusted Rand index has been the preferred method for comparing 2 partitions (e.g., clusterings) of a set of observations. Although the index is widely used, little is known about its variability. Herein, the variance of the adjusted Rand index (Hubert & Arabie, 1985) is provided and its properties are explored. It is shown that a normal approximation is appropriate across a wide range of sample sizes and varying numbers of clusters. Further, it is shown that confidence intervals based on the normal distribution have desirable levels of coverage and accuracy. Finally, the first power analysis evaluating the ability to detect differences between 2 different adjusted Rand indices is provided. (PsycINFO Database Record) PMID:26881693
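For reference, the index itself (Hubert & Arabie, 1985) is computed from the pair-counting contingency table of the two partitions; a small self-contained sketch:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index of two partitions of the same items:
    pairwise agreement, corrected for chance (Hubert & Arabie, 1985).
    (Undefined when both partitions are trivial, e.g. one big cluster.)"""
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))
    sum_cells = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance level of agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_cells - expected) / (max_index - expected)
```

Identical partitions score 1, and the index goes negative when two partitions agree less than chance would predict, which is where a variance and a normal approximation become useful for significance statements.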

  11. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    Science.gov (United States)

    Milias-Argeitis, Andreas; Lygeros, John; Khammash, Mustafa

    2014-07-01

    We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.

  12. Electrocardiogram signal variance analysis in the diagnosis of coronary artery disease--a comparison with exercise stress test in an angiographically documented high prevalence population.

    Science.gov (United States)

    Nowak, J; Hagerman, I; Ylén, M; Nyquist, O; Sylvén, C

    1993-09-01

    Variance electrocardiography (variance ECG) is a new resting procedure for detection of coronary artery disease (CAD). The method measures variability in the electrical expression of the depolarization phase induced by this disease. The time-domain analysis is performed on 220 cardiac cycles using high-fidelity ECG signals from 24 leads, and the phase-locked temporal electrical heterogeneity is expressed as a nondimensional CAD index (CAD-I) with values of 0-150. This study compares the diagnostic efficiency of variance ECG and the exercise stress test in a high-prevalence population. A total of 199 symptomatic patients evaluated with coronary angiography were subjected to variance ECG and an exercise test on a bicycle ergometer as a continuous ramp. The discriminant accuracy of the two methods was assessed employing the receiver operating characteristic curves constructed by successive consideration of several CAD-I cutpoint values and various threshold criteria based on ST-segment depression exclusively or in combination with exertional chest pain. Of these patients, 175 with CAD (≥50% luminal stenosis in one or more major epicardial arteries) presented a mean CAD-I of 88 ± 22, compared with 70 ± 21 in 24 nonaffected patients (p or = 70, compared with ST-segment depression ≥1 mm combined with exertional chest pain, the overall sensitivity of variance ECG was significantly higher (p < 0.01) than that of the exercise test (79 vs. 48%). When combined, the two methods identified 93% of coronary angiography positive cases. Variance ECG is an efficient diagnostic method which compares favorably with the exercise test for detection of CAD in a high-prevalence population. PMID:8242912

  13. Inhomogeneity-induced variance of cosmological parameters

    CERN Document Server

    Wiegand, Alexander

    2011-01-01

    Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. So, how can local measurements (at the 100 Mpc scale) be used to determine global cosmological parameters (defined at the 10 Gpc scale)? We use Buchert's averaging formalism and determine a set of locally averaged cosmological parameters in the context of the flat Lambda cold dark matter model. We calculate their ensemble means (i.e. their global values) and variances (i.e. their cosmic variances). We apply our results to typical survey geometries and focus on the study of the effects of local fluctuations of the curvature parameter. By this means we show that, in the linear regime, cosmological backreaction and averaging can be reformulated as the issue of cosmic variance. The cosmic variance is found to be largest for the curvature parameter, and we discuss some of its consequences. We further propose to use the observed variance of cosmological parameters t...

  14. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    42 CFR Part 456, Variances for Hospitals and Mental Hospitals; UR Plan: Remote Facility Variances from Time Requirements. § 456.522 Content of request for variance. The agency's request for a variance must include—...

  15. Inhomogeneity-induced variance of cosmological parameters

    Science.gov (United States)

    Wiegand, A.; Schwarz, D. J.

    2012-02-01

    Context. Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. This raises the question of how local measurements (at the ~10^2 Mpc scale) can be used to determine the global cosmological parameters (defined at the ~10^4 Mpc scale)? Aims: We connect the questions of cosmological backreaction, cosmic averaging and the estimation of cosmological parameters and show how they relate to the problem of cosmic variance. Methods: We used Buchert's averaging formalism and determined a set of locally averaged cosmological parameters in the context of the flat Λ cold dark matter model. We calculated their ensemble means (i.e. their global value) and variances (i.e. their cosmic variance). We applied our results to typical survey geometries and focused on the study of the effects of local fluctuations of the curvature parameter. Results: We show that in the context of standard cosmology at large scales (larger than the homogeneity scale and in the linear regime), the question of cosmological backreaction and averaging can be reformulated as the question of cosmic variance. The cosmic variance is found to be highest in the curvature parameter. We propose to use the observed variance of cosmological parameters to measure the growth factor. Conclusions: Cosmological backreaction and averaging are real effects that have been measured already for a long time, e.g. by the fluctuations of the matter density contrast averaged over spheres of a certain radius. Backreaction and averaging effects from scales in the linear regime, as considered in this work, are shown to be important for the precise measurement of cosmological parameters.

  16. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean–variance approach

    International Nuclear Information System (INIS)

    Different support instruments for renewable energy expose investors differently to market risks. This has implications on the attractiveness of investment. We use mean–variance portfolio analysis to identify the risk implications of two support instruments: feed-in tariffs and feed-in premiums. Using cash flow analysis, Monte Carlo simulations and mean–variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same attractiveness for investment, because they expose investors to less market risk. These risk implications should be considered when designing policy schemes. - Highlights: • Mean–variance portfolio approach to analyse risk implications of policy instruments. • We show that feed-in tariffs require lower support levels than feed-in premiums. • This systematic effect stems from the lower exposure of investors to market risk. • We created a stochastic model for an exemplary offshore wind park in West Denmark. • We quantify risk-return, Sharpe Ratios and differences in required support levels
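The core of the comparison can be sketched with a toy Monte Carlo simulation (all numbers below are illustrative assumptions, not values from the paper): under a fixed tariff the cash flow depends only on output, while under a premium the uncertain market price passes through to the investor.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
price = rng.normal(50.0, 10.0, n)       # assumed market price, EUR/MWh
output = rng.normal(1.0, 0.15, n)       # assumed normalized annual output

tariff_cf = 70.0 * output               # feed-in tariff: price risk removed
premium_cf = (price + 25.0) * output    # feed-in premium: price risk retained

def mean_risk(cf):
    """Mean cash flow and its standard deviation (the 'risk' axis of a
    mean-variance diagram)."""
    return float(cf.mean()), float(cf.std(ddof=1))

m_tariff, s_tariff = mean_risk(tariff_cf)
m_premium, s_premium = mean_risk(premium_cf)
```

With comparable expected cash flow, the premium scheme carries the extra price risk, which is the mechanism behind the paper's finding that a lower tariff level already matches the premium's attractiveness for investors.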

  17. A Monte Carlo Study of Seven Homogeneity of Variance Tests

    Directory of Open Access Journals (Sweden)

    Howard B. Lee

    2010-01-01

    Full Text Available Problem statement: The decision by SPSS (now PASW) to use the unmodified Levene test to test homogeneity of variance was questioned. It was compared to six other tests. In total, seven homogeneity of variance tests used in Analysis Of Variance (ANOVA) were compared on robustness and power using Monte Carlo studies. The homogeneity of variance tests were (1) Levene, (2) modified Levene, (3) Z-variance, (4) Overall-Woodward Modified Z-variance, (5) O’Brien, (6) Samiuddin Cube Root and (7) F-Max. Approach: Each test was subjected to Monte Carlo analysis through different shaped distributions: (1) normal, (2) platykurtic, (3) leptokurtic, (4) moderately skewed and (5) highly skewed. The Levene Test is the one used in all of the latest versions of SPSS. Results: The results from these studies showed that the Levene Test is neither the best nor worst in terms of robustness and power. However, the modified Levene Test showed very good robustness when compared to the other tests but lower power than other tests. The Samiuddin test is at its best in terms of robustness and power when the distribution is normal. The results of this study showed the strengths and weaknesses of the seven tests. Conclusion/Recommendations: No single test outperformed the others in terms of robustness and power. The authors recommend that kurtosis and skewness indices be presented in statistical computer program packages such as SPSS to guide the data analyst in choosing which test would provide the highest robustness and power.
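The first two tests in the comparison share one computation: a one-way ANOVA F statistic on absolute deviations from each group's center. A compact sketch (illustrative only; the published variants differ mainly in the choice of center and in trimming):

```python
import numpy as np

def levene_statistic(groups, center="mean"):
    """Levene's homogeneity-of-variance statistic: a one-way ANOVA F
    computed on absolute deviations from each group's center.
    center="median" gives the Brown-Forsythe ("modified Levene") variant."""
    loc = np.median if center == "median" else np.mean
    z = [np.abs(np.asarray(x, dtype=float) - loc(x)) for x in groups]
    k = len(z)
    N = sum(len(v) for v in z)
    grand = np.concatenate(z).mean()
    ssb = sum(len(v) * (v.mean() - grand) ** 2 for v in z)   # between groups
    ssw = sum(((v - v.mean()) ** 2).sum() for v in z)        # within groups
    return (ssb / (k - 1)) / (ssw / (N - k))

f_levene = levene_statistic([[1, 2, 3, 4, 5], [2, 4, 6, 8, 10]])
```

A large statistic indicates the groups' spreads differ; the median-centered variant trades some power for robustness under skewed distributions, matching the pattern the Monte Carlo study reports.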

  18. Genomic prediction of breeding values using previously estimated SNP variances

    NARCIS (Netherlands)

    Calus, M.P.L.; Schrooten, C.; Veerkamp, R.F.

    2014-01-01

    Background Genomic prediction requires estimation of variances of effects of single nucleotide polymorphisms (SNPs), which is computationally demanding, and uses these variances for prediction. We have developed models with separate estimation of SNP variances, which can be applied infrequently, and

  19. Beyond the GUM: variance-based sensitivity analysis in metrology

    Science.gov (United States)

    Lira, I.

    2016-07-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
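The linear-model claim can be checked in a few lines: for Y = a1·X1 + a2·X2 with independent inputs, the first-order variance-based sensitivity indices are exactly the normalized squared terms of the law of propagation of uncertainties (the coefficients and uncertainties below are illustrative):

```python
# Linear measurement model Y = a1*X1 + a2*X2, with X1 and X2 independent.
a1, a2 = 2.0, 0.5      # sensitivity coefficients (illustrative)
u1, u2 = 0.3, 1.0      # standard uncertainties of the inputs (illustrative)

# Law of propagation of uncertainties (GUM): squared combined uncertainty.
u_y_sq = (a1 * u1) ** 2 + (a2 * u2) ** 2

# First-order variance-based sensitivity indices. For a linear model with
# independent inputs these are just the normalized GUM terms, so, as the
# article argues, sensitivity analysis adds no new knowledge in this case.
S1 = (a1 * u1) ** 2 / u_y_sq
S2 = (a2 * u2) ** 2 / u_y_sq
```

The indices sum to one here; it is only for non-linear models (or dependent inputs) that the variance-based ranking departs from the propagation terms and becomes genuinely informative.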

  20. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions...

  1. ROBUST ESTIMATION OF VARIANCE COMPONENTS MODEL

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Classical least squares estimation consists of minimizing the sum of the squared residuals of observation. Many authors have produced more robust versions of this estimation by replacing the square by something else, such as the absolute value. These approaches have been generalized, and their robust estimations and influence functions of variance components have been presented. The results may have wide practical and theoretical value.

  2. LOCAL MEDIAN ESTIMATION OF VARIANCE FUNCTION

    Institute of Scientific and Technical Information of China (English)

    杨瑛

    2004-01-01

    This paper considers local median estimation in fixed design regression problems. The proposed method is employed to estimate the median function and the variance function of a heteroscedastic regression model. Strong convergence rates of the proposed estimators are obtained. Simulation results are given to show the performance of the proposed methods.

  3. Linear transformations of variance/covariance matrices

    NARCIS (Netherlands)

    Parois, P.J.A.; Lutz, M.

    2011-01-01

    Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance matrix.

  4. Lorenz Dominance and the Variance of Logarithms.

    OpenAIRE

    Ok, Efe A.; Foster, James

    1997-01-01

    The variance of logarithms is a widely used inequality measure which is well known to disagree with the Lorenz criterion. Up to now, the extent and likelihood of this inconsistency were thought to be vanishingly small. We find that this view is mistaken: the extent of the disagreement can be extremely large; the likelihood is far from negligible.

  5. Evaluation of the oscillatory interference model of grid cell firing through analysis and measured period variance of some biological oscillators.

    Science.gov (United States)

    Zilli, Eric A; Yoshida, Motoharu; Tahvildari, Babak; Giocomo, Lisa M; Hasselmo, Michael E

    2009-11-01

    Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators is expected to maintain a stable grid for approximately t = 5μ³/(4πσ)² seconds, where μ is the mean period of an oscillator in seconds and σ² its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit level effects which might reduce oscillator variability. Further implications for grid cell models are discussed.
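The stability criterion in the abstract is simple to apply to one's own oscillator measurements; a small sketch (the function name is hypothetical and the theta-band numbers purely illustrative):

```python
from math import pi

def grid_stability_time(mu, sigma_sq):
    """Expected time (in s) a pair of noisy oscillators maintains a
    stable grid: t = 5*mu**3 / (4*pi*sigma)**2, where mu is the mean
    oscillator period in s and sigma_sq its variance in s^2."""
    return 5.0 * mu ** 3 / ((4.0 * pi) ** 2 * sigma_sq)

# An 8 Hz theta oscillator (mu = 0.125 s) with a purely illustrative
# period standard deviation of 10 ms:
t_stable = grid_stability_time(0.125, 0.010 ** 2)
```

Halving the period variance doubles the expected stability time, which is why the abstract points to circuit-level effects that reduce oscillator variability as a possible rescue for these models.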

  6. Evaluation of the oscillatory interference model of grid cell firing through analysis and measured period variance of some biological oscillators.

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2009-11-01

    Full Text Available Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators is expected to maintain a stable grid for approximately t = 5μ³/(4πσ)² seconds, where μ is the mean period of an oscillator in seconds and σ² its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit level effects which might reduce oscillator variability. Further implications for grid cell models are discussed.

  7. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

  8. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with...

  9. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with addit...

  10. Longitudinal analysis of residual feed intake and BW in mink using random regression with heterogeneous residual variance.

    Science.gov (United States)

    Shirali, M; Nielsen, V H; Møller, S H; Jensen, J

    2015-10-01

    The aim of this study was to determine the genetic background of longitudinal residual feed intake (RFI) and BW gain in farmed mink using random regression methods considering heterogeneous residual variances. The individual BW was measured every 3 weeks from 63 to 210 days of age for 2139 male+female pairs of juvenile mink during the growing-furring period. Cumulative feed intake was calculated six times with 3-week intervals based on daily feed consumption between weighings from 105 to 210 days of age. Genetic parameters for RFI and BW gain in males and females were obtained using univariate random regression with Legendre polynomials containing an animal genetic effect and permanent environmental effect of litter along with heterogeneous residual variances. Heritability estimates for RFI increased with age from 0.18 (0.03, posterior standard deviation (PSD)) at 105 days of age to 0.49 (0.03, PSD) and 0.46 (0.03, PSD) at 210 days of age in male and female mink, respectively. The heritability estimates for BW gain increased with age and had moderate to high range for males (0.33 (0.02, PSD) to 0.84 (0.02, PSD)) and females (0.35 (0.03, PSD) to 0.85 (0.02, PSD)). RFI estimates during the growing period (105 to 126 days of age) showed high positive genetic correlations with the pelting RFI (210 days of age) in males (0.86 to 0.97) and females (0.92 to 0.98). However, phenotypic correlations were lower, from 0.47 to 0.76 in males and 0.61 to 0.75 in females. Furthermore, BW records in the growing period (63 to 126 days of age) had moderate (males: 0.39, females: 0.53) to high (males: 0.87, females: 0.94) genetic correlations with pelting BW (210 days of age). The results of the current study showed that RFI and BW in mink are highly heritable, especially at the late furring period, suggesting potential for large genetic gains for these traits. The genetic correlations suggested that substantial genetic gain can be obtained by only considering the RFI estimate and BW at pelting

  11. Heritabilities of ego strength (factor C), super ego strength (factor G), and self-sentiment (factor Q3) by multiple abstract variance analysis.

    Science.gov (United States)

    Cattell, R B; Schuerger, J M; Klein, T W

    1982-10-01

    Tested over 3,000 boys (identical and fraternal twins, ordinary sibs, general population) aged 12-18 on Ego Strength, Super Ego Strength, and Self Sentiment. The Multiple Abstract Variance Analysis (MAVA) method was used to obtain estimates of abstract (hereditary, environmental) variances and covariances that contribute to total variation in the three traits. Within-family heritabilities for these traits were about .30, .05, and .65. Between-family heritabilities were .60, .08, and .45. Within-family correlations of genetic and environmental deviations were trivial, unusually so among personality variables, but between-family values showed the usual high negative values, consistent with the law of coercion to the biosocial mean.

  12. Measurement and modeling of acid dissociation constants of tri-peptides containing Glu, Gly, and His using potentiometry and generalized multiplicative analysis of variance.

    Science.gov (United States)

    Khoury, Rima Raffoul; Sutton, Gordon J; Hibbert, D Brynn; Ebrahimi, Diako

    2013-02-28

    We report pK(a) values with measurement uncertainties for all labile protons of the 27 tri-peptides prepared from the amino acids glutamic acid (E), glycine (G) and histidine (H). Each tri-peptide (GGG, GGE, GGH, …, HHH) was subjected to alkali titration, and pK(a) values were calculated from triplicate potentiometric titration data using the HyperQuad 2008 software. A generalized multiplicative analysis of variance (GEMANOVA) of the pK(a) values for the most acidic proton gave an optimum model having two terms: an interaction between the end amino acids plus an isolated main effect of the central amino acid.

  13. The Theory of Variances in Equilibrium Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-14

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  14. The Theory of Variances in Equilibrium Reconstruction

    International Nuclear Information System (INIS)

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  15. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties, such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
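
    The propagation step can be sketched with Monte Carlo sampling; all numbers below (input power, band frequency, loss-factor spread) are invented assumptions, not the paper's measured bounds:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo propagation of damping-loss-factor uncertainty through a
# single-subsystem SEA power balance, E = P_in / (omega * eta).
p_in = 1.0                        # input power, W (invented)
omega = 2.0 * np.pi * 1000.0      # band centre frequency, rad/s (invented)
# Invented lognormal spread for the damping loss factor eta:
eta = rng.lognormal(mean=np.log(0.02), sigma=0.3, size=20_000)

energy = p_in / (omega * eta)     # subsystem energy for each damping draw
level_db = 10.0 * np.log10(energy)
print(level_db.mean(), level_db.std())
```

    The spread of `level_db` quantifies how damping uncertainty alone widens the prediction bounds, before any ensemble variance is added.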

  16. Fractional constant elasticity of variance model

    OpenAIRE

    Ngai Hang Chan; Chi Tim Ng

    2007-01-01

    This paper develops a European option pricing formula for fractional market models. Although there exist option pricing results for a fractional Black-Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. A European option pricing formula similar to that of the classical CEV model is obtained, and a volatility skew pattern is revealed.

  17. 42 CFR 456.525 - Request for renewal of variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Request for renewal of variance. 456.525 Section..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.525 Request for renewal of variance. (a) The agency must submit a request for renewal of...

  18. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    Science.gov (United States)

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.

  20. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  1. A univariate analysis of variance design for multiple-choice feeding-preference experiments: A hypothetical example with fruit-eating birds

    Science.gov (United States)

    Larrinaga, Asier R.

    2010-01-01

    I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design, for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays each containing a known weight of artificial fruits (red, blue, black, or green) are introduced into the cage, while four equivalent trays are left outside the cage, to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.

  2. Cultural variances in composition of biological and supernatural concepts of death: a content analysis of children's literature.

    Science.gov (United States)

    Lee, Ji Seong; Kim, Eun Young; Choi, Younyoung; Koo, Ja Hyouk

    2014-01-01

    Children's reasoning about the afterlife emerges naturally as a developmental regularity. Although a biological understanding of death increases in accordance with cognitive development, biological and supernatural explanations of death may coexist in a complementary manner, being deeply imbedded in cultural contexts. This study conducted a content analysis of 40 children's death-themed picture books in Western Europe and East Asia. It can be inferred that causality and non-functionality are highly integrated with the naturalistic and supernatural understanding of death in Western Europe, whereas the literature in East Asia seems to rely on naturalistic aspects of death and focuses on causal explanations. PMID:24738761

  4. Advanced methods of analysis variance on scenarios of nuclear prospective; Metodos avanzados de analisis de varianza en escenarios de prospectiva nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-07-01

    Traditional techniques of propagation of variance are not very reliable because the uncertainties involved can reach 100% in relative value; for this reason, less conventional methods are used, such as the Beta distribution, Fuzzy Logic and the Monte Carlo method.
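
    A toy illustration of why first-order (Taylor) propagation of variance becomes unreliable at ~100% relative uncertainty, compared with the Monte Carlo method (the example is invented, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(11)

# For y = x**2 with x ~ N(1, 1) (100% relative uncertainty), linearization
# predicts Var(y) = (dy/dx)^2 * Var(x) = 4, but the exact value is 6
# (E[x^4] - E[x^2]^2 = 10 - 4 for these moments of N(1, 1)).
x = rng.normal(1.0, 1.0, 1_000_000)
y = x ** 2

var_linear = (2.0 * 1.0) ** 2 * 1.0   # first-order propagation at x = 1
var_mc = y.var()                      # Monte Carlo estimate
print(var_linear, var_mc)  # 4.0 vs approximately 6.0
```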

  5. Age and Gender Differences Associated with Family Communication and Materialism among Young Urban Adult Consumers in Malaysia: A One-Way Analysis of Variance (ANOVA)

    Directory of Open Access Journals (Sweden)

    Eric V. Bindah

    2012-11-01

    The main purpose of this study is to examine differences in age and gender across the various types of family communication patterns that take place at home among young adult consumers. It also attempts to examine whether there are age and gender differences in the development of materialistic values in Malaysia. This paper briefly conceptualizes family communication processes based on the existing literature to illustrate the association between family communication patterns and materialism. The study takes place in Malaysia, a country in Southeast Asia with a multi-ethnic and multi-cultural society. Preliminary statistical procedures were employed to examine possible significant group differences in family communication and materialism based on age group and gender among Malaysian consumers. A one-way analysis of variance was utilised to determine significant differences in terms of age and gender with respect to responses on the various measures. Where there were significant differences, post hoc tests (Scheffé) were used to determine the particular groups which differed significantly within a significant overall one-way analysis of variance. The implications, significance and limitations of the study are discussed as concluding remarks.
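
    A one-way ANOVA of this kind can be sketched as follows, with invented scores (not the study's data), assuming `scipy` is available:

```python
from scipy.stats import f_oneway

# Hypothetical materialism scores for three age groups (invented data).
young = [3.1, 3.5, 3.8, 4.0, 3.6, 3.9]
middle = [2.8, 3.0, 2.9, 3.2, 3.1, 2.7]
older = [2.1, 2.4, 2.0, 2.3, 2.2, 2.5]

# One-way analysis of variance: tests whether the group means differ.
f_stat, p_value = f_oneway(young, middle, older)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

    A significant overall F would then be followed by a post hoc comparison (e.g. Scheffé) to locate which pairs of groups differ.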

  6. An alternative method for noise analysis using pixel variance as part of quality control procedures on digital mammography systems.

    NARCIS (Netherlands)

    Bouwman, R.; Young, K.; Lazzari, B.; Ravaglia, V.; Broeders, M.J.M.; Engen, R. van

    2009-01-01

    According to the European Guidelines for quality assured breast cancer screening and diagnosis, noise analysis is one of the measurements that needs to be performed as part of quality control procedures on digital mammography systems. However, the method recommended in the European Guidelines does n

  7. Identification of Analytical Factors Affecting Complex Proteomics Profiles Acquired in a Factorial Design Study with Analysis of Variance: Simultaneous Component Analysis.

    Science.gov (United States)

    Mitra, Vikram; Govorukhina, Natalia; Zwanenburg, Gooitzen; Hoefsloot, Huub; Westra, Inge; Smilde, Age; Reijmers, Theo; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer; Horvatovich, Péter

    2016-04-19

    Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances, which in turn influence the outcome of the associated statistical analysis and validation. It is therefore important to determine which factors influence the abundance of peptides in a complex proteomics experiment and to identify those peptides that are most influenced by these factors. In the current study we analyzed depleted human serum samples to evaluate experimental factors that may influence the resulting peptide profile, such as the residence time in the autosampler at 4 °C, stopping or not stopping the trypsin digestion with acid, the type of blood collection tube, different hemolysis levels, differences in clotting times, the number of freeze-thaw cycles, and different trypsin/protein ratios. To this end we used a two-level fractional factorial design of resolution IV (2^(7-3)). The design required analysis of 16 samples in which the main effects were not confounded by two-factor interactions. Data preprocessing using the Threshold Avoiding Proteomics Pipeline (Suits, F.; Hoekman, B.; Rosenling, T.; Bischoff, R.; Horvatovich, P. Anal. Chem. 2011, 83, 7786-7794, ref 1) produced a data matrix containing quantitative information on 2,559 peaks. The intensity of the peaks was log-transformed, and peaks with low t-test significance (p-value > 0.05) and a low absolute fold ratio (<2) between the two levels of each factor were removed. The remaining peaks were subjected to analysis of variance (ANOVA)-simultaneous component analysis (ASCA). Permutation tests were used to identify which of the preanalytical factors influenced the abundance of the measured peptides most significantly. The most important preanalytical factors affecting peptide intensity were (1) the hemolysis level

  8. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time.

  9. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with realized range-based variance - a statistic that replaces every squared return of realized variance with a normalized squared range. If the entire sample path of the process is available...... variance. Our findings suggest that the empirical path of quadratic variation is also estimated better with the realized range-based variance....
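
    An illustrative sketch of the statistic on a simulated driftless log-price path with known variance; the normalization 4·ln 2 used below is the expected squared range of a standard Brownian motion over a unit interval (the simulation setup is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# One "day" of a driftless log-price path: 390 intervals of 60 fine steps,
# with total daily variance 4e-4 (all numbers invented).
n_intervals, steps = 390, 60
true_var = 4e-4
step_sd = np.sqrt(true_var / (n_intervals * steps))
increments = rng.normal(0.0, step_sd, size=(n_intervals, steps))
path = np.cumsum(increments.ravel()).reshape(n_intervals, steps)

closes = path[:, -1]
opens = np.concatenate(([0.0], closes[:-1]))

# Realized variance: sum of squared close-to-close interval returns.
rv = np.sum((closes - opens) ** 2)

# Realized range-based variance: every squared return is replaced by the
# squared high-low range of the interval, normalized by 4*ln(2).
highs = np.maximum(path.max(axis=1), opens)
lows = np.minimum(path.min(axis=1), opens)
rrv = np.sum((highs - lows) ** 2) / (4.0 * np.log(2.0))
print(rv, rrv)
```

    Both estimates target the same quadratic variation; the range-based version uses more of each interval's path and is therefore more efficient.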

  10. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

    Energy Technology Data Exchange (ETDEWEB)

    Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); School of Advanced International Studies on Nuclear, Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: fisio2@fisiol.uniba.it; Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)

    2009-08-15

    It is known that R-R time series calculated from a recorded ECG are strongly correlated with sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis still dominates the analysis of such data, ignoring that the FFT is valid only under restrictions, such as linearity and stationarity, that are largely violated in R-R time series. To go beyond this approach, we introduce a new method, called CZF, based on variogram analysis. It arises from a profound link with Recurrence Quantification Analysis, a basic tool for the investigation of nonlinear and nonstationary time series. A relevant feature of the method is therefore that it may also be applied to nonlinear and nonstationary time series. In addition, the method enables analysis of the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF method gives very satisfactory results. In the present paper it has been applied to experimental data from normal subjects, patients with hypertension before and after therapy, and children under several different experimental conditions.

  11. Technical note: An improved estimate of uncertainty for source contribution from effective variance Chemical Mass Balance (EV-CMB) analysis

    Science.gov (United States)

    Shi, Guo-Liang; Zhou, Xiao-Yu; Feng, Yin-Chang; Tian, Ying-Ze; Liu, Gui-Rong; Zheng, Mei; Zhou, Yang; Zhang, Yuan-Hang

    2015-01-01

    The CMB (Chemical Mass Balance) 8.2 model released by the USEPA is a commonly used receptor model that can determine estimated source contributions and their uncertainties (called the default uncertainty). In this study, we propose an improved CMB uncertainty for the modeled contributions (called the EV-LS uncertainty) by adding the difference between the modeled and measured values of the ambient species concentrations to the default CMB uncertainty, based on the effective variance least squares (EV-LS) solution. This correction reconciles the uncertainty estimates for EV and OLS regression. To verify the formula for the EV-LS CMB uncertainty, the same ambient datasets were analyzed using the equation we developed for the EV-LS CMB uncertainty and a standard statistical package, SPSS 16.0. The same results were obtained both ways, indicating that the equation for the EV-LS CMB uncertainty proposed here is acceptable. In addition, four ambient datasets were studied with CMB 8.2, and the source contributions as well as the associated uncertainties were obtained accordingly.
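
    The effective-variance machinery reduces to weighted least squares with effective variances as weights; a minimal sketch with an invented 3-species, 2-source profile matrix (not one of the paper's datasets):

```python
import numpy as np

# Solve C = F s by weighted least squares, weighting each species by
# 1/sigma^2. All numbers below are invented for illustration.
F = np.array([[0.40, 0.05],
              [0.10, 0.50],
              [0.30, 0.20]])          # source profiles (species x sources)
s_true = np.array([2.0, 3.0])         # "true" source contributions
C = F @ s_true                        # "measured" ambient concentrations
sigma = np.array([0.05, 0.08, 0.06])  # effective species uncertainties

W = np.diag(1.0 / sigma ** 2)
# Weighted normal equations; the parameter covariance (F^T W F)^{-1}
# supplies the uncertainties of the modeled contributions.
cov = np.linalg.inv(F.T @ W @ F)
s_hat = cov @ F.T @ W @ C
unc = np.sqrt(np.diag(cov))
print(s_hat, unc)
```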

  12. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901... Suspension or Termination of Enrollment § 901.40 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the evidence adduced in support of the pleading,...

  13. 40 CFR 142.43 - Disposition of a variance request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Disposition of a variance request. 142... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.43 Disposition of a variance request. (a) If...

  14. 40 CFR 142.42 - Consideration of a variance request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Consideration of a variance request... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.42 Consideration of a variance request. (a)...

  15. 31 CFR 8.59 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE BUREAU OF ALCOHOL, TOBACCO AND FIREARMS Disciplinary Proceedings § 8.59 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading, the...

  16. A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model

    Institute of Scientific and Technical Information of China (English)

    Hou Ying-li; Liu Guo-xin; Jiang Chun-lan

    2015-01-01

    In this paper, we focus on a constant elasticity of variance (CEV) model and want to find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and a variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.

  17. 31 CFR 10.67 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE INTERNAL REVENUE SERVICE Rules Applicable to Disciplinary Proceedings § 10.67 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in pleadings and the...

  18. The Effect of Selection on the Phenotypic Variance

    OpenAIRE

    Shnol, E.E.; Kondrashov, A S

    1993-01-01

    We consider the within-generation changes of phenotypic variance caused by selection w(x) acting on a quantitative trait x. If before selection the trait has a Gaussian distribution, its variance decreases if the second derivative of the logarithm of w(x) is negative for all x, while if it is positive for all x, the variance increases.
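
    The stated result can be checked numerically. A minimal sketch, assuming a standard-normal trait and two invented fitness functions, one log-concave and one log-convex:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
gauss = np.exp(-0.5 * x ** 2)   # N(0, 1) density up to a constant

def variance_after_selection(w):
    # Distribution after selection is proportional to density * fitness.
    p = gauss * w
    p = p / (p.sum() * dx)       # normalize on the grid
    mean = (x * p).sum() * dx
    return ((x - mean) ** 2 * p).sum() * dx

v_down = variance_after_selection(np.exp(-0.5 * x ** 2))  # (ln w)'' = -1 < 0
v_up = variance_after_selection(np.exp(0.25 * x ** 2))    # (ln w)'' = +0.5 > 0
print(v_down, v_up)  # variance shrinks to 0.5, inflates to 2.0
```

    Both answers are analytic here (the Gaussian-times-exponential products are again Gaussian), which makes the grid computation easy to verify.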

  19. Semiparametric bounds of mean and variance for exotic options

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Finding semiparametric bounds for option prices is a widely studied pricing technique. We obtain closed-form semiparametric bounds of the mean and variance for the pay-off of two exotic (Collar and Gap) call options given mean and variance information on the underlying asset price. Mathematically, we extend the domination technique by quadratic functions to bound means and variances.

  20. Semiparametric bounds of mean and variance for exotic options

    Institute of Scientific and Technical Information of China (English)

    LIU GuoQing; LI V.Wenbo

    2009-01-01

    Finding semiparametric bounds for option prices is a widely studied pricing technique. We obtain closed-form semiparametric bounds of the mean and variance for the pay-off of two exotic (Collar and Gap) call options given mean and variance information on the underlying asset price. Mathematically, we extend the domination technique by quadratic functions to bound means and variances.

  1. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSMs) coupled with River Routing schemes (RRMs) are used in Global Climate Models (GCMs) to simulate the continental part of the water cycle. They are key components of GCMs, as they provide boundary conditions to atmospheric and oceanic models. However, at the global scale, errors arise mainly from simplified physics, atmospheric forcing and input parameters. In particular, the parameters used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of a global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data is satellite observation. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the temporal sensitivity of the RRM to its time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by the unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then identifies the parameters to which modeled water level and discharge are most sensitive over a hydrological year. The results show that local parameters directly impact water levels, while
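
    The variance decomposition described is the Sobol' approach. As a toy sketch, first-order indices can be estimated with a pick-freeze scheme; the linear stand-in model below is invented (it is not ISBA-TRIP), chosen so the exact answers S1 = 0.2 and S2 = 0.8 are known:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model y = x1 + 2*x2 with independent N(0, 1) inputs.
def model(x):
    return x[:, 0] + 2.0 * x[:, 1]

n = 50_000
a = rng.normal(size=(n, 2))
b = rng.normal(size=(n, 2))
y_a = model(a)

s = []
for i in range(2):
    ab = b.copy()
    ab[:, i] = a[:, i]        # freeze input i at the values from sample A
    y_ab = model(ab)
    # The covariance between y_a and y_ab isolates the variance due to input i.
    s_i = (np.mean(y_a * y_ab) - np.mean(y_a) * np.mean(y_ab)) / np.var(y_a)
    s.append(s_i)
print(s)  # close to [0.2, 0.8]
```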

  2. Empirical Performance of the Constant Elasticity Variance Option Pricing Model

    OpenAIRE

    Ren-Raw Chen; Cheng-Few Lee; Han-Hsing Lee

    2009-01-01

    In this essay, we empirically test the Constant-Elasticity-of-Variance (CEV) option pricing model by Cox (1975, 1996) and Cox and Ross (1976), and compare the performance of the CEV and alternative option pricing models, mainly the stochastic volatility model, in terms of European option pricing and cost-accuracy based analysis of their numerical procedures. In European-style option pricing, we have tested the empirical pricing performance of the CEV model and compared the results with those ...

  3. Variance analysis of the Monte Carlo perturbation source method in inhomogeneous linear particle transport problems. Derivation of formulae

    International Nuclear Information System (INIS)

    The perturbation source method is used in the Monte Carlo method for calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between subtracted estimates even in cases where other methods fail, such as geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered.

  4. A randomization-based perspective of analysis of variance: a test statistic robust to treatment effect heterogeneity

    OpenAIRE

    Ding, Peng; Dasgupta, Tirthankar

    2016-01-01

    Fisher randomization tests for Neyman's null hypothesis of no average treatment effects are considered in a finite population setting associated with completely randomized experiments with more than two treatments. The consequences of using the F statistic to conduct such a test are examined both theoretically and computationally, and it is argued that under treatment effect heterogeneity, use of the F statistic can severely inflate the type I error of the Fisher randomization test. An altern...
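
    A minimal sketch of such a randomization test, with invented outcome data (not the paper's example), assuming `scipy` for the F statistic:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Fisher randomization test of the sharp null of no treatment effect for a
# completely randomized experiment with three arms (invented data).
groups = [np.array([5.1, 4.8, 5.5, 5.0]),
          np.array([6.2, 6.5, 5.9, 6.4]),
          np.array([7.1, 6.8, 7.4, 7.0])]
labels = np.repeat([0, 1, 2], 4)
pooled = np.concatenate(groups)

observed_f = f_oneway(*groups).statistic

# Under the sharp null all potential outcomes are fixed, so re-randomizing
# the labels generates the null distribution of the F statistic.
n_draws = 2000
exceed = 0
for _ in range(n_draws):
    perm = rng.permutation(labels)
    f_draw = f_oneway(*(pooled[perm == g] for g in range(3))).statistic
    exceed += f_draw >= observed_f
p_value = (exceed + 1) / (n_draws + 1)
print(p_value)
```

    The paper's point is about the behaviour of this F-based test under treatment effect heterogeneity; the sketch only shows the test mechanics.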

  5. Dimension reduction in heterogeneous neural networks: Generalized Polynomial Chaos (gPC) and ANalysis-Of-VAriance (ANOVA)

    Science.gov (United States)

    Choi, M.; Bertalan, T.; Laing, C. R.; Kevrekidis, I. G.

    2016-09-01

    We propose, and illustrate via a neural network example, two different approaches to coarse-graining large heterogeneous networks. Both approaches are inspired from, and use tools developed in, methods for uncertainty quantification (UQ) in systems with multiple uncertain parameters - in our case, the parameters are heterogeneously distributed on the network nodes. The approach shows promise in accelerating large scale network simulations as well as coarse-grained fixed point, periodic solution computation and stability analysis. We also demonstrate that the approach can successfully deal with structural as well as intrinsic heterogeneities.

  6. Optimization of radio astronomical observations using Allan variance measurements

    CERN Document Server

    Schieder, R

    2001-01-01

    Stability tests based on the Allan variance method have become a standard procedure for the evaluation of the quality of radio-astronomical instrumentation. They are very simple and simulate the situation when detecting weak signals buried in large noise fluctuations. For the special conditions during observations, an outline of the basic properties of the Allan variance is given, and some guidelines on how to interpret the results of the measurements are presented. Based on a rather simple mathematical treatment, clear rules for observations in ``Position-Switch'', ``Beam-'' or ``Frequency-Switch'', ``On-The-Fly-'' and ``Raster-Mapping'' mode are derived. Also, a simple ``rule of thumb'' for an estimate of the optimum timing for the observations is found. The analysis leads to a conclusive strategy for planning radio-astronomical observations. Particularly for air- and space-borne observatories it is very important to determine how the extremely precious observing time can be used with maximum efficiency. The...
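
The quantity these stability tests are built on, the two-sample (Allan) variance over increasing averaging times, can be sketched as follows. This is a generic illustration, not code from the paper; the function name and the white-noise test signal are assumptions.

```python
import numpy as np

def allan_variance(x, dt, taus):
    """Two-sample (Allan) variance of signal x sampled at interval dt,
    evaluated at each averaging time in taus (multiples of dt)."""
    x = np.asarray(x, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau / dt))                 # samples per averaging bin
        n_bins = len(x) // m
        means = x[:n_bins * m].reshape(n_bins, m).mean(axis=1)
        # sigma_A^2(tau) = 1/2 * <(y_{k+1} - y_k)^2> over adjacent bin averages
        out.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(out)

# White noise of unit variance: sigma_A^2(tau) should fall off roughly as 1/tau.
rng = np.random.default_rng(0)
av = allan_variance(rng.normal(size=100_000), dt=1.0, taus=[1, 10, 100])
print(av)
```

For pure white noise the Allan variance keeps decreasing with the averaging time; a flattening or upturn at large tau signals drift, which is what sets the optimum integration time referred to in the abstract.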

  7. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1), 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)
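
A minimal sketch of the standard mean-variance bookkeeping that the proposed load-factor variant builds on: the mix's expected cost is w'mu and its risk is w'Cov w. The three technologies and all numbers below are invented for illustration, not the paper's Indiana data.

```python
import numpy as np

# Hypothetical three-technology mix (e.g. coal, gas, wind); the expected
# costs and the covariance matrix are made-up illustration values.
mu = np.array([0.06, 0.08, 0.05])            # expected generation cost per unit
cov = np.array([[0.010, 0.002, 0.000],
                [0.002, 0.020, 0.001],
                [0.000, 0.001, 0.030]])      # covariance of those costs

def portfolio_stats(w, mu, cov):
    """Mean w'mu and variance w'Cov w of a generation mix with weights w."""
    w = np.asarray(w, dtype=float)
    return float(w @ mu), float(w @ cov @ w)

mean_cost, cost_variance = portfolio_stats([0.5, 0.3, 0.2], mu, cov)
print(mean_cost, cost_variance)
```

The load-factor variant of the paper would, roughly speaking, repeat this computation per load type rather than once for aggregate demand.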

  8. Multivariate Analysis of Variance: Finding significant growth in mice with craniofacial dysmorphology caused by the Crouzon mutation

    OpenAIRE

    Thorup, Signe Strann; Ólafsdóttir, Hildur; Darvann, Tron Andre; Hermann, Nuno V.; Larsen, Per; Paulsen, Rasmus Reinhold; Perlyn, Chad A.; Morriss-Kay, Gillian M.; Kreiborg, Sven; Larsen, Rasmus

    2010-01-01

    Crouzon syndrome is characterized by growth disturbances caused by premature fusion of the cranial growth zones. A mouse model with mutation Fgfr2C342Y, equivalent to the most common Crouzon syndrome mutation (henceforth called the Crouzon mouse model), has a phenotype showing many parallels to the human counterpart. Quantifying growth in the Crouzon mouse model could test hypotheses of the relationship between craniosynostosis and dysmorphology, leading to better understanding of the causes ...

  9. Multivariate Analysis of Variance: Finding significant growth in mice with craniofacial dysmorphology caused by the Crouzon mutation

    DEFF Research Database (Denmark)

    Thorup, Signe Strann; Ólafsdóttir, Hildur; Darvann, Tron Andre;

    2010-01-01

    Crouzon syndrome is characterized by growth disturbances caused by premature fusion of the cranial growth zones. A mouse model with mutation Fgfr2C342Y, equivalent to the most common Crouzon syndrome mutation (henceforth called the Crouzon mouse model), has a phenotype showing many parallels...... to the human counterpart. Quantifying growth in the Crouzon mouse model could test hypotheses of the relationship between craniosynostosis and dysmorphology, leading to better understanding of the causes of Crouzon syndrome as well as providing knowledge relevant for surgery planning. In the present study we...

  10. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke J.; Obermaier, Harald; Joy, Kenneth

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  11. Genetic variance of tolerance and the toxicant threshold model.

    Science.gov (United States)

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
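
As a simplified sketch of the ANOVA route to broad-sense heritability (a balanced one-way version of the nested design in the abstract), the between-line variance component can be extracted from the mean squares as follows. The simulation parameters are illustrative assumptions, not the paper's Daphnia data.

```python
import numpy as np

def broad_sense_heritability(lines):
    """Balanced one-way ANOVA variance components: `lines` is a list of
    equal-sized arrays, one per isofemale line.
    Returns H^2 = V_line / (V_line + V_within)."""
    k, n = len(lines), len(lines[0])
    data = np.asarray(lines, dtype=float)
    line_means = data.mean(axis=1)
    ms_between = n * np.sum((line_means - data.mean()) ** 2) / (k - 1)
    ms_within = np.sum((data - line_means[:, None]) ** 2) / (k * (n - 1))
    v_line = max((ms_between - ms_within) / n, 0.0)  # between-line component
    return v_line / (v_line + ms_within)

# Simulate 20 lines x 30 individuals: line variance 1, residual variance 4,
# so the true broad-sense heritability is 1 / (1 + 4) = 0.2.
rng = np.random.default_rng(1)
sim = [rng.normal(loc=m, scale=2.0, size=30) for m in rng.normal(0.0, 1.0, 20)]
h2 = broad_sense_heritability(sim)
print(round(h2, 3))
```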

  12. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  13. Modeling variance structure of body shape traits of Lipizzan horses.

    Science.gov (United States)

    Kaps, M; Curik, I; Baban, M

    2010-09-01

    Heterogeneity of variance of growth traits over age is a common issue in estimating genetic parameters and is addressed in this study by selecting appropriate variance structure models for additive genetic and environmental variances. Modeling and partitioning those variances connected with analyzing small data sets were demonstrated on Lipizzan horses. The following traits were analyzed: withers height, chest girth, and cannon bone circumference. The measurements were taken at birth, and at approximately 6, 12, 24, and 36 mo of age of 660 Lipizzan horses born in Croatia between 1948 and 2000. The corresponding pedigree file consisted of 1,458 horses. Sex, age of dam, and stud-year-season interaction were considered fixed effects; additive genetic and permanent environment effects were defined as random. Linear adjustments of age at measuring were done within measuring groups. Maternal effects were included only for measurements taken at birth and at 6 mo. Additive genetic variance structures were modeled by using uniform structures or structures based on polynomial random regression. Environmental variance structures were modeled by using one of the following models: unstructured, exponential, Gaussian, or combinations of identity or diagonal with structures based on polynomial random regression. The parameters were estimated by using REML. Comparison and fits of the models were assessed by using Akaike and Bayesian information criteria, and by checking graphically the adequacy of the shape of the overall (phenotypic) and component (additive genetic and environmental) variance functions. The best overall fit was obtained from models with unstructured error variance. Compared with the model with uniform additive genetic variance, models with structures based on random regression only slightly improved the overall fit. Exponential and Gaussian models were generally not suitable because they do not adequately accommodate heterogeneity of variance. Using the unstructured ...

  14. Genetically controlled environmental variance for sternopleural bristles in Drosophila melanogaster - an experimental test of a heterogeneous variance model

    DEFF Research Database (Denmark)

    Sørensen, Anders Christian; Kristensen, Torsten Nygård; Loeschcke, Volker;

    2007-01-01

    The objective of this study was to test the hypothesis that the environmental variance of sternopleural bristle number in Drosophila melanogaster is partly under genetic control. We used data from 20 inbred lines and 10 control lines to test this hypothesis. Two models were used: a standard...... quantitative genetics model based on the infinitesimal model, and an extension of this model. In the extended model it is assumed that each individual has its own environmental variance and that this heterogeneity of variance has a genetic component. The heterogeneous variance model was favoured by the data......, indicating that the environmental variance is partly under genetic control. If this heterogeneous variance model also applies to livestock, it would be possible to select for animals with a higher uniformity of products across environmental regimes. Also for evolutionary biology the results are of interest...

  15. Fractal Fluctuations and Quantum-Like Chaos in the Brain by Analysis of Variability of Brain Waves: A New Method Based on a Fractal Variance Function and Random Matrix Theory

    CERN Document Server

    Conte, E; Federici, A; Zbilut, J P

    2007-01-01

    We developed a new method for analysis of fundamental brain waves as recorded by EEG. To this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given.

  16. Research on variance of subnets in network sampling

    Institute of Scientific and Technical Information of China (English)

    Qi Gao; Xiaoting Li; Feng Pan

    2014-01-01

    In recent research on network sampling, some sampling concepts have been misunderstood, and the variance of subnets has not been taken into account. We propose correct definitions of the sample and the sampling rate in network sampling, as well as a formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, a random network and a small-world network to explore the variance in network sampling. As the results prove, snowball sampling yields the largest variance of subnets but does well in capturing the network structure. The variances of networks sampled by the hub and random strategies are much smaller. The hub strategy performs well in reflecting the properties of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.

  17. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem, one after the other. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

  18. Productive Failure in Learning the Concept of Variance

    Science.gov (United States)

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  19. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long...... forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant....

  20. Weighted estimator with new weight values and analysis of its variance

    Institute of Scientific and Technical Information of China (English)

    尹留志; 余瑶

    2008-01-01

    The technique Zhang used to estimate variance is applied to the estimation of mean values. In particular, for the problem of samples with different sample sizes, a new unbiased estimator of the weighted mean is proposed. Simulations show that in most cases, especially when the sample size is large, the new estimator has smaller bias and smaller mean square error. A suitable estimator of the variance of this weighted mean is then given and shown to be robust. Finally, this estimator of the variance of the new modified weighted mean is compared with the results obtained by the bootstrap method.
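
As a sketch of the underlying idea (not the authors' exact estimator), a size-weighted pooled mean for samples of unequal sizes can be written as:

```python
import numpy as np

def weighted_mean(samples):
    """Pool samples of unequal sizes into one mean estimate, weighting each
    sample mean by its size -- the minimum-variance unbiased choice when all
    samples share a common underlying variance."""
    sizes = np.array([len(s) for s in samples], dtype=float)
    means = np.array([np.mean(s) for s in samples])
    return float(np.sum(sizes * means) / np.sum(sizes))

a = [1.0, 2.0, 3.0]           # n = 3, mean 2.0
b = [4.0]                     # n = 1, mean 4.0
print(weighted_mean([a, b]))  # (3*2.0 + 1*4.0) / 4 = 2.5
```

The paper's contribution concerns choosing better weights than these and estimating the variance of the resulting mean; the sketch only shows the baseline weighting by sample size.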

  1. Confidence Intervals of Variance Functions in Generalized Linear Model

    Institute of Scientific and Technical Information of China (English)

    Yong Zhou; Dao-ji Li

    2006-01-01

    In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs) when the designs are fixed points and random variables, respectively. Bias-corrected confidence bands are proposed for the (conditional) variance by local linear smoothers. Nonparametric techniques are developed for deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and it is shown that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation is performed in which the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.

  2. TWO-VARIANCE REGRESSION ANALYSIS METHOD

    Institute of Scientific and Technical Information of China (English)

    傅惠民; 吴琼

    2011-01-01

    The two-variance regression model is put forward, on the basis of which a two-variance regression analysis method is established; the regression equation and confidence limit curves with high confidence level and high reliability are given. The two-variance regression model involves a totally correlated random variable and an independent random variable, which are represented by a correlated variance and an independent variance, respectively. Traditional regression analysis is mainly suited to data fluctuating randomly around a single curve; the presented method extends it to data fluctuating randomly around several different curves, a situation that is very common in engineering. In performance curve testing, compared with the group test method, the presented method provides more information and higher precision while requiring fewer specimens, and it also solves the problem of reliability assessment with very small samples.

  3. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    The definition of the uniform linear generator is given, and some of the most commonly used tests for evaluating the uniformity and independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is considered. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for constructing other estimators of W with a reduced variance are introduced.

  4. Cost-variance analysis by DRGs; a technique for clinical budget analysis.

    Science.gov (United States)

    Voss, G B; Limpens, P G; Brans-Brabant, L J; van Ooij, A

    1997-02-01

    In this article it is shown how a cost accounting system based on DRGs can be valuable in determining changes in clinical practice and explaining alterations in expenditure patterns from one period to another. A cost-variance analysis is performed using data from the orthopedic department for the fiscal years 1993 and 1994. Differences between predicted and observed costs for medical care, such as diagnostic procedures, therapeutic procedures and nursing care, are decomposed into different components: changes in patient volume, case-mix differences, changes in resource use and variations in cost per procedure. Using a DRG cost accounting system proved to be a useful technique for clinical budget analysis. Results may stimulate discussions between hospital managers and medical professionals to explain cost variations, integrating medical and economic aspects of clinical health care. PMID:10165044

  5. Combining analysis of variance and three‐way factor analysis methods for studying additive and multiplicative effects in sensory panel data

    DEFF Research Database (Denmark)

    Romano, Rosaria; Næs, Tormod; Brockhoff, Per Bruun

    2015-01-01

    Data from descriptive sensory analysis are essentially three‐way data with assessors, samples and attributes as the three ways in the data set. Because of this, there are several ways that the data can be analysed. The paper focuses on the analysis of sensory characteristics of products while...

  6. MULTILEVEL MODELING OF THE PERFORMANCE VARIANCE

    Directory of Open Access Journals (Sweden)

    Alexandre Teixeira Dias

    2012-12-01

    Focusing on identifying the role played by industry in the relations between corporate strategic factors and performance, the hierarchical multilevel modeling method was adopted to measure and analyze the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective for the study of the proposed relations was confirmed, and the relative importance analysis points to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance when the latter is measured by return on assets, and shows that industry does not moderate the relations between corporate strategic factors and Tobin's Q. The main conclusions of the research are that an organization's choices in terms of corporate strategy have a considerable influence on, and play a key role in, determining the performance level, but that industry should be considered when analyzing performance variation, whether or not it moderates the relations between corporate strategic factors and performance.

  7. Variance of indoor radon concentration: Major influencing factors.

    Science.gov (United States)

    Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M

    2016-01-15

    Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling patterns, sample sizes and durations of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are the area of the territory, the sample size, the characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of the control factors. Application of the developed approach to the characterization of the radon exposure of the world population is discussed.

  8. Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model

    OpenAIRE

    Leunglung Chan; Eckhard Platen

    2015-01-01

    This paper studies volatility derivatives such as variance and volatility swaps, options on variance in the modified constant elasticity of variance model using the benchmark approach. The analytical expressions of pricing formulas for variance swaps are presented. In addition, the numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.

  9. Variance computations for functionals of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  10. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    Science.gov (United States)

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  11. Determining the frame of minimum Hubble expansion variance

    CERN Document Server

    McKay, James H

    2015-01-01

    We characterize a cosmic rest frame in which the variation of the spherically averaged Hubble expansion is most uniform, under local Lorentz boosts of the central observer. Using the COMPOSITE sample of 4534 galaxies, we identify a degenerate set of candidate minimum variance frames, which includes the rest frame of the Local Group (LG) of galaxies, but excludes the standard Cosmic Microwave Background (CMB) frame. Candidate rest frames defined by a boost from the LG frame close to the plane of the galaxy have a statistical likelihood similar to the LG frame. This may result from a lack of constraining data in the Zone of Avoidance in the COMPOSITE sample. We extend our analysis to the Cosmicflows-2 (CF2) sample of 8,162 galaxies. While the signature of a systematic boost offset between the CMB and LG frame averages is still detected, the spherically averaged expansion variance in all rest frames is significantly larger in the CF2 sample than would be reasonably expected. We trace this to an omission of any ...

  12. Allan Variance Analysis as Useful Tool to Determine Noise in Various Single-Molecule Setups

    CERN Document Server

    Czerwinski, Fabian; Selhuber-Unkel, Christine; Oddershede, Lene B; 10.1117/12.827975

    2009-01-01

    One limitation on the performance of optical traps is the noise inherently present in every setup. Therefore, it is the desire of most experimentalists to minimize and possibly eliminate noise from their optical trapping experiments. A step in this direction is to quantify the actual noise in the system and to evaluate how much each particular component contributes to the overall noise. For this purpose we present Allan variance analysis as a straightforward method. In particular, it allows for judging the impact of drift which gives rise to low-frequency noise, which is extremely difficult to pinpoint by other methods. We show how to determine the optimal sampling time for calibration, the optimal number of data points for a desired experiment, and we provide measurements of how much accuracy is gained by acquiring additional data points. Allan variances of both micrometer-sized spheres and asymmetric nanometer-sized rods are considered.

  13. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with realized range-based variance - a statistic that replaces every squared return of realized variance with a normalized squared range. If the entire sample path of the process is available......, and under a set of weak conditions, our statistic is consistent and has a mixed Gaussian limit, whose precision is five times greater than that of realized variance. In practice, of course, inference is drawn from discrete data and true ranges are unobserved, leading to downward bias. We solve this...... problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations that are used to construct the high-low. The methodology is applied to TAQ data and compared with realized...

  14. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance-a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is...... available, and under a set of weak conditions, our statistic is consistent and has a mixed Gaussian limit, whose precision is five times greater than that of the realized variance. In practice, of course, inference is drawn from discrete data and true ranges are unobserved, leading to downward bias. We...... solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations that are used to construct the high-low. The methodology is applied to TAQ data and compared...
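
A rough illustration of the two estimators on a simulated diffusion. The normalization 1/(4 ln 2) is the standard Parkinson factor for the squared range of a Brownian motion; the volatility and grid sizes are arbitrary choices, not the papers' TAQ setup.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.2                          # true (constant) volatility
n_intervals, steps = 78, 100         # 78 "intraday" intervals, 100 ticks each
dt = 1.0 / (n_intervals * steps)
incr = sigma * math.sqrt(dt) * rng.normal(size=n_intervals * steps)
path = np.concatenate([[0.0], np.cumsum(incr)])   # simulated log-price path

rv = 0.0    # realized variance: sum of squared interval returns
rrv = 0.0   # realized range-based variance: normalized squared high-lows
for i in range(n_intervals):
    seg = path[i * steps : (i + 1) * steps + 1]   # interval incl. endpoints
    rv += (seg[-1] - seg[0]) ** 2
    rrv += (seg.max() - seg.min()) ** 2 / (4.0 * math.log(2.0))

print(rv, rrv, sigma ** 2)   # both estimate the integrated variance 0.04
```

Because each high-low is observed on only 100 ticks, the observed range understates the true range and the range-based sum comes out somewhat below sigma squared, which is exactly the downward bias from discrete data that the abstract discusses and corrects.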

  15. A new definition of nonlinear statistics mean and variance

    OpenAIRE

    Chen, W.,

    1999-01-01

    This note presents a new definition of nonlinear statistics mean and variance to simplify the nonlinear statistics computations. These concepts aim to provide a theoretical explanation of a novel nonlinear weighted residual methodology presented recently by the present author.

  16. Occupancy, spatial variance, and the abundance of species

    OpenAIRE

    He, F.; Gaston, K J

    2003-01-01

    A notable and consistent ecological observation known for a long time is that spatial variance in the abundance of a species increases with its mean abundance and that this relationship typically conforms well to a simple power law (Taylor 1961). Indeed, such models can be used at a spectrum of spatial scales to describe spatial variance in the abundance of a single species at different times or in different regions and of different species across the same set of areas (Tayl...

  17. A characterization of Poisson-Gaussian families by generalized variance

    OpenAIRE

    Kokonendji, Célestin C.; Masmoudi, Afif

    2006-01-01

    We show that if the generalized variance of an infinitely divisible natural exponential family [math] in a [math] -dimensional linear space is of the form [math] , then there exists [math] in [math] such that [math] is a product of [math] univariate Poisson and ( [math] )-variate Gaussian families. In proving this fact, we use a suitable representation of the generalized variance as a Laplace transform and the result, due to Jörgens, Calabi and Pogorelov, that any strictly convex smooth funct...

  18. Analytic variance estimates of Swank and Fano factors

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov [US Food and Drug Administration, Silver Spring, Maryland 20993 (United States)

    2014-07-15

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
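
As a hedged illustration of the two metrics (not the authors' estimators), the Swank factor can be computed from the raw moments m1 and m2 of the detector output distribution as I = m1^2/(m0*m2), and the Fano factor as variance/mean. For Poisson-distributed outputs with mean mu, the Fano factor is close to 1 and the Swank factor equals mu/(mu+1). The toy detector model and sample size below are assumptions; the Poisson draw uses Knuth's algorithm.

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's Poisson generator (adequate for moderate lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def swank_factor(samples):
    """Swank factor I = m1^2 / (m0 * m2); m0 = 1 for a normalized distribution."""
    n = len(samples)
    m1 = sum(samples) / n
    m2 = sum(x * x for x in samples) / n
    return m1 * m1 / m2

def fano_factor(samples):
    """Fano factor F = variance / mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return var / mean

rng = random.Random(42)
# Toy detector: Poisson-distributed quantum counts with mean 100
outputs = [poisson_sample(rng, 100.0) for _ in range(10000)]
swank = swank_factor(outputs)   # Poisson: mu/(mu+1) = 100/101, about 0.990
fano = fano_factor(outputs)     # Poisson: about 1.0
```

The Monte Carlo variance of such moment-based estimates is exactly what the analytic estimators in the paper are designed to bound.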

  19. Image embedded coding with edge preservation based on local variance analysis for mobile applications

    Science.gov (United States)

    Luo, Gaoyong; Osypiw, David

    2006-02-01

    Transmitting digital images via mobile devices is often subject to bandwidth constraints that are incompatible with high data rates. Embedded coding for progressive image transmission has recently gained popularity in the image compression community. However, current progressive wavelet-based image coders tend to send information on the lowest-frequency wavelet coefficients first. At very low bit rates, compressed images are therefore dominated by low-frequency information, and the high-frequency components belonging to edges are lost, blurring signal features. This paper presents a new image coder employing edge preservation based on local variance analysis to improve the visual appearance and recognizability of compressed images. The analysis and compression are performed by dividing an image into blocks. A fast lifting wavelet transform is developed that is computationally efficient and minimizes boundary effects by changing the wavelet shape when filtering near the boundaries. A modified SPIHT algorithm, using more bits to encode the wavelet coefficients and transmitting fewer bits in the sorting pass, is implemented to reduce the correlation of the coefficients at scalable bit rates. Local variance estimation and edge strength measurement can effectively determine the best bit allocation for each block, preserving local features by assigning more bits to blocks containing more edges, with higher variance and edge strength. Experimental results demonstrate that the method performs well both visually and in terms of MSE and PSNR. The proposed image coder provides a potential solution, with parallel computation and low memory requirements, for mobile applications.
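
The block-wise bit-allocation idea can be sketched as follows. This is a simplified stand-in for the paper's scheme (which also weighs edge strength); the helper names and the proportional-allocation rule are assumptions: compute the local variance of each block and assign bits in proportion to it, so edge blocks receive more bits than flat ones.

```python
def block_variance(block):
    """Sample variance of a flattened pixel block."""
    n = len(block)
    mean = sum(block) / n
    return sum((x - mean) ** 2 for x in block) / n

def allocate_bits(blocks, total_bits):
    """Assign more bits to blocks with higher local variance (edge activity)."""
    variances = [block_variance(b) for b in blocks]
    total = sum(variances) or 1.0
    return [round(total_bits * v / total) for v in variances]

flat = [128] * 64              # smooth background block: variance 0
edge = [0] * 32 + [255] * 32   # block containing a sharp edge: high variance
bits = allocate_bits([flat, edge], total_bits=1000)
```

Under this rule the edge block absorbs the entire budget while the flat block, which can be reconstructed from its mean alone, gets none.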

  20. Wild bootstrap of the mean in the infinite variance case

    OpenAIRE

    Giuseppe Cavaliere; Iliyan Georgiev; Robert Taylor, A. M.

    2011-01-01

    It is well known that the standard i.i.d. bootstrap of the mean is inconsistent in a location model with infinite variance (alpha-stable) innovations. This occurs because the bootstrap distribution of a normalised sum of infinite variance random variables tends to a random distribution. Consistent bootstrap algorithms based on subsampling methods have been proposed but have the drawback that they deliver much wider confidence sets than those generated by the i.i.d. bootstrap owing to the fact ...

  1. Variance Analysis and Comparison in Computer-Aided Design

    Science.gov (United States)

    Ullrich, T.; Schiffer, T.; Schinko, C.; Fellner, D. W.

    2011-09-01

    The need to analyze and visualize differences of very similar objects arises in many research areas: mesh compression, scan alignment, nominal/actual value comparison, quality management, and surface reconstruction to name a few. In computer graphics, for example, differences of surfaces are used for analyzing mesh processing algorithms such as mesh compression. They are also used to validate reconstruction and fitting results of laser scanned surfaces. As laser scanning has become very important for the acquisition and preservation of artifacts, scanned representations are used for documentation as well as analysis of ancient objects. Detailed mesh comparisons can reveal the smallest changes and damages. These analysis and documentation tasks are needed not only in the context of cultural heritage but also in engineering and manufacturing. Differences of surfaces are analyzed to check the quality of production. Our contribution to this problem is a workflow that compares a reference (nominal) surface with an actual, laser-scanned data set. The reference surface is a procedural model whose accuracy and systematics describe the semantic properties of an object, whereas the laser-scanned object is a real-world data set without any additional semantic information.

  2. Data Warehouse Designs Achieving ROI with Market Basket Analysis and Time Variance

    CERN Document Server

    Silvers, Fon

    2011-01-01

    Market Basket Analysis (MBA) provides the ability to continually monitor the affinities of a business and can help an organization achieve a key competitive advantage. Time-variant data enable data warehouses to directly associate events in the past with the participants in each individual event. In the past, however, the use of these powerful tools in tandem led to performance degradation and resulted in unactionable and even damaging information. Data Warehouse Designs: Achieving ROI with Market Basket Analysis and Time Variance presents an innovative, soup-to-nuts approach that successfully

  3. Identification of Analytical Factors Affecting Complex Proteomics Profiles Acquired in a Factorial Design Study with Analysis of Variance: Simultaneous Component Analysis.

    Science.gov (United States)

    Mitra, Vikram; Govorukhina, Natalia; Zwanenburg, Gooitzen; Hoefsloot, Huub; Westra, Inge; Smilde, Age; Reijmers, Theo; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer; Horvatovich, Péter

    2016-04-19

    Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances, which in turn influence the outcome of the associated statistical analysis and validation. It is therefore important to determine which factors influence the abundance of peptides in a complex proteomics experiment and to identify those peptides that are most influenced by these factors. In the current study we analyzed depleted human serum samples to evaluate experimental factors that may influence the resulting peptide profile, such as the residence time in the autosampler at 4 °C, stopping or not stopping the trypsin digestion with acid, the type of blood collection tube, different hemolysis levels, differences in clotting times, the number of freeze-thaw cycles, and different trypsin/protein ratios. To this end we used a two-level fractional factorial design of resolution IV (2^(7-3)). The design required analysis of 16 samples in which the main effects were not confounded by two-factor interactions. Data preprocessing using the Threshold Avoiding Proteomics Pipeline (Suits, F.; Hoekman, B.; Rosenling, T.; Bischoff, R.; Horvatovich, P. Anal. Chem. 2011, 83, 7786-7794, ref 1) produced a data matrix containing quantitative information on 2,559 peaks. The intensity of the peaks was log-transformed, and peaks with low t-test significance (p-value > 0.05) and a low absolute fold ratio were filtered out; the remaining peaks were analyzed with analysis of variance (ANOVA)-simultaneous component analysis (ASCA). Permutation tests were used to identify which of the preanalytical factors influenced the abundance of the measured peptides most significantly. The most important preanalytical factors affecting peptide intensity were (1) the hemolysis level, (2) stopping trypsin digestion with acid, and (3) the trypsin/protein ratio. This provides guidelines for the

  4. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

    Science.gov (United States)

    Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing

    2016-06-01

    The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering. PMID:27093626

  6. Is the ANOVA F-Test Robust to Variance Heterogeneity When Sample Sizes are Equal?: An Investigation via a Coefficient of Variation

    Science.gov (United States)

    Rogan, Joanne C.; Keselman, H. J.

    1977-01-01

    The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
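
The experiment is easy to reproduce in miniature. The sketch below is illustrative: the sample sizes, the variance ratios, and the hard-coded critical value F(0.95; 2, 27) ~ 3.354 are assumptions, not the authors' design. It estimates the empirical Type I error rate of the one-way ANOVA F-test under equal and unequal group variances with equal sample sizes.

```python
import random

def f_statistic(groups):
    """One-way ANOVA F statistic for equal-sized groups."""
    k = len(groups)
    n = len(groups[0])
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    ss_between = n * sum((m - grand) ** 2 for m in means)
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (k * (n - 1)))

def type1_rate(sds, n_sims=2000, n=10, crit=3.354, seed=1):
    """Empirical Type I error of the F-test; all group means are equal (null true).
    crit ~ F(0.95; 2, 27) for k=3 groups of n=10."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        groups = [[rng.gauss(0.0, sd) for _ in range(n)] for sd in sds]
        if f_statistic(groups) > crit:
            rejections += 1
    return rejections / n_sims

equal = type1_rate([1.0, 1.0, 1.0], seed=1)     # homogeneous variances
unequal = type1_rate([1.0, 2.0, 4.0], seed=2)   # heterogeneous variances
```

Comparing the two rates against the nominal 0.05 level is exactly the kind of check the article performs at larger scale.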

  7. Variance and efficiency of contribution Monte Carlo

    International Nuclear Information System (INIS)

    The game of contribution is compared with the game of splitting in radiation transport using numerical results obtained by solving the set of coupled integral equations for first and second moments around the score. The splitting game is found superior. (author)

  8. Time Variance of the Suspension Nonlinearity

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.; Pedersen, Bo Rohde

    2008-01-01

    This paper investigates the changes in compliance that the driving signal can cause, including low-level, short-duration measurements of the resonance frequency as well as high-power, long-duration measurements of the nonlinearity of the suspension. It is found that at low levels the suspension softens...

  9. Estimating the Variance of Design Parameters

    Science.gov (United States)

    Hedberg, E. C.; Hedges, L. V.; Kuyper, A. M.

    2015-01-01

    Randomized experiments are generally considered to provide the strongest basis for inferences about cause and effect. Consequently, randomized field trials have been increasingly used to evaluate the effects of education interventions, products, and services. Populations of interest in education are often hierarchically structured (such as…

  10. Partitioning of genomic variance using biological pathways

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per;

    basis of SNP-data and trait phenotypes and can account for a much larger fraction of the heritable component. A disadvantage is that this “black-box” modelling approach conceals the biological mechanisms underlying the trait. We propose to open the “black-box” by building SNP-set genomic models that...

  11. Assessment of the genetic variance of late-onset Alzheimer's disease.

    Science.gov (United States)

    Ridge, Perry G; Hoyt, Kaitlyn B; Boehme, Kevin; Mukherjee, Shubhabrata; Crane, Paul K; Haines, Jonathan L; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Schellenberg, Gerard D; Kauwe, John S K

    2016-05-01

    Alzheimer's disease (AD) is a complex genetic disorder with no effective treatments. More than 20 common markers associated with AD have been identified. Recently, several rare variants have been identified in Amyloid Precursor Protein (APP), Triggering Receptor Expressed On Myeloid Cells 2 (TREM2) and Unc-5 Netrin Receptor C (UNC5C) that affect risk for AD. Despite the many successes, the genetic architecture of AD remains unsolved. We used Genome-wide Complex Trait Analysis to (1) estimate phenotypic variance explained by genetics; (2) calculate genetic variance explained by known AD single nucleotide polymorphisms (SNPs); and (3) identify the genomic locations of variation that explain the remaining unexplained genetic variance. In total, 53.24% of phenotypic variance is explained by genetics, but known AD SNPs only explain 30.62% of the genetic variance. Of the unexplained genetic variance, approximately 41% is explained by unknown SNPs in regions adjacent to known AD SNPs, and the remaining unexplained genetic variance lies outside these regions. PMID:27036079

  12. The positioning algorithm based on feature variance of billet character

    Science.gov (United States)

    Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang

    2015-12-01

    In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of billet characters. Using a largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scene. There are three rows of characters on each steel billet, so we determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and can then accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces running time. The algorithm can provide a better basis for character recognition.
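
The variance-based segmentation step resembles classic variance-criterion thresholding. As a hedged illustration (this is Otsu's method, which maximizes between-class variance, equivalently minimizing within-class variance; the paper's exact recursive multilevel variant is not reproduced here, and the toy pixel data are assumptions):

```python
def otsu_threshold(values, levels=256):
    """Pick the threshold maximizing between-class variance (Otsu's method).
    Class 0 is {v <= t}, class 1 is {v > t}."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = s0 = 0
    for t in range(levels):
        w0 += hist[t]            # weight of class 0
        if w0 == 0:
            continue
        w1 = total - w0          # weight of class 1
        if w1 == 0:
            break
        s0 += t * hist[t]
        m0 = s0 / w0             # class means
        m1 = (total_sum - s0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark background around 40-45, bright characters around 200-210
pixels = [40] * 500 + [45] * 300 + [200] * 150 + [210] * 50
t = otsu_threshold(pixels)
```

The chosen threshold falls between the background and character modes, separating character pixels from the scene.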

  13. Kalman filtering techniques for reducing variance of digital speckle displacement measurement noise

    Institute of Scientific and Technical Information of China (English)

    Donghui Li; Li Guo

    2006-01-01

    Target dynamics are assumed to be known in measuring digital speckle displacement. Use is made of a simple measurement equation, where measurement noise represents the effect of disturbances introduced in the measurement process. From these assumptions, a Kalman filter can be designed to reduce the variance of measurement noise. An optical analysis system was set up, with which object motion with constant displacement and constant velocity was measured to verify the validity of Kalman filtering techniques for reducing measurement noise variance.
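
A minimal constant-velocity Kalman filter (a generic textbook sketch, not the paper's design; the noise levels and trajectory are assumptions) shows the effect: filtered position estimates have a smaller mean squared error against the true trajectory than the raw noisy measurements.

```python
import random

def kalman_1d(measurements, dt=1.0, q=1e-5, r=4.0):
    """Constant-velocity Kalman filter for noisy scalar position measurements.
    q: process noise variance, r: measurement noise variance."""
    x = [measurements[0], 0.0]      # state: [position, velocity]
    P = [[r, 0.0], [0.0, 1.0]]      # state covariance
    out = []
    for z in measurements:
        # Predict: x' = F x, P' = F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with measurement z (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out

rng = random.Random(7)
truth = [0.5 * k for k in range(200)]                  # constant-velocity motion
meas = [p + rng.gauss(0.0, 2.0) for p in truth]        # noisy measurements, var 4
est = kalman_1d(meas, r=4.0)
mse_raw = sum((m - t) ** 2 for m, t in zip(meas, truth)) / len(truth)
mse_kf = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
```

Because the motion model matches the true constant-velocity dynamics, the filter's residual variance settles well below the raw measurement noise variance.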

  14. Variance squeezing and entanglement of the XX central spin model

    Energy Technology Data Exchange (ETDEWEB)

    El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2011-01-21

    In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.

  15. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

    In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1,...,n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  16. Calculation of Scale of Fluctuation and Variance Reduction Function

    Institute of Scientific and Technical Information of China (English)

    Yan Shuwang; Guo Linping

    2015-01-01

    The scale of fluctuation is one of the vital parameters for the application of random field theory to the reliability analysis of geotechnical engineering. In the present study, the fluctuation function method and the weighted curve fitting method are presented to make the calculation simpler and more accurate. The vertical scales of fluctuation of typical soil layers of Tianjin Port were calculated from a large body of geotechnical investigation data, which can serve as guidance for other projects in this area. Meanwhile, the influences of sample interval and type of soil index on the scale of fluctuation were analyzed, from which the principle for determining the scale of fluctuation when the sample interval changes was defined. The scale of fluctuation is a basic attribute reflecting the spatial variability of soil; therefore, the scales of fluctuation calculated from different soil indexes should be essentially the same. The non-correlation distance method was improved, and the principle of determining the variance reduction function was also discussed.
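
The two quantities in the title are linked by Vanmarcke's variance function: gamma(L) = (2/L^2) * integral_0^L (L - tau) * rho(tau) d tau gives the factor by which spatial averaging over a window of length L reduces the point variance. The sketch below (the exponential correlation model rho(tau) = exp(-2*tau/delta) and all names are assumptions, not the paper's soil data) evaluates it numerically; gamma(L) tends to 1 as L tends to 0 and behaves like delta/L for L much larger than the scale of fluctuation delta.

```python
import math

def variance_reduction(L, scale, n=10000):
    """Variance function gamma(L) = (2/L^2) * int_0^L (L - tau) rho(tau) dtau,
    evaluated by the midpoint rule for rho(tau) = exp(-2*tau/scale)."""
    if L == 0:
        return 1.0
    h = L / n
    s = 0.0
    for i in range(n):
        tau = (i + 0.5) * h
        s += (L - tau) * math.exp(-2.0 * tau / scale)
    return 2.0 * s * h / (L * L)

delta = 2.0  # assumed scale of fluctuation
g_small = variance_reduction(0.01, delta)   # short window: almost no reduction
g_large = variance_reduction(100.0, delta)  # long window: ~ delta / L
```

For this correlation model the exact large-L value is delta/L - delta^2/(2 L^2), so g_large is close to 0.0198.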

  17. An Efficient SDN Load Balancing Scheme Based on Variance Analysis for Massive Mobile Users

    Directory of Open Access Journals (Sweden)

    Hong Zhong

    2015-01-01

    In a traditional network, server load balancing is used to satisfy the demand for high data volumes. The technique requires large capital investment while offering poor scalability and flexibility, making it difficult to support the highly dynamic workload demands of massive numbers of mobile users. To solve these problems, this paper analyses the principles of software-defined networking (SDN) and presents a new probabilistic load-balancing method based on variance analysis. The method can be used to dynamically manage traffic flows for massive numbers of mobile users in SDN networks. The paper proposes a solution using OpenFlow virtual switching technology instead of traditional hardware switching technology. An SDN controller monitors the data traffic of each port by means of variance analysis and provides a probability-based selection algorithm to redirect traffic dynamically with the OpenFlow technology. Compared with existing load-balancing methods designed for traditional networks, this solution has lower cost, higher reliability, and greater scalability, satisfying the needs of mobile users.
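
The probability-based selection step can be sketched as weighted random choice: the controller forwards each new flow to a port with probability proportional to its spare capacity. This is a simplified stand-in for the paper's variance-based statistics; the utilization model and all names are assumptions.

```python
import random

def pick_server(loads, rng):
    """Pick a server index with probability proportional to spare capacity.
    loads: per-port utilizations in [0, 1], as observed by the controller."""
    spare = [max(1.0 - u, 0.0) for u in loads]
    total = sum(spare)
    if total == 0:
        return rng.randrange(len(loads))   # all saturated: fall back to uniform
    r = rng.random() * total
    acc = 0.0
    for i, s in enumerate(spare):
        acc += s
        if r <= acc:
            return i
    return len(spare) - 1

rng = random.Random(3)
loads = [0.9, 0.5, 0.1]          # utilizations of three ports
counts = [0, 0, 0]
for _ in range(10000):
    counts[pick_server(loads, rng)] += 1
```

Over many flows, traffic concentrates on the least-loaded port while still probing the others, which is the behavior a variance-monitoring controller relies on.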

  18. Unbiased Estimates of Variance Components with Bootstrap Procedures

    Science.gov (United States)

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…

  19. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...

  20. Heterogeneity of variances for carcass traits by percentage Brahman inheritance.

    Science.gov (United States)

    Crews, D H; Franke, D E

    1998-07-01

    Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance considered as a source of heterogeneity of variance. Genetic covariances estimated from the model accounting for heterogeneous variances resulted in genetic

  2. Variance in the reproductive success of dominant male mountain gorillas.

    Science.gov (United States)

    Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M

    2014-10-01

    Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups, even though infanticide is less likely when those tenures end in multimale groups than in one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species. PMID:24818867

  3. Predicting Risk Sensitivity in Humans and Lower Animals: Risk as Variance or Coefficient of Variation

    Science.gov (United States)

    Weber, Elke U.; Shafir, Sharoni; Blais, Ann-Renee

    2004-01-01

    This article examines the statistical determinants of risk preference. In a meta-analysis of animal risk preference (foraging birds and insects), the coefficient of variation (CV), a measure of risk per unit of return, predicts choices far better than outcome variance, the risk measure of normative models. In a meta-analysis of human risk…
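
The distinction between the two risk measures is easy to see numerically: a large-stakes gamble can have the larger variance while the small-stakes gamble has the larger risk per unit of return (CV). The gambles below are hypothetical two-outcome examples, not data from the article.

```python
def variance(outcomes):
    """Population variance of equally likely outcomes."""
    m = sum(outcomes) / len(outcomes)
    return sum((x - m) ** 2 for x in outcomes) / len(outcomes)

def coefficient_of_variation(outcomes):
    """CV = standard deviation / mean: risk per unit of return."""
    m = sum(outcomes) / len(outcomes)
    return variance(outcomes) ** 0.5 / m

small_stakes = [5, 15]      # mean 10, sd 5  -> CV 0.5
large_stakes = [80, 120]    # mean 100, sd 20 -> CV 0.2
```

Variance ranks the large-stakes gamble as riskier; the CV ranks the small-stakes gamble as riskier, which is the ordering the meta-analyses found more predictive of choice.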

  4. Stream sampling for variance-optimal estimation of subset sums

    OpenAIRE

    Cohen, Edith; Duffield, Nick; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

    From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size k that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present an efficient reservoir sampling scheme, VarOpt_k, that dominates all previous schemes in terms of estimation quality. VarOpt_k provides variance optimal unbiased estimation of subset sum...

  5. Recombining binomial tree for constant elasticity of variance process

    OpenAIRE

    Hi Jun Choe; Jeong Ho Chu; So Jeong Shin

    2014-01-01

    This paper develops a recombining binomial tree to price American put options when the underlying stock follows a constant elasticity of variance (CEV) process. The recombining nodes of the binomial tree are determined from a finite difference scheme to emulate the CEV process, and the tree has linear complexity. The asymptotic envelope of the tree boundary is also derived from the differential equation. Conducting numerical experiments, we confirm the convergence and accuracy of the pricing by ou...

  6. Explaining the Prevalence, Scaling and Variance of Urban Phenomena

    CERN Document Server

    Gomez-Lievano, Andres; Hausmann, Ricardo

    2016-01-01

    The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.

  7. Identifiability, stratification and minimum variance estimation of causal effects.

    Science.gov (United States)

    Tong, Xingwei; Zheng, Zhongguo; Geng, Zhi

    2005-10-15

    The weakest sufficient condition for the identifiability of causal effects is the weakly ignorable treatment assignment, which implies that potential responses are independent of treatment assignment in each fine subpopulation stratified by a covariate. In this paper, we expand the independence that holds in fine subpopulations to the case that the independence may also hold in several coarse subpopulations, each of which consists of several fine subpopulations and may have overlaps with other coarse subpopulations. We first show that the identifiability of causal effects occurs if and only if the coarse subpopulations partition the whole population. We then propose a principle, called the minimum variance principle, which says that the estimator possessing the minimum variance is preferred, for dealing with the stratification and estimation of the causal effects. Simulation results and a practical example demonstrate that this is a feasible and reasonable way to achieve our goals. PMID:16149123

  8. Variance computations for functional of absolute risk estimates

    OpenAIRE

    Pfeiffer, R. M.; E. Petracci

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function base...

  9. Convergence of Recursive Identification for ARMAX Process with Increasing Variances

    Institute of Scientific and Technical Information of China (English)

    JIN Ya; LUO Guiming

    2007-01-01

    The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model is a finite mixture with the ARMA component and external inputs. In this paper we focus on parameter estimation of the ARMAX model. Classical modeling methods are usually based on the assumption that the driving noise in the moving average (MA) part has bounded variances, while in the model considered here the variances of noise may increase by a power of log n. The plant parameters are identified by the recursive stochastic gradient algorithm. The diminishing excitation technique and some results of martingale difference theory are adopted in order to prove the convergence of the identification. Finally, some simulations are given to show the theoretical results.

  10. The return of the variance: intraspecific variability in community ecology.

    Science.gov (United States)

    Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie

    2012-04-01

    Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.
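    The core comparison can be sketched with the law of total variance, which splits total trait variance into a within-species and a between-species part. The snippet below (hypothetical trait values and species labels) illustrates the decomposition underlying the T-statistics, not the statistics themselves:

```python
import statistics

def within_species_share(trait_values_by_species):
    """Share of total trait variance found within species: the kind of
    intra- vs interspecific comparison the T-statistics formalize.
    By the law of total variance, total = mean(within) + between."""
    pooled = [v for vals in trait_values_by_species.values() for v in vals]
    total = statistics.pvariance(pooled)
    n = len(pooled)
    within = sum(len(vals) * statistics.pvariance(vals)
                 for vals in trait_values_by_species.values()) / n
    return within / total

# Hypothetical leaf-trait measurements for two species in one community:
share = within_species_share({"sp_A": [1.0, 3.0], "sp_B": [5.0, 7.0]})  # -> 0.2
```

Here 20% of the trait variance sits within species, so ignoring intraspecific variability would discard a fifth of the community's functional variation.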

  11. Constraining the local variance of H0 from directional analyses

    Science.gov (United States)

    Bengaly, C. A. P., Jr.

    2016-04-01

    We evaluate the local variance of the Hubble Constant H0 with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble Constant H0 from standard candles (H0 = 73.8±2.4 km s-1 Mpc-1) with that of the Planck's Cosmic Microwave Background data (H0 = 67.8±0.9 km s-1 Mpc-1). We obtain that H0 ranges from 68.9±0.5 km s-1 Mpc-1 to 71.2±0.7 km s-1 Mpc-1 through the celestial sphere (1σ uncertainty), implying a Hubble Constant maximal variance of δH0 = (2.30±0.86) km s-1 Mpc-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL) for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H0 determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.

  12. Avoiding Aliasing in Allan Variance: an Application to Fiber Link Data Analysis

    CERN Document Server

    Calosso, Claudio E; Micalizio, Salvatore

    2015-01-01

    Optical fiber links are among the best-performing tools for transferring ultrastable frequency reference signals. However, these signals are affected by phase noise up to bandwidths of several kilohertz, and a careful data processing strategy is required to properly estimate the uncertainty. This aspect is often overlooked, and a number of approaches have been proposed to deal with it implicitly. Here, we face this issue in terms of aliasing and show how typical tools of signal analysis can be adapted to the evaluation of optical fiber link performance. In this way, it is possible to use the Allan variance as an estimator of stability, and there is no need to introduce other estimators. The general rules we derive can be extended to all optical links. As an example, we apply this method to the experimental data we obtained on a 1284 km coherent optical link for frequency dissemination, which we realized in Italy.
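    As a reminder of the estimator at stake, a minimal non-overlapping Allan variance for fractional-frequency data fits in a few lines. This is the textbook definition only; it does not implement the anti-aliasing processing the paper develops:

```python
def allan_variance(y):
    """Non-overlapping Allan variance of fractional-frequency averages y_i
    at a single averaging time: AVAR = mean((y_{i+1} - y_i)^2) / 2."""
    diffs = [b - a for a, b in zip(y, y[1:])]
    return sum(d * d for d in diffs) / (2 * len(diffs))

# A constant frequency offset has zero Allan variance; frequency
# alternating between two values gives (delta^2) / 2:
assert allan_variance([5.0, 5.0, 5.0]) == 0.0
assert allan_variance([0.0, 1.0, 0.0, 1.0]) == 0.5
```

Aliasing enters when the y_i are formed by decimating phase data sampled at a higher rate, which is exactly the situation the paper addresses.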

  13. Avoiding Aliasing in Allan Variance: An Application to Fiber Link Data Analysis.

    Science.gov (United States)

    Calosso, Claudio E; Clivati, Cecilia; Micalizio, Salvatore

    2016-04-01

    Optical fiber links are among the best-performing tools for transferring ultrastable frequency reference signals. However, these signals are affected by phase noise up to bandwidths of several kilohertz, and a careful data processing strategy is required to properly estimate the uncertainty. This aspect is often overlooked, and a number of approaches have been proposed to deal with it implicitly. Here, we face this issue in terms of aliasing and show how typical tools of signal analysis can be adapted to the evaluation of optical fiber link performance. In this way, it is possible to use the Allan variance (AVAR) as an estimator of stability, and there is no need to introduce other estimators. The general rules we derive can be extended to all optical links. As an example, we apply this method to the experimental data we obtained on a 1284-km coherent optical link for frequency dissemination, which we realized in Italy. PMID:26800534

  14. Variance optimal sampling based estimation of subset sums

    CERN Document Server

    Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

    From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\\log k)$ time, which is optimal even on the word RAM.
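    The unbiasedness such schemes guarantee rests on inverse-probability (Horvitz-Thompson) estimation. The sketch below checks that property by brute-force enumeration for two items with toy inclusion probabilities; it illustrates the estimator, not the paper's variance-optimal sampling design itself:

```python
from itertools import product

def ht_estimate(sample, subset):
    """Horvitz-Thompson estimate of the total weight of `subset` from a
    sample of (item, weight, inclusion_probability) triples."""
    return sum(w / p for item, w, p in sample if item in subset)

# Unbiasedness check: enumerate all inclusion patterns for two items
# sampled independently (toy probabilities, not an optimized design).
weights = {"a": 3.0, "b": 5.0}
probs = {"a": 0.5, "b": 0.25}
expected = 0.0
for inc_a, inc_b in product([True, False], repeat=2):
    sample = [(i, weights[i], probs[i])
              for i, inc in [("a", inc_a), ("b", inc_b)] if inc]
    pattern_prob = ((probs["a"] if inc_a else 1 - probs["a"])
                    * (probs["b"] if inc_b else 1 - probs["b"]))
    expected += pattern_prob * ht_estimate(sample, {"a", "b"})
# expected equals the true subset sum 3.0 + 5.0 = 8.0
```

What distinguishes a variance-optimal scheme is not this estimator but how the inclusion probabilities and the fixed sample size k are chosen.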

  15. 29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Variance of the bond/escrow and sale-contract requirements... CORPORATION WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Variance of the Statutory Requirements § 4204.11 Variance of the bond/escrow and sale-contract requirements. (a)...

  16. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodeling techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot quantitatively measure the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even while the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than cross-validation because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, so it can be used as a stopping criterion for sequential sampling.

  17. 40 CFR 142.21 - State consideration of a variance or exemption request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State consideration of a variance or... State-Issued Variances and Exemptions § 142.21 State consideration of a variance or exemption request. A State with primary enforcement responsibility shall act on any variance or exemption request...

  18. Variance Risk Premia

    OpenAIRE

    Peter Carr; Liuren Wu

    2004-01-01

    We propose a direct and robust method for quantifying the variance risk premium on financial assets. We theoretically and numerically show that the risk-neutral expected value of the return variance, also known as the variance swap rate, is well approximated by the value of a particular portfolio of options. Ignoring the small approximation error, the difference between the realized variance and this synthetic variance swap rate quantifies the variance risk premium. Using a large options data...
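    The quantity being measured can be sketched directly (hypothetical returns and swap rate; annualization constants and the options-portfolio construction of the swap rate are omitted):

```python
def realized_variance(returns):
    """Realized variance over the swap's life: sum of squared log returns
    (annualization omitted for clarity)."""
    return sum(r * r for r in returns)

def variance_swap_payoff(returns, swap_rate):
    """Payoff per unit notional to the long side of a variance swap:
    realized variance minus the (synthetic) swap rate fixed at inception.
    A persistently negative payoff is the negative variance risk premium
    the paper documents."""
    return realized_variance(returns) - swap_rate

# Hypothetical daily returns and swap rate:
payoff = variance_swap_payoff([0.01, -0.02, 0.015], 0.001)
```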

  19. Identification and quantification of peptides and proteins secreted from prostate epithelial cells by unbiased liquid chromatography tandem mass spectrometry using goodness of fit and analysis of variance.

    Science.gov (United States)

    Florentinus, Angelica K; Bowden, Peter; Sardana, Girish; Diamandis, Eleftherios P; Marshall, John G

    2012-02-01

    The proteins secreted by prostate cancer cells (PC3(AR)6) were separated by strong anion exchange chromatography, digested with trypsin and analyzed by unbiased liquid chromatography tandem mass spectrometry with an ion trap. The spectra were matched to peptides within proteins using a goodness of fit algorithm that showed a low false positive rate. The parent ions for MS/MS were randomly and independently sampled from a log-normal population and therefore could be analyzed by ANOVA. Normal distribution analysis confirmed that the parent and fragment ion intensity distributions were sampled over 99.9% of their range that was above the background noise. Arranging the ion intensity data with the identified peptide and protein sequences in structured query language (SQL) permitted the quantification of ion intensity across treatments, proteins and peptides. The intensity of 101,905 fragment ions from 1421 peptide precursors of 583 peptides from 233 proteins separated over 11 sample treatments were computed together in one ANOVA model using the statistical analysis system (SAS) prior to Tukey-Kramer honestly significant difference (HSD) testing. Thus complex mixtures of proteins were identified and quantified with a high degree of confidence using an ion trap without isotopic labels, multivariate analysis or comparing chromatographic retention times.
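    The study's SAS and Tukey-Kramer pipeline is not reproduced here, but the F statistic at the heart of a one-way ANOVA is compact enough to sketch, treating each treatment as a group of (log-transformed) ion intensities; the data below are made up:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square over
    within-group mean square. `groups` is a list of observation lists
    (e.g. log ion intensities per sample treatment)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])  # -> 1.5
```

The study's model is far larger (treatments, proteins, and peptides fitted together), but each factor's test reduces to this same mean-square ratio.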

  20. A generalization of Talagrand's variance bound in terms of influences

    CERN Document Server

    Kiss, Demeter

    2010-01-01

    Consider a random variable of the form f(X_1,...,X_n), where f is a deterministic function, and where X_1,...,X_n are i.i.d random variables. For the case where X_1 has a Bernoulli distribution, Talagrand (1994) gave an upper bound for the variance of f in terms of the individual influences of the variables X_i for i=1,...,n. We generalize this result to the case where X_1 takes finitely many values.

  1. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D;

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental...

  2. Reduction of variance in measurements of average metabolite concentration in anatomically-defined brain regions

    Science.gov (United States)

    Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki

    2016-11-01

    Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.

  3. Hydraulic geometry of river cross sections; theory of minimum variance

    Science.gov (United States)

    Williams, Garnett P.

    1978-01-01

    This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
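    A hydraulic exponent is just the slope of a log-log fit, and continuity (Q = width × depth × velocity) forces the three exponents to sum to one. A sketch with synthetic power-law data follows; the exponent values are illustrative, not Williams' field results:

```python
import math

def hydraulic_exponent(discharge, variable):
    """Least-squares slope of log(variable) on log(discharge): the
    exponent in a power-law relation variable ~ a * Q**exponent."""
    xs = [math.log(q) for q in discharge]
    ys = [math.log(v) for v in variable]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic cross-section obeying width ~ Q^0.26 exactly, so the fitted
# width exponent recovers 0.26 (depth and velocity exponents would then
# have to sum to 0.74 by continuity):
q = [1.0, 10.0, 100.0]
b = hydraulic_exponent(q, [2.0 * x ** 0.26 for x in q])
```

Minimum-variance theory predicts which combination of such exponents a channel should adopt; the regression above is only how the observed exponents are extracted from field data.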

  4. Influence of genetic variance on sodium sensitivity of blood pressure.

    Science.gov (United States)

    Luft, F C; Miller, J Z; Weinberger, M H; Grim, C E; Daugherty, S A; Christian, J C

    1987-02-01

    To examine the effect of genetic variance on blood pressure, sodium homeostasis, and its regulatory determinants, we studied 37 pairs of monozygotic twins and 18 pairs of dizygotic twins under conditions of volume expansion and contraction. We found that, in addition to blood pressure and body size, sodium excretion in response to provocative maneuvers, glomerular filtration rate, the renin-angiotensin system, and the sympathetic nervous system are influenced by genetic variance. To elucidate the interaction of genetic factors and an environmental influence, namely, salt intake, we restricted dietary sodium in 44 families of twin children. In addition to a modest decrease in blood pressure, we found heterogeneous responses in blood pressure indicative of sodium sensitivity and resistance which were normally distributed. Strong parent-offspring resemblances were found in baseline blood pressures which persisted when adjustments were made for age and weight. Further, mother-offspring resemblances were observed in the change in blood pressure with sodium restriction. We conclude that the control of sodium homeostasis is heritable and that the change in blood pressure with sodium restriction is familial as well. These data speak to the interaction between the genetic susceptibility to hypertension and environmental influences which may result in its expression. PMID:3553721

  5. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas;

    2011-01-01

    The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most...... of these results are qualitative in the sense that the parameters of the limit distribution are expressed in terms of some limiting point process. In this paper we will be able to determine the parameters of the limiting stable distribution in terms of some tail characteristics of the underlying stationary...

  6. Variance reduction techniques in the simulation of Markov processes

    International Nuclear Information System (INIS)

    We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to perform the analytical calculation of r and so it is reasonable to estimate r by a simulation process. A consistent estimator r(n) of r is obtained with respect to a chain with a countable state space. Suitably modifying the estimator r(n) of r one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of finite state space
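    The abstract does not give the modified estimator, but the general idea of lowering the variance of a consistent simulation estimator without biasing it can be illustrated with a control-variate sketch; this is a standard variance reduction technique, not necessarily the modification used in the paper:

```python
import statistics

def control_variate_estimate(f_samples, g_samples, g_mean):
    """Control-variate modification of the plain sample-mean estimator
    of E[f]: subtract b * (mean(g) - E[g]) with the variance-minimizing
    coefficient b = Cov(f, g) / Var(g). The correction has mean zero,
    so consistency is preserved while variance shrinks whenever f and
    g are correlated."""
    mf = statistics.mean(f_samples)
    mg = statistics.mean(g_samples)
    cov = sum((f - mf) * (g - mg) for f, g in zip(f_samples, g_samples))
    var = sum((g - mg) ** 2 for g in g_samples)
    return mf - (cov / var) * (mg - g_mean)

# Degenerate check: when f == g, the estimator returns E[g] exactly.
estimate = control_variate_estimate([1.0, 2.0, 4.0], [1.0, 2.0, 4.0], 2.5)
```

In the Markov chain setting, g would be an auxiliary functional of the same simulated chain whose stationary mean is known in closed form.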

  7. Budgeting and controllable cost variances. The case of multiple diagnoses, multiple services, and multiple resources.

    Science.gov (United States)

    Broyles, R W; Lay, C M

    1982-12-01

    This paper examines an unfavorable cost variance in an institution which employs multiple resources to provide stay specific and ancillary services to patients presenting multiple diagnoses. It partitions the difference between actual and expected costs into components that are the responsibility of an identifiable individual or group of individuals. The analysis demonstrates that the components comprising an unfavorable cost variance are attributable to factor prices, the use of real resources, the mix of patients, and the composition of care provided by the institution. In addition, the interactive effects of these factors are also identified. PMID:7183731

  8. Employing components-of-variance to evaluate forensic breath test instruments.

    Science.gov (United States)

    Gullberg, Rod G

    2008-03-01

    The evaluation of breath alcohol instruments for forensic suitability generally includes the assessment of accuracy, precision, linearity, blood/breath comparisons, etc. Although relevant and important, these methods fail to evaluate other important analytical and biological components of measurement variability. An experimental design comparing different instruments measuring replicate breath samples from several subjects is presented here. Three volunteers provided n = 10 breath samples into each of six different instruments within an 18 minute time period. Two-way analysis of variance was employed, quantifying the between-instrument effect and the subject/instrument interaction. Variance contributions were also determined for the analytical and biological components. Significant between-instrument effects and subject/instrument interactions were observed. The biological component of total variance ranged from 56% to 98% among all subject/instrument combinations. Such a design can help quantify the influence of breath sampling parameters and optimize them to reduce total measurement variability and enhance overall forensic confidence.
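    A crude version of the variance-components idea, reduced to a one-way random-effects decomposition of replicate readings into analytical (within-subject) and biological (between-subject) parts, can be sketched with hypothetical data; the study itself fitted a two-way model with a subject/instrument interaction:

```python
import statistics

def variance_components(replicates_by_subject):
    """One-way random-effects decomposition of replicate readings into
    analytical (within-subject) and biological (between-subject)
    variance. Assumes equal replicate counts per subject."""
    n_rep = len(next(iter(replicates_by_subject.values())))
    analytical = statistics.mean(
        statistics.variance(r) for r in replicates_by_subject.values())
    subject_means = [statistics.mean(r)
                     for r in replicates_by_subject.values()]
    biological = max(
        statistics.variance(subject_means) - analytical / n_rep, 0.0)
    return analytical, biological

# Hypothetical duplicate readings for two subjects:
analytical, biological = variance_components(
    {"A": [1.0, 3.0], "B": [5.0, 7.0]})
```

With these toy numbers the biological component dominates (7.0 vs 2.0), mirroring the paper's finding that most of the total variance is biological rather than analytical.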

  9. Variance of the Quantum Dwell Time for a Nonrelativistic Particle

    Science.gov (United States)

    Hahne, Gerhard

    2012-01-01

    Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, . . ., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.

  10. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient in the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)
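    The classical statistic that the rank, sign, and wild-bootstrap variants refine can be sketched as follows; this is a simplified Lo-MacKinlay ratio without the bias corrections or standard errors used in formal tests:

```python
import statistics

def variance_ratio(returns, q):
    """Lo-MacKinlay style variance ratio: the variance of overlapping
    q-period returns relative to q times the one-period variance.
    Values near 1 are consistent with a random walk; below 1 suggests
    mean reversion, above 1 positive autocorrelation."""
    q_period = [sum(returns[i:i + q]) for i in range(len(returns) - q + 1)]
    return statistics.pvariance(q_period) / (q * statistics.pvariance(returns))

# Perfectly mean-reverting returns drive the ratio to 0:
vr = variance_ratio([0.01, -0.01] * 10, 2)  # -> 0.0
```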

  12. Cosmic variance of the spectral index from mode coupling

    Science.gov (United States)

    Bramante, Joseph; Kumar, Jason; Nelson, Elliot; Shandera, Sarah

    2013-11-01

    We demonstrate that local, scale-dependent non-Gaussianity can generate cosmic variance uncertainty in the observed spectral index of primordial curvature perturbations. In a universe much larger than our current Hubble volume, locally unobservable long wavelength modes can induce a scale-dependence in the power spectrum of typical subvolumes, so that the observed spectral index varies at a cosmologically significant level (|Δn_s| ~ O(0.04)). Similarly, we show that the observed bispectrum can have an induced scale dependence that varies about the global shape. If tensor modes are coupled to long wavelength modes of a second field, the locally observed tensor power and spectral index can also vary. All of these effects, which can be introduced in models where the observed non-Gaussianity is consistent with bounds from the Planck satellite, loosen the constraints that observations place on the parameters of theories of inflation with mode coupling. We suggest observational constraints that future measurements could aim for to close this window of cosmic variance uncertainty.

  13. Cosmic variance of the spectral index from mode coupling

    Energy Technology Data Exchange (ETDEWEB)

    Bramante, Joseph; Kumar, Jason [Department of Physics and Astronomy, University of Hawaii, 2505 Correa Rd., Honolulu HI (United States); Nelson, Elliot; Shandera, Sarah, E-mail: bramante@hawaii.edu, E-mail: jkumar@hawaii.edu, E-mail: eln121@psu.edu, E-mail: shandera@gravity.psu.edu [Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802 (United States)

    2013-11-01

    We demonstrate that local, scale-dependent non-Gaussianity can generate cosmic variance uncertainty in the observed spectral index of primordial curvature perturbations. In a universe much larger than our current Hubble volume, locally unobservable long wavelength modes can induce a scale-dependence in the power spectrum of typical subvolumes, so that the observed spectral index varies at a cosmologically significant level (|Δn{sub s}| ∼ O(0.04)). Similarly, we show that the observed bispectrum can have an induced scale dependence that varies about the global shape. If tensor modes are coupled to long wavelength modes of a second field, the locally observed tensor power and spectral index can also vary. All of these effects, which can be introduced in models where the observed non-Gaussianity is consistent with bounds from the Planck satellite, loosen the constraints that observations place on the parameters of theories of inflation with mode coupling. We suggest observational constraints that future measurements could aim for to close this window of cosmic variance uncertainty.

  14. Constraining the local variance of $H_0$ from directional analyses

    CERN Document Server

    Bengaly, C A P

    2016-01-01

    We evaluate the local variance of the Hubble Constant $H_0$ with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison procedure to test whether the bulk flow motion can reconcile the measurement of the Hubble Constant $H_0$ from standard candles ($H_0 = 73.8 \\pm 2.4 \\; \\mathrm{km \\; s}^{-1}\\; \\mathrm{Mpc}^{-1}$) with that of the Planck's Cosmic Microwave Background data ($67.8 \\pm 0.9 \\; \\mathrm{km \\; s}^{-1} \\mathrm{Mpc}^{-1}$). We obtain that $H_0$ ranges from $68.9 \\pm 0.5 \\; \\mathrm{km \\; s}^{-1} \\mathrm{Mpc}^{-1}$ to $71.2 \\pm 0.7 \\; \\mathrm{km \\; s}^{-1} \\mathrm{Mpc}^{-1}$ through the celestial sphere, with maximal dipolar anisotropy towards the $(l,b) = (315^{\\circ},27^{\\circ})$ direction. Interestingly, this result is in good agreement with both $H_0$ estimations, as well as the bulk flow direction reported in the literature. In addition, we assess the statistical significance of this variance with different prescriptions of Monte Carlo simulations, finding a goo...

  15. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    Science.gov (United States)

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  16. 78 FR 2986 - Northern Indiana Public Service Company; Notice of Application for Temporary Variance of License...

    Science.gov (United States)

    2013-01-15

    ... Variance of License Article 403 and Soliciting Comments, Motions to Intervene and Protests Take notice that... inspection: a. Application Type: Extension of temporary variance of license article 403. b. Project No: 12514... Commission to grant an extension of time to a temporary variance of license Article 403 that was granted...

  17. 77 FR 52711 - Appalachian Power; Notice of Temporary Variance of License and Soliciting Comments, Motions To...

    Science.gov (United States)

    2012-08-30

    ... Federal Energy Regulatory Commission Appalachian Power; Notice of Temporary Variance of License and...: Temporary Variance of License. b. Project No: 739-033. c. Date Filed: August 7, 2012. d. Applicant... filed. k. Description of Application: The licensee requests a temporary variance to allow for a...

  18. Simple Variance Swaps

    OpenAIRE

    Ian Martin

    2011-01-01

    The large asset price jumps that took place during 2008 and 2009 disrupted volatility derivatives markets and caused the single-name variance swap market to dry up completely. This paper defines and analyzes a simple variance swap, a relative of the variance swap that in several respects has more desirable properties. First, simple variance swaps are robust: they can be easily priced and hedged even if prices can jump. Second, simple variance swaps supply a more accurate measure of market-imp...

  19. [Spatial variance characters of urban synthesis pattern indices at different scales].

    Science.gov (United States)

    Yue, Wenze; Xu, Jianhua; Xu, Lihua; Tan, Wenqi; Mei, Anxin

    2005-11-01

    Scale holds the key to understanding pattern-process interactions and has become one of the cornerstone concepts in landscape ecology. Geographic Information System and remote sensing techniques provide effective tools to characterize spatial pattern and spatial heterogeneity at different scales. As an example, these techniques were applied to analyze the urban landscape diversity index, contagion index and fractal dimension on SPOT remote sensing images at four scales. This paper modeled the semivariogram of these three landscape indices at different scales. The results indicated that the spatial variance characters of the diversity index, contagion index and fractal dimension were similar across scales, all showing spatial dependence. Spatial dependence appeared at each scale; the smaller the scale, the stronger the spatial dependence, and as the scale was reduced, more details of the spatial variance were revealed. The contribution of spatial autocorrelation of these three indices to total spatial variance increased gradually, but when the scale was very small, spatial variance analysis would destroy the interior structure of the landscape system. The semivariogram models of the different landscape indices differed markedly at the same scale, indicating that these models were not comparable across scales. Based on these analyses and on the study of urban land-use structure, an extent of 1 km was the most suitable scale for studying the spatial variance of urban landscape pattern in Shanghai. The spatial variance of the landscape indices was scale-dependent, i.e., a function of scale: the results differed with the scale chosen, and thus the influence of scale cannot be neglected in landscape-ecology research. The changes of these three landscape indices displayed the regularity of urban spatial structure at different scales, i.e., they were complicated and showed no regularity at small scales, polycentric

  20. Application of Two-factor Variance Analysis Model in the Hall Measurement

    Institute of Scientific and Technical Information of China (English)

    罗明海; 韩亚萍; 张凯; 侯纪伟; 王金鑫; 高雪连

    2011-01-01

    In experiments measuring the Hall effect in semiconductor materials, the variation of the Hall voltage at different temperatures and different magnetic field strengths is studied; the measured Hall voltage always fluctuates. This article establishes a two-factor analysis of variance model to analyze the experimental data and to determine whether the different experimental conditions have a significant effect on the experimental results.
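As a sketch of the method described above, a two-factor analysis of variance without replication can be computed directly. The Hall-voltage values below are illustrative placeholders, not the paper's data:

```python
import numpy as np
from scipy.stats import f as f_dist

# Rows: temperature levels (factor A); columns: magnetic-field levels (factor B).
# Hypothetical Hall voltages in mV, one observation per cell.
V = np.array([[1.02, 1.10, 1.19],
              [1.05, 1.14, 1.24],
              [1.09, 1.18, 1.30],
              [1.11, 1.23, 1.36]])

a, b = V.shape
grand = V.mean()
ss_A = b * ((V.mean(axis=1) - grand) ** 2).sum()   # between-temperature sum of squares
ss_B = a * ((V.mean(axis=0) - grand) ** 2).sum()   # between-field sum of squares
ss_T = ((V - grand) ** 2).sum()
ss_E = ss_T - ss_A - ss_B                          # residual (no replication)

df_A, df_B, df_E = a - 1, b - 1, (a - 1) * (b - 1)
F_A = (ss_A / df_A) / (ss_E / df_E)
F_B = (ss_B / df_B) / (ss_E / df_E)
p_A = f_dist.sf(F_A, df_A, df_E)                   # p-value for factor A
p_B = f_dist.sf(F_B, df_B, df_E)                   # p-value for factor B
print(F_A, p_A, F_B, p_B)
```

Small p-values indicate that a factor has a significant effect on the Hall voltage, which is the decision the abstract describes.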

  1. 42 CFR 456.524 - Notification of Administrator's action and duration of variance.

    Science.gov (United States)

    2010-10-01

    ... of variance. 456.524 Section 456.524 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.524 Notification of Administrator's action and duration...

  2. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
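The global minimum-variance weights used in such a portfolio have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch, with an illustrative covariance matrix rather than the paper's NAFTA estimates:

```python
import numpy as np

# Illustrative covariance matrix of three market returns (not the paper's data).
Sigma = np.array([[0.040, 0.012, 0.010],
                  [0.012, 0.090, 0.020],
                  [0.010, 0.020, 0.060]])

ones = np.ones(len(Sigma))
w = np.linalg.solve(Sigma, ones)   # Sigma^{-1} 1
w /= w.sum()                       # normalise so the weights sum to one
port_var = w @ Sigma @ w           # variance of the minimum-variance portfolio
```

In a time-varying version, Σ would be re-estimated each period (e.g., on a rolling window) and the weights recomputed.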

  3. Estimation of population variance in contributon Monte Carlo

    International Nuclear Information System (INIS)

    Based on the theory of contributons, a new Monte Carlo method known as the contributon Monte Carlo method has recently been developed. The method has found applications in several practical shielding problems. The authors analyze theoretically the variance and efficiency of the new method, by taking moments around the score. In order to compare the contributon game with a game of simple geometrical splitting and also to get the optimal placement of the contributon volume, the moments equations were solved numerically for a one-dimensional, one-group problem using a 10-mfp-thick homogeneous slab. It is found that the optimal placement of the contributon volume is adjacent to the detector; even at its most optimal the contributon Monte Carlo is less efficient than geometrical splitting

  4. Local orbitals by minimizing powers of the orbital variance

    DEFF Research Database (Denmark)

    Jansik, Branislav; Høst, Stinne; Kristensen, Kasper;

    2011-01-01

    It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function is obtained. For increasing power exponents, the penalty for delocalized orbitals is increased and smaller maximum orbital spreads are encountered. Calculations on superbenzene, C60, and a fragment of the titin protein show that for a power exponent equal to one, delocalized outlier orbitals may be encountered. These disappear when the exponent is larger than one. For a small penalty, the occupied orbitals are more local than the virtual ones. When the penalty is increased, the locality of the occupied and virtual orbitals becomes similar. In fact, when increasing the cardinal number for Dunning...

  5. A comparison of variance reduction techniques for radar simulation

    Science.gov (United States)

    Divito, A.; Galati, G.; Iovino, D.

    Importance sampling, the extreme value technique (EVT), and its generalization (G-EVT) were compared with respect to reducing the variance of radar simulation estimates. Importance sampling has the greater potential for including a priori information in the simulation experiment, and hence for reducing estimation errors; this feature is paid for by a loss of generality in the simulation procedure. The EVT technique is valid only when a probability tail is to be estimated (false-alarm problems) and requires, as the only a priori information, that the considered variate belong to the exponential class. The G-EVT, which introduces a shape parameter to be estimated (when unknown), allows smaller estimation errors to be attained than EVT. The G-EVT and, to a greater extent, the EVT lead to a straightforward and general simulation procedure for probability-tail estimation.
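As a generic illustration of the variance reduction importance sampling provides for tail probabilities, here is a toy Gaussian false-alarm example (not the paper's radar model): a crude Monte Carlo estimate of P(Z > 4) is compared with one drawn from a proposal shifted into the tail.

```python
import numpy as np

rng = np.random.default_rng(0)
n, thr = 200_000, 4.0                    # true P(Z > 4) is about 3.167e-5

# Crude Monte Carlo: almost every sample misses the tail, so the estimate is noisy.
z = rng.standard_normal(n)
p_crude = np.mean(z > thr)

# Importance sampling: propose from N(thr, 1), shifted into the tail.
# Likelihood ratio: phi(y) / phi(y - thr) = exp(thr**2 / 2 - thr * y).
y = rng.normal(thr, 1.0, n)
w = np.exp(thr**2 / 2 - thr * y)
p_is = np.mean((y > thr) * w)
print(p_crude, p_is)
```

The importance-sampling estimate concentrates all samples where the rare event lives, which is why its relative error is orders of magnitude smaller for the same sample budget.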

  6. Analysis of Efficiency Variances of Thermal Power Industry among China's Provinces with Undesirable Outputs Considered

    Institute of Scientific and Technical Information of China (English)

    曲茜茜; 解百臣; 殷可欣

    2012-01-01

    As the dominant generation form of China's power industry, thermal power generates undesirable outputs such as carbon emissions, sulfur emissions and auxiliary power by using fossil fuels and labor. To coordinate with the global trend of sustainable development of economy and environment, this paper has analyzed the efficiency variances of the thermal power industry among China's thirty provinces from 2005 to 2009 with the SBM-DEA (Slack-Based Measure Data Envelopment Analysis) model, which considers undesirable outputs. Giving more information on slacks and the production frontier, this approach can support further analysis of efficiency variances and evaluate the impacts of the thermal power industry on regional sustainable development more objectively. In this paper, first, combining with national policies and power structures, we analyzed the reasons for efficiency variances as well as the impacts of resource endowment on the thermal power industry; second, we sorted the provinces based on SBM efficiency of 2008 and discussed the dual prices of inputs and outputs; finally, under the assumption of sufficient power supply, we studied the critical factors that can improve the efficiency of the thermal power industry and help to coordinate the development of resources and environment. Several valuable conclusions are as follows: 1) With the consideration of undesirable outputs, the evaluation results could better satisfy the needs of harmonious and sustainable development of resources and environment; 2) Policy reforms such as "establish high-power stations while dismantling small plants" and "mine-mouth power plants" influence the efficiency of power systems, and some have achieved remarkable effects; 3) Resource endowment determines power structures to a certain extent, which influences the efficiency of power systems as well as economic development. In short, the efficiency variances of the thermal power industry come from different aspects. To achieve the purpose of coordinating

  7. 31 CFR 15.737-16 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... POST EMPLOYMENT CONFLICT OF INTEREST Administrative Enforcement Proceedings § 15.737-16 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the...

  8. 29 CFR 1905.6 - Public notice of a granted variance, limitation, variation, tolerance, or exemption.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Public notice of a granted variance, limitation, variation... SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS... General § 1905.6 Public notice of a granted variance, limitation, variation, tolerance, or...

  9. MEAN SQUARED ERRORS OF BOOTSTRAP VARIANCE ESTIMATORS FOR U-STATISTICS

    OpenAIRE

    Mizuno, Masayuki; Maesono, Yoshihiko

    2011-01-01

    In this paper, we obtain an asymptotic representation of the bootstrap variance estimator for a class of U-statistics. Using this representation of the estimator, we obtain the mean squared error of the variance estimator up to the order n^. We also compare the bootstrap and jackknife variance estimators theoretically.

  10. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    Monika eFleischhauer

    2013-09-01

    Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign, biases that might result, for example, from the IAT’s stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis. However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis, a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  11. Preconditioning of Nonlinear Mixed Effects Models for Stabilisation of Variance-Covariance Matrix Computations.

    Science.gov (United States)

    Aoki, Yasunori; Nordgren, Rikard; Hooker, Andrew C

    2016-03-01

    As the importance of pharmacometric analysis increases, more and more complex mathematical models are introduced and computational error resulting from computational instability starts to become a bottleneck in the analysis. We propose a preconditioning method for non-linear mixed effects models used in pharmacometric analyses to stabilise the computation of the variance-covariance matrix. Roughly speaking, the method reparameterises the model with a linear combination of the original model parameters so that the Hessian matrix of the likelihood of the reparameterised model becomes close to an identity matrix. This approach will reduce the influence of computational error, for example rounding error, to the final computational result. We present numerical experiments demonstrating that the stabilisation of the computation using the proposed method can recover failed variance-covariance matrix computations, and reveal non-identifiability of the model parameters.
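The core idea, reparameterising with a factor of the Hessian so that the transformed Hessian is close to the identity, can be sketched with a toy ill-conditioned quadratic (illustrative numbers, not a pharmacometric model):

```python
import numpy as np

# Ill-conditioned Hessian of a (toy) log-likelihood at the optimum.
H = np.array([[1.0e4, 5.0],
              [5.0,   1.0e-2]])

L = np.linalg.cholesky(H)          # H = L @ L.T (lower-triangular L)
# Reparameterise theta = inv(L).T @ phi; the Hessian in phi is inv(L) @ H @ inv(L).T = I.
H_phi = np.linalg.solve(L, np.linalg.solve(L, H).T).T

print(np.linalg.cond(H), np.linalg.cond(H_phi))
```

In practice the exact Hessian is unknown and only an error-contaminated estimate is available, so the transformed Hessian is merely close to the identity; but even an approximate preconditioner drastically reduces the condition number and hence the influence of rounding error on the variance-covariance computation.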

  12. Estimating the Variance of the K-Step Ahead Predictor for Time-Series

    OpenAIRE

    Tjärnström, Fredrik

    1999-01-01

    This paper considers the problem of estimating the variance of a linear k-step ahead predictor for time series. (The extension to systems including deterministic inputs is straightforward.) We compare the theoretical results with the empirically calculated variance on real data, and discuss the quality of the achieved variance estimate.

  13. Asymptotic accuracy of the jackknife variance estimator for certain smooth statistics

    OpenAIRE

    Gottlieb, Alex D

    2001-01-01

    We show that the jackknife variance estimator $v_{jack}$ and the infinitesimal jackknife variance estimator are asymptotically equivalent if the functional of interest is a smooth function of the mean or a smooth trimmed L-statistic. We calculate the asymptotic variance of $v_{jack}$ for these functionals.
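For a smooth function of the mean, the jackknife variance estimator can be sketched as follows; the statistic exp(x̄) is an arbitrary illustrative choice, and for such statistics the jackknife closely tracks the delta-method variance:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
stat = lambda s: np.exp(s.mean())   # a smooth function of the mean

n = x.size
# Leave-one-out recomputations of the statistic.
loo = np.array([stat(np.delete(x, i)) for i in range(n)])
v_jack = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

# Delta-method benchmark: Var ≈ (g'(x̄))^2 * s^2 / n with g(m) = exp(m).
v_delta = np.exp(2 * x.mean()) * x.var(ddof=1) / n
print(v_jack, v_delta)
```

The (n-1)/n inflation factor is what distinguishes the jackknife from a naive sample variance of the leave-one-out values.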

  14. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    M. Cosemans

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk

  15. Estimation of genetic variation in residual variance in female and male broiler chickens

    NARCIS (Netherlands)

    Mulder, H.A.; Hill, W.G.; Vereijken, A.; Veerkamp, R.F.

    2009-01-01

    In breeding programs, robustness of animals and uniformity of end product can be improved by exploiting genetic variation in residual variance. Residual variance can be defined as environmental variance after accounting for all identifiable effects. The aims of this study were to estimate genetic va

  16. A Bound on the Variance of the Waiting Time in a Queueing System

    CERN Document Server

    Eschenfeldt, Patrick; Pippenger, Nicholas

    2011-01-01

    Kingman has shown, under very weak conditions on the interarrival- and service-time distributions, that First-Come-First-Served minimizes the variance of the waiting time among possible service disciplines. We show, under the same conditions, that Last-Come-First-Served maximizes the variance of the waiting time, thereby giving an upper bound on the variance among all disciplines.
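This ordering can be checked numerically with a toy M/M/1 simulation (illustrative parameters; the result itself needs only weak distributional conditions). The same arrival and service streams are replayed under both non-preemptive disciplines:

```python
import random
from statistics import pvariance

def waits(discipline, n=20000, lam=0.7, mu=1.0, seed=42):
    """Non-preemptive single-server queue; returns the waiting times of all customers."""
    rng = random.Random(seed)
    arr, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(lam)          # Poisson arrivals
        arr.append(t)
    svc = [rng.expovariate(mu) for _ in range(n)]  # exponential service times
    out, queue = [], []
    next_arr, busy_until, in_service, clock = 0, 0.0, False, 0.0
    while len(out) < n:
        next_a = arr[next_arr] if next_arr < n else float("inf")
        if in_service and busy_until <= next_a:
            clock, in_service = busy_until, False      # service completion
        elif next_arr < n:
            clock = next_a
            queue.append(next_arr)                     # arrival joins the queue
            next_arr += 1
        if not in_service and queue:
            i = queue.pop(0) if discipline == "FCFS" else queue.pop()
            out.append(clock - arr[i])                 # waiting time before service
            busy_until, in_service = clock + svc[i], True
    return out

w_fcfs, w_lcfs = waits("FCFS"), waits("LCFS")
print(pvariance(w_fcfs), pvariance(w_lcfs))
```

With identical input streams, the sample variance of the LCFS waiting times substantially exceeds the FCFS variance, consistent with the two bounds.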

  17. Estimation of Variance Components on Number of Kids Born in a Composite Goat Population

    OpenAIRE

    Chittima KANTANAMALAKUL; Panwadee SOPANNARATH; Sornthep TUMWASORN

    2010-01-01

    Records on 1,487 parturitions from a composite population of Anglo-Nubian, Saanen, Native and crossbred goats at Yala Livestock Research and Breeding Center, Department of Livestock Development, between the years 1995 and 2005 were used to estimate variance components and parameters for number of kids born using the REML procedure. Single-trait analysis included parity, year-season at kidding, covariates of additive and heterosis breed effects, direct genetic effects, permanent environmental and resi...

  18. A Factorial Analysis of Variance and Resulting Norm Tables for Tennessee Head Start Children Based on the Developmental Test of Visual-Motor Integration.

    Science.gov (United States)

    Nye, Barbara A.

    Data from a statewide screening of Tennessee Head Start children on the Developmental Test of Visual-Motor Integration (VMI) are analyzed in this report for two purposes: to determine whether sex, race, and residence have a significant influence on visual motor development as measured by the VMI, and to develop VMI norms for the Tennessee Head…

  19. Concept design theory and model for multi-use space facilities: Analysis of key system design parameters through variance of mission requirements

    Science.gov (United States)

    Reynerson, Charles Martin

    This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.

  20. The ALHAMBRA survey : Estimation of the clustering signal encoded in the cosmic variance

    CERN Document Server

    López-Sanjuan, C; Hernández-Monteagudo, C; Arnalte-Mur, P; Varela, J; Viironen, K; Fernández-Soto, A; Martínez, V J; Alfaro, E; Ascaso, B; del Olmo, A; Díaz-García, L A; Hurtado-Gil, Ll; Moles, M; Molino, A; Perea, J; Pović, M; Aguerri, J A L; Aparicio-Villegas, T; Benítez, N; Broadhurst, T; Cabrera-Caño, J; Castander, F J; Cepa, J; Cerviño, M; Cristóbal-Hornillos, D; Delgado, R M González; Husillos, C; Infante, L; Márquez, I; Masegosa, J; Prada, F; Quintana, J M

    2015-01-01

    The relative cosmic variance ($\\sigma_v$) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the $\\sigma_v$ measured in the ALHAMBRA survey. We measure the cosmic variance of several galaxy populations selected with $B-$band luminosity at $0.35 \\leq z < 1.05$ as the intrinsic dispersion in the number density distribution derived from the 48 ALHAMBRA subfields. We compare the observational $\\sigma_v$ with the cosmic variance of the dark matter expected from the theory, $\\sigma_{v,{\\rm dm}}$. This provides an estimation of the galaxy bias $b$. The galaxy bias from the cosmic variance is in excellent agreement with the bias estimated by two-point correlation function analysis in ALHAMBRA. This holds for different redshift bins, for red and blue subsamples, and for several ...

  1. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
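The key scaling, the variance of a time-averaged estimate falling as σ²/N when successive samples are uncorrelated, can be sketched with synthetic data (all numbers below are illustrative, not the paper's field measurements):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2, dt = 0.25, 1.0       # fluctuation variance; sampling interval ~ integral time scale

results = {}
for T in (30, 120, 480):     # exposure (sampling) times
    n = int(T / dt)          # number of approximately uncorrelated samples
    # Repeat the "measurement" many times to estimate the variance of the mean.
    means = np.array([rng.normal(10.0, np.sqrt(sigma2), n).mean() for _ in range(2000)])
    results[T] = (means.var(), sigma2 / n)   # empirical vs predicted variance
    print(T, results[T])
```

Quadrupling the exposure time cuts the variance of the averaged estimate by a factor of four, which is the basis for choosing the exposure time in the sampling strategies the abstract mentions.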

  2. Estimation models of variance components for farrowing interval in swine

    Directory of Open Access Journals (Sweden)

    Aderbal Cavalcante Neto

    2009-02-01

    Full Text Available The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested which contained the fixed effects (contemporary group and covariates) and the direct additive genetic and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain in response to selection. The common litter environmental and maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in the genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.

  3. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing when realized variance is monitored discretely. We also analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that could potentially serve as a hedge for the variance swaps traded in BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in only a few exercise prices, as is the case of FX options traded in BM&F. To do so, portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.

  4. Time Variability of Quasars: the Structure Function Variance

    Science.gov (United States)

    MacLeod, C.; Ivezić, Ž.; de Vries, W.; Sesar, B.; Becker, A.

    2008-12-01

    Significant progress in the description of quasar variability has been recently made by employing SDSS and POSS data. Common to most studies is a fundamental assumption that photometric observations at two epochs for a large number of quasars will reveal the same statistical properties as well-sampled light curves for individual objects. We critically test this assumption using light curves for a sample of ~2,600 spectroscopically confirmed quasars observed about 50 times on average over 8 years by the SDSS stripe 82 survey. We find that the dependence of the mean structure function computed for individual quasars on luminosity, rest-frame wavelength and time is qualitatively and quantitatively similar to the behavior of the structure function derived from two-epoch observations of a much larger sample. We also reproduce the result that the variability properties of radio and X-ray selected subsamples are different. However, the scatter of the variability structure function for fixed values of luminosity, rest-frame wavelength and time is similar to the scatter induced by the variance of these quantities in the analyzed sample. Hence, our results suggest that, although the statistical properties of quasar variability inferred using two-epoch data capture some underlying physics, there is significant additional information that can be extracted from well-sampled light curves for individual objects.
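A basic structure function estimate from a single light curve, binning squared magnitude differences by time lag, can be sketched as follows (the epochs and magnitudes below are synthetic noise, not SDSS stripe 82 data):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0.0, 8 * 365.0, 60))   # observation epochs in days (synthetic)
mag = rng.normal(19.0, 0.2, t.size)            # toy magnitudes with no real quasar signal

i, j = np.triu_indices(t.size, k=1)            # all distinct epoch pairs
lag = t[j] - t[i]
d2 = (mag[j] - mag[i]) ** 2

# Logarithmic lag bins; SF(tau) = sqrt(mean squared magnitude difference) per bin.
edges = np.logspace(0.0, np.log10(lag.max() + 1), 8)
idx = np.digitize(lag, edges)
sf = {k: np.sqrt(d2[idx == k].mean()) for k in range(1, edges.size) if np.any(idx == k)}
print(sf)
```

Computing this per object, rather than pooling two-epoch differences across a large sample, is exactly what allows the scatter of the structure function at fixed luminosity, wavelength and time to be measured.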

  5. Statistical weights as variance reduction method in back-scattered gamma radiation Monte Carlo spectrometry analysis of thickness gauge detector response

    International Nuclear Information System (INIS)

    The possibility of determining physical quantities (such as the number of particles behind shields of given thickness, energy spectra, detector responses, etc.) with a satisfactory statistical uncertainty, in a relatively short computing time, can be used as a measure of the efficiency of a Monte Carlo method. The numerical simulation of rare events with a straightforward Monte Carlo method is inefficient due to the great number of histories, without scores. In this paper, for the specific geometry of a gamma backscattered thickness gauge, with 60Co and 137Cs as gamma sources, the back-scattered gamma spectrum, probabilities for back-scattering and the spectral characteristics of the detector response were determined using a nonanalog Monte Carlo game with statistical weights applied. (author)

  6. A Comparison of Potential Temperature Variance Budgets over Vegetated and Non-Vegetated Surface

    Science.gov (United States)

    Hang, C.; Nadeau, D.; Jensen, D. D.; Pardyjak, E.

    2015-12-01

    Over the past decades, researchers have achieved a fundamental understanding of the budgets of turbulent variables over simplified and (more recently) complex terrain. However, potential temperature variance budgets, parameterized in most meteorological models, are still poorly understood even under relatively idealized conditions. Although each term of the potential temperature variance budget has been studied over different stabilities and surfaces, a detailed understanding of turbulent heat transport over different types of surfaces is still missing. The objectives of this study are thus: 1) to quantify the significant terms in the potential temperature variance budget equation; 2) to show the variability of the budget terms as a function of height and stability; 3) to model the potential temperature variance decay in the late-afternoon and early-evening periods. To do this, we rely on near-surface turbulence observations collected within the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) program, which was designed to better understand the physics governing processes in mountainous terrain. As part of MATERHORN, large field campaigns were conducted in October 2012 and May 2013 in western Utah. Here, we contrast two field sites: a desert playa (dry lakebed), characterized by a flat surface devoid of vegetation, and a vegetated site, characterized by a low-elevation valley floor covered with greasewood vegetation. As expected, preliminary data analysis reveals that the production and molecular dissipation terms play important roles in the variance budget; however, the turbulent transport term is also significant during certain time periods at lower levels (i.e., below 5 m). Our results also show that all three terms decrease with increasing height below 10 m and remain almost constant between 10 m and 25 m, which indicates an extremely shallow surface layer (about 10 m). Further, at all heights and times an imbalance between production and

  7. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik;

    2014-01-01

    Stochastic linear systems arise in a large number of control applications. This paper presents a mean-variance criterion for economic model predictive control (EMPC) of such systems. The system operating cost and its variance is approximated based on a Monte-Carlo approach. Using convex relaxation... -variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative, which results in a high operating cost. For this case, a two-stage extension of the mean-variance approach provides the best trade-off between the expected cost and its variance. It is demonstrated that by using a constraint back-off technique in the specific case study, certainty equivalence EMPC can...

  8. Age and Gender Differences Associated with Family Communication and Materialism among Young Urban Adult Consumers in Malaysia: A One-Way Analysis of Variance (ANOVA)

    OpenAIRE

    Eric V. Bindah; Md Nor Othman

    2012-01-01

    The main purpose of this study is to examine differences in age and gender across the various types of family communication patterns that take place at home among young adult consumers. It also examines whether there are age and gender differences in the development of materialistic values in Malaysia. This paper briefly conceptualizes the family communication processes based on existing literature to illustrate the association between family communication patterns and mater...

  9. Analysis of health trait data from on-farm computer systems in the U.S. I: Pedigree and genomic variance components estimation

    Science.gov (United States)

    With an emphasis on increasing profit through increased dairy cow production, a negative relationship with fitness traits such as fertility and health traits has become apparent. Decreased cow health can impact herd profitability through increased rates of involuntary culling and decreased or lost m...

  10. Variance Analysis of Public's Willingness to Adopt Solar Photovoltaic Power Generation

    Institute of Scientific and Technical Information of China (English)

    丁丽萍; 李文静; 帅传敏

    2015-01-01

    Based on data from 330 questionnaires, this paper uses single-factor analysis of variance with Dunnett's T3 test to examine differences in the public's willingness to adopt solar photovoltaic power generation across age groups, education levels, income levels, genders, and professions. A regression analysis is then conducted on the demographic variables that affect this willingness. Finally, some policy recommendations are offered based on the results of the above analyses.
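The record above combines a one-way ANOVA with Dunnett's T3 post-hoc comparisons. As a minimal sketch of the first step, assuming hypothetical Likert-scale willingness scores for three age groups (SciPy provides `f_oneway`; Dunnett's T3 itself is not in SciPy and is omitted here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical willingness scores (1-5 Likert scale) for three age groups
young = rng.normal(3.8, 0.9, 110)
middle = rng.normal(3.4, 1.1, 120)
older = rng.normal(3.1, 1.3, 100)

# One-way ANOVA: does mean willingness differ across the groups?
f_stat, p_value = stats.f_oneway(young, middle, older)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

With group means this far apart the ANOVA rejects equality of means; the pairwise follow-up under unequal variances is what Dunnett's T3 would add.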

  11. THE VARIANCE AND TREND OF INTEREST RATE – CASE OF COMMERCIAL BANKS IN KOSOVO

    Directory of Open Access Journals (Sweden)

    Fidane Spahija

    2015-09-01

    Full Text Available Today’s debate on the interest rate is characterized by three key issues: the interest rate as a phenomenon, the interest rate as a product of factors (dependent variable), and the interest rate as a policy instrument (independent variable). In this article, the variation in interest rates, as the dependent variable, is described by two statistics: the variance and the trend. The interest rates include the price of loans and deposits. The analysis of interest rates on deposits and loans is conducted for non-financial corporations and family economies. This study presents a statistical analysis that highlights the variance and trend of interest rates on deposits and loans in commercial banks in Kosovo for the period 2004-2013. The interest rate is observed at various levels: is it high, medium, or low? Is it growing, constant, or declining? The trend shows whether commercial banks maintain, reduce, or increase the interest rate in response to the policy of the Central Bank of Kosovo. The data obtained will help to determine the impact of interest rates on the service sector, investment, consumption, and unemployment.
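The two statistics the study relies on, variance and trend, take only a few lines to compute; the rate series below is hypothetical, not the Kosovo data:

```python
import numpy as np

# Hypothetical annual average loan interest rates (%), 2004-2013
years = np.arange(2004, 2014)
rates = np.array([14.2, 13.8, 13.5, 13.1, 12.9, 12.4, 12.1, 11.8, 11.3, 10.9])

variance = rates.var(ddof=1)                    # sample variance of the rate
slope, intercept = np.polyfit(years, rates, 1)  # linear trend, % per year

print(f"variance = {variance:.3f}, trend = {slope:+.3f} %/year")
```

A negative slope corresponds to the "reducing" trend case discussed in the abstract.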

  12. Estimation of Variance Components for Litter Size in the First and Later Parities in Improved Jezersko-Solcava Sheep

    Directory of Open Access Journals (Sweden)

    Dubravko Škorput

    2011-12-01

    Full Text Available The aim of this study was to estimate variance components for litter size in Improved Jezersko-Solcava sheep. The analysis involved 66,082 records from 12,969 animals, for the number of lambs born in all parities (BA), the first parity (B1), and later parities (B2+). The fixed part of the model contained the effects of season and age at lambing within parity. The random part of the model contained the effects of herd, a permanent environmental effect (for repeatability models), and an additive genetic effect. Variance components were estimated using the restricted maximum likelihood method. The average number of lambs born was 1.36 in the first parity, while the average in later parities was 1.59, leading also to about 20% higher variance. Several models were tested in order to accommodate the markedly different variability in litter size between the first and later parities: single-trait models (for BA, B1, and B2+), a two-trait model (for B1 and B2+), and a single-trait model with heterogeneous residual variance (for BA). Comparison of variance components between models showed the largest differences for the residual variance, resulting in a parsimonious fit for a single-trait model for BA with heterogeneous residual variance. Correlations among breeding values from different models were high and showed remarkable performance of the standard single-trait repeatability model for BA.

  13. Variance and Predictability of Precipitation at Seasonal-to-Interannual Timescales

    Science.gov (United States)

    Koster, Randal D.; Suarez, Max J.; Heiser, Mark

    1999-01-01

    A series of atmospheric general circulation model (AGCM) simulations, spanning a total of several thousand years, is used to assess the impact of land-surface and ocean boundary conditions on the seasonal-to-interannual variability and predictability of precipitation in a coupled modeling system. In the first half of the analysis, which focuses on precipitation variance, we show that the contributions of ocean, atmosphere, and land processes to this variance can be characterized, to first order, with a simple linear model. This allows a clean separation of the contributions, from which we find: (1) land and ocean processes have essentially different domains of influence, i.e., the amplification of precipitation variance by land-atmosphere feedback is most important outside of the regions (mainly in the tropics) that are most affected by sea surface temperatures; and (2) the strength of land-atmosphere feedback in a given region is largely controlled by the relative availability of energy and water there. In the second half of the analysis, the potential for seasonal-to-interannual predictability of precipitation is quantified under the assumption that all relevant surface boundary conditions (in the ocean and on land) are known perfectly into the future. We find that the chaotic nature of the atmospheric circulation imposes fundamental limits on predictability in many extratropical regions. Associated with this result is an indication that soil moisture initialization or assimilation in a seasonal-to-interannual forecasting system would be beneficial mainly in transition zones between dry and humid regions.

  14. Variance bounding Markov chains

    OpenAIRE

    Roberts, Gareth O.; Jeffrey S. Rosenthal

    2008-01-01

    We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L2 functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis–Hastings algorithms.

  15. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  16. Comprehensive Study on the Estimation of the Variance Components of Traverse Nets

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper advances a new simplified formula for estimating variance components, sums up the basic rule for calculating the weights of observed values, describes an iterative method that uses the increments of the weights when estimating the variance components of traverse nets, advances the characteristic-roots method for estimating the variance components of traverse nets, and presents a practical method for reducing two real symmetric matrices simultaneously to diagonal form.

  17. Spatiotemporal Dynamics of the Variance of the Wind Velocity from Mini-Sodar Measurements

    Science.gov (United States)

    Krasnenko, N. P.; Kapegesheva, O. F.; Tarasenkov, M. V.; Shamanaeva, L. G.

    2015-12-01

    Statistical trends of the spatiotemporal dynamics of the variance of the three wind velocity components in the atmospheric boundary layer have been established from Doppler mini-sodar measurements. Over the course of a 5-day period of measurements in the autumn time frame from 12 to 16 September 2003, values of the variance of the x- and y-components of the wind velocity lay in the interval from 0.001 to 10 m2/s2, and for the z-component, from 0.001 to 1.2 m2/s2. They were observed to grow during the morning hours (around 11:00 local time) and in the evening (from 18:00 to 22:00 local time), which is explained by the onset of heating and subsequent cooling of the Earth's surface, which are accompanied by an increase in the motion of the air masses. Analysis of the obtained vertical profiles of the standard deviations of the three wind velocity components showed that growth of σ x and σ y with altitude is well described by a power-law dependence with its exponent varying from 0.22 to 1.3 as a function of the time of day while σ z varies according to a linear law. At night (from 00:00 to 5:00 local time) the variance of the z-component changes from 0.01 to 0.56 m2/s2, which is in good agreement with the data available in the literature. Fitting parameters are found and the error of the corresponding fits is estimated, which makes it possible to describe the diurnal dynamics of the wind velocity variance.
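The power-law fit of the standard-deviation profiles described above can be reproduced by regressing log σ on log z; this is a sketch on synthetic data, where the exponent 0.6 is illustrative and not taken from the sodar measurements:

```python
import numpy as np

# Synthetic vertical profile: sigma grows as a power law, sigma = c * z**a
z = np.linspace(5.0, 200.0, 40)   # altitude, m
true_exponent = 0.6
sigma = 0.3 * z**true_exponent

# Fit the exponent: log sigma = log c + a * log z
a_fit, logc_fit = np.polyfit(np.log(z), np.log(sigma), 1)
print(f"fitted exponent a = {a_fit:.3f}")
```

On noiseless data the regression recovers the exponent exactly; with real profiles the residual scatter gives the fit error the authors report.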

  18. Variance reduction in MCMC

    OpenAIRE

    Mira Antonietta; Tenconi Paolo; Bressanini Dario

    2003-01-01

    We propose a general purpose variance reduction technique for MCMC estimators. The idea is obtained by combining standard variance reduction principles known for regular Monte Carlo simulations (Ripley, 1987) and the Zero-Variance principle introduced in the physics literature (Assaraf and Caffarel, 1999). The potential of the new idea is illustrated with some toy examples and an application to Bayesian estimation
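A toy illustration of the general idea of combining plain Monte Carlo with a variance-reduction device; this uses a classical control variate in the spirit of Ripley (1987), not the paper's Zero-Variance estimator:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
u = rng.uniform(0.0, 1.0, n)

# Plain Monte Carlo estimate of I = E[exp(U)] = e - 1 for U ~ Uniform(0, 1)
f = np.exp(u)
plain_est = f.mean()

# Control variate: g(U) = U has known mean 1/2; subtract the optimal multiple
g = u
beta = np.cov(f, g)[0, 1] / g.var(ddof=1)
cv = f - beta * (g - 0.5)
cv_est = cv.mean()

print(plain_est, cv_est, cv.var() / f.var())
```

Because U and exp(U) are highly correlated on [0, 1], the control-variate estimator has a small fraction of the plain estimator's variance at the same sample size.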

  19. Estimation of Variance Components on Number of Kids Born in a Composite Goat Population

    Directory of Open Access Journals (Sweden)

    Chittima KANTANAMALAKUL

    2010-01-01

    Full Text Available Records on 1,487 parturitions from a composite population of Anglo-Nubian, Saanen, Native and crossbred goats at the Yala Livestock Research and Breeding Center, Department of Livestock Development, between 1995 and 2005 were used to estimate variance components and parameters for the number of kids born using a REML procedure. The single-trait analysis included parity, year-season at kidding, covariates of additive and heterosis breed effects, direct genetic effects, and permanent environmental and residual effects. The results showed that direct additive breed effects for Anglo-Nubian and Saanen goats, measured as deviations from Native goats in the number of kids born, were 0.02 and -0.09 head, respectively. Heterosis effects in Anglo-Nubian × Native and Saanen × Native crosses were positive, increasing the number of kids born by 0.11 and 0.31 head, respectively. Direct heritability and permanent environmental variance as a proportion of phenotypic variance for number of kids born were found to be 0.04 and 0.02, respectively.

  20. Patient population management: taking the leap from variance analysis to outcomes measurement.

    Science.gov (United States)

    Allen, K M

    1998-01-01

    Case managers today at BCHS have a somewhat different role than at the onset of the Collaborative Practice Model. They are seen throughout the organization as: Leaders/participants on cross-functional teams. Systems change agents. Integrating/merging with quality services and utilization management. Outcomes managers. One of the major cross-functional teams is in the process of designing a Care Coordinator role. These individuals will, as one of their functions, assume responsibility for daily patient care management activities. A variance tracking program has come into the Utilization Management (UM) department as part of a software package purchased to automate UM work activities. This variance program could potentially be used by the new care coordinators as the role develops. The case managers are beginning to use a Decision Support software, (Transition Systems Inc.) in the collection of data that is based on a cost accounting system and linked to clinical events. Other clinical outcomes data bases are now being used by the case manager to help with the collection and measurement of outcomes information. Hoshin planning will continue to be a framework for defining and setting the targets for clinical and financial improvements throughout the organization. Case managers will continue to be involved in many of these system-wide initiatives. In the words of Galileo, 1579, "You need to count what's countable, measure what's measurable, and what's not measurable, make measurable." PMID:9601411

  1. Testing linear forms of variance components by generalized fixed-level tests

    OpenAIRE

    Weimann, Boris

    1998-01-01

    This report extends the technique of testing single variance components with generalized fixed-level tests - in situations when nuisance parameters make exact testing impossible - to the more general way of testing hypotheses on linear forms of variance components. An extension of the definition of a generalized test variable leads to a generalized fixed-level test for arbitrary linear hypotheses on variance components in balanced mixed linear models of the ANOVA-type. For point null hypothes...

  2. Numerical Inversion with Full Estimation of Variance-Covariance Matrix

    Science.gov (United States)

    Saltogianni, Vasso; Stiros, Stathis

    2016-04-01

    -point, stochastic optimal solutions are computed as the center of gravity of these sets. A full Variance-Covariance Matrix (VCM) of each solution can be directly computed as the second statistical moment. The overall method and the software have been tested with synthetic data (accuracy-oriented approach) in the modeling of magma chambers in the Santorini volcano and the modeling of double-fault earthquakes, i.e., inversion problems with up to 18 unknowns.

  3. Alternatives to F-Test in One Way ANOVA in case of heterogeneity of variances (a simulation study

    Directory of Open Access Journals (Sweden)

    Karl Moder

    2010-12-01

    Full Text Available Several articles deal with the effects of inhomogeneous variances in one-way analysis of variance (ANOVA). A very early investigation of this topic was done by Box (1954). He supposed that in balanced designs with moderate heterogeneity of variances, deviations of the empirical type I error rate (the realized α based on experiments) from the nominal one (the predefined α) under H0 are small. Similar conclusions are drawn by Wellek (2003). For less moderate heterogeneity (e.g. σ1:σ2:...=3:1:...), Moder (2007) showed that the empirical type I error rate is far beyond the nominal one, even in balanced designs. In unbalanced designs the difficulties get bigger. Several attempts have been made to get over this problem. One proposal is to use a more stringent α level (e.g. 2.5% instead of 5%) (Keppel & Wickens, 2004). Another recommended remedy is to transform the original scores by square-root, log, and other variance-reducing functions (Keppel & Wickens, 2004; Heiberger & Holland, 2004). Some authors suggest the use of rank-based alternatives to the F-test in analysis of variance (Vargha & Delaney, 1998). Only a few articles deal with two- or multi-factorial designs. There is some evidence that in a two- or multi-factorial design the type I error rate is approximately met if the number of levels tends to infinity for a certain factor while the number of levels is fixed for the other factors (Akritas & S., 2000; Bathke, 2004). The goal of this article is to find an appropriate location test for a one-way analysis of variance situation with inhomogeneous variances, for balanced and unbalanced designs, based on a simulation study.
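One robust alternative evaluated in this literature is Welch's heteroscedasticity-corrected ANOVA. A self-contained sketch using the standard Welch (1951) formulas, demonstrated on simulated unbalanced groups with a 3:1:1 standard-deviation ratio (the data are illustrative):

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's heteroscedasticity-robust one-way ANOVA (Welch, 1951)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    W = w.sum()
    grand = (w * m).sum() / W                   # weighted grand mean
    A = (w * (m - grand) ** 2).sum() / (k - 1)
    tmp = ((1 - w / W) ** 2 / (n - 1)).sum()
    B = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    F = A / B
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
    return F, stats.f.sf(F, df1, df2)

rng = np.random.default_rng(1)
g1 = rng.normal(0.0, 3.0, 20)   # large variance, small group
g2 = rng.normal(0.0, 1.0, 50)
g3 = rng.normal(1.5, 1.0, 30)   # shifted mean
F, p = welch_anova(g1, g2, g3)
print(F, p)
```

Unlike the classical F-test, the Welch statistic weights each group by n/s², so the small high-variance group does not inflate the type I error rate.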

  4. The Multi-allelic Genetic Architecture of a Variance-Heterogeneity Locus for Molybdenum Concentration in Leaves Acts as a Source of Unexplained Additive Genetic Variance.

    Directory of Open Access Journals (Sweden)

    Simon K G Forsberg

    2015-11-01

    Full Text Available Genome-wide association (GWA) analyses have generally been used to detect individual loci contributing to the phenotypic diversity in a population by the effects of these loci on the trait mean. More rarely, loci have also been detected based on variance differences between genotypes. Several hypotheses have been proposed to explain the possible genetic mechanisms leading to such variance signals. However, little is known about what causes these signals, or whether this genetic variance-heterogeneity reflects mechanisms of importance in natural populations. Previously, we identified a variance-heterogeneity GWA (vGWA) signal for leaf molybdenum concentrations in Arabidopsis thaliana. Here, fine-mapping of this association reveals that the vGWA emerges from the effects of three independent genetic polymorphisms that all are in strong LD with the markers displaying the genetic variance-heterogeneity. By revealing the genetic architecture underlying this vGWA signal, we uncovered the molecular source of a significant amount of hidden additive genetic variation or "missing heritability". Two of the three polymorphisms underlying the genetic variance-heterogeneity are promoter variants for Molybdate transporter 1 (MOT1), and the third a variant located ~25 kb downstream of this gene. A fourth independent association was also detected ~600 kb upstream of MOT1. Use of a T-DNA knockout allele highlights Copper Transporter 6; COPT6 (AT2G26975) as a strong candidate gene for this association. Our results show that an extended LD across a complex locus including multiple functional alleles can lead to a variance-heterogeneity between genotypes in natural populations. Further, they provide novel insights into the genetic regulation of ion homeostasis in A. thaliana, and empirically confirm that variance-heterogeneity based GWA methods are a valuable tool to detect novel associations of biological importance in natural populations.
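Variance-heterogeneity signals of the kind described here are commonly screened with a dispersion test such as Brown-Forsythe (median-centered Levene). A minimal sketch on simulated genotype classes; the data are hypothetical, not the A. thaliana measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Two genotype classes with equal trait means but different variances,
# mimicking a variance-heterogeneity (vGWA) signal at a marker
geno_A = rng.normal(10.0, 1.0, 400)   # trait values, arbitrary units
geno_B = rng.normal(10.0, 2.0, 400)

# A mean-based GWA test sees little or nothing here...
t_stat, p_mean = stats.ttest_ind(geno_A, geno_B, equal_var=False)
# ...but the Brown-Forsythe test detects the variance difference
w_stat, p_var = stats.levene(geno_A, geno_B, center='median')
print(p_mean, p_var)
```

This is why variance-based scans can reveal loci, like the MOT1 region above, that are invisible to standard mean-effect GWA.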

  5. Text Vector Feature Mining Algorithm Based on Multi-Factor Analysis of Variance

    Institute of Scientific and Technical Information of China (English)

    谭海中; 何波

    2015-01-01

    Text vector feature mining is applied in the organization and management of information resources and has great application value in data mining, but traditional text feature-vector mining based on the K-means algorithm has poor accuracy. A new feature-mining algorithm for text vectors based on multi-factor analysis of variance is proposed. Multi-factor analysis of variance is used to obtain feature-mining rules from multiple corpora. Combined with an ant colony algorithm, and using the ant colony fitness-probability regular training transfer rule, the maximum probability of effective features in the most recently evolved population data sets is obtained. The K-means initial cluster centers are selected on the basis of an optimized partition: the sample data are first partitioned, and the initial cluster centers are then determined according to the distribution characteristics of the samples, improving text feature-mining performance. Simulation results show that the algorithm improves the clustering of text feature vectors and hence feature-mining performance, with higher recall and detection rates for data features and lower time consumption, giving it greater value in applications such as data mining.
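The partition-based initialization described above (partition the samples first, then use each partition's mean as an initial center) can be sketched as follows. Splitting along the leading principal direction is an illustrative choice, not the paper's exact rule:

```python
import numpy as np

def partition_init(X, k):
    """Initial centers: sort samples along the first principal direction,
    split into k equal partitions, and average each partition (a sketch)."""
    Xc = X - X.mean(axis=0)
    direction = np.linalg.svd(Xc, full_matrices=False)[2][0]
    order = np.argsort(Xc @ direction)
    parts = np.array_split(order, k)
    return np.array([X[idx].mean(axis=0) for idx in parts])

def kmeans(X, k, iters=50):
    centers = partition_init(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

rng = np.random.default_rng(3)
blob1 = rng.normal([0, 0], 0.3, (100, 2))
blob2 = rng.normal([5, 5], 0.3, (100, 2))
X = np.vstack([blob1, blob2])
centers, labels = kmeans(X, 2)
print(np.round(centers, 1))
```

Because the initial centers already respect the data's spread, the iterations converge quickly and avoid the poor local optima that random initialization can produce.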

  6. Variance in population firing rate as a measure of slow time-scale correlation

    Directory of Open Access Journals (Sweden)

    Adam C. Snyder

    2013-12-01

    Full Text Available Correlated variability in the spiking responses of pairs of neurons, also known as spike count correlation, is a key indicator of functional connectivity and a critical factor in population coding. Underscoring the importance of correlation as a measure for cognitive neuroscience research is the observation that spike count correlations are not fixed, but are rather modulated by perceptual and cognitive context. Yet while this context fluctuates from moment to moment, correlation must be calculated over multiple trials. This property undermines its utility as a dependent measure for investigations of cognitive processes which fluctuate on a trial-to-trial basis, such as selective attention. A measure of functional connectivity that can be assayed on a moment-to-moment basis is needed to investigate the single-trial dynamics of populations of spiking neurons. Here, we introduce the measure of population variance in normalized firing rate for this goal. We show using mathematical analysis, computer simulations and in vivo data how population variance in normalized firing rate is inversely related to the latent correlation in the population, and how this measure can be used to reliably classify trials from different typical correlation conditions, even when firing rate is held constant. We discuss the potential advantages for using population variance in normalized firing rate as a dependent measure for both basic and applied neuroscience research.
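The inverse relation between latent correlation and within-trial population variance of normalized rates can be checked with a quick simulation; this is a sketch with Gaussian "rates" and a single shared fluctuation, for which the expected population variance is approximately 1 - rho:

```python
import numpy as np

rng = np.random.default_rng(11)
n_neurons, n_trials = 100, 2000

def population_variance(rho):
    """Simulate a population with pairwise rate correlation rho, z-score each
    neuron across trials, and return the mean within-trial variance across
    neurons (the 'population variance' measure)."""
    shared = rng.normal(size=n_trials)              # common fluctuation per trial
    private = rng.normal(size=(n_trials, n_neurons))
    rates = np.sqrt(rho) * shared[:, None] + np.sqrt(1 - rho) * private
    z = (rates - rates.mean(0)) / rates.std(0)      # normalize each neuron
    return z.var(axis=1).mean()                     # variance across the population

for rho in (0.0, 0.3, 0.6):
    print(rho, round(population_variance(rho), 2))  # approximately 1 - rho
```

The shared fluctuation moves all normalized rates together, so it does not contribute to the across-neuron variance within a trial; only the uncorrelated part (1 - rho) survives, which is the inverse relation the abstract describes.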

  7. Age-specific patterns of genetic variance in Drosophila melanogaster. I. Mortality

    Energy Technology Data Exchange (ETDEWEB)

    Promislow, D.E.L.; Tatar, M.; Curtsinger, J.W. [Univ. of Minnesota, St. Paul, MN (United States)] [and others]

    1996-06-01

    Peter Medawar proposed that senescence arises from an age-related decline in the force of selection, which allows late-acting deleterious mutations to accumulate. Subsequent workers have suggested that mutation accumulation could produce an age-related increase in additive genetic variance (V_A) for fitness traits, as recently found in Drosophila melanogaster. Here we report results from a genetic analysis of mortality in 65,134 D. melanogaster. Additive genetic variance for female mortality rates increases from 0.007 in the first week of life to 0.325 by the third week, and then declines to 0.002 by the seventh week. Males show a similar pattern, though total variance is lower than in females. In contrast to a predicted divergence in mortality curves, mortality curves of different genotypes are roughly parallel. Using a three-parameter model, we find significant V_A for the slope and constant term of the curve describing age-specific mortality rates, and also for the rate at which mortality decelerates late in life. These results fail to support a prediction derived from Medawar's "mutation accumulation" theory for the evolution of senescence. However, our results could be consistent with alternative interpretations of evolutionary models of aging. 65 refs., 2 figs., 2 tabs.

  8. Selection for uniformity in livestock by exploiting genetic heterogeneity of residual variance

    Directory of Open Access Journals (Sweden)

    Hill William G

    2008-01-01

    Full Text Available In some situations, it is worthwhile to change not only the mean, but also the variability of traits by selection. Genetic variation in residual variance may be utilised to improve uniformity in livestock populations by selection. The objective was to investigate the effects of genetic parameters, breeding goal, number of progeny per sire and breeding scheme on selection responses in mean and variance when applying index selection. Genetic parameters were obtained from the literature. Economic values for the mean and variance were derived for some standard non-linear profit equations, e.g. for traits with an intermediate optimum. The economic value of variance was negative in most situations, indicating that selection for reduced variance increases profit. Predicted responses in residual variance after one generation of selection were large; in some cases, when the number of progeny per sire was at least 50, they exceeded 10% of the current residual variance. Progeny-testing schemes were more efficient than sib-testing schemes in decreasing residual variance. With optimum traits, selection pressure shifts gradually from the mean to the variance when approaching the optimum. Genetic improvement of uniformity is particularly interesting for traits where the current population mean is near an intermediate optimum.

  9. Prediction of breeding values and selection responses with genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2007-01-01

    There is empirical evidence that genotypes differ not only in mean, but also in environmental variance of the traits they affect. Genetic heterogeneity of environmental variance may indicate genetic differences in environmental sensitivity. The aim of this study was to develop a general framework fo

  10. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    Science.gov (United States)

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, redressed as Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. Further, in this field, it demonstrated superior accuracy and sensitivity, better than most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview on their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.
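A compact NumPy sketch of the Modified Allan variance computed from phase (time-error) samples, using the standard moving-average-of-second-differences form; at m = 1 it coincides with the ordinary Allan variance, which the demo checks:

```python
import numpy as np

def mod_allan_var(x, m, tau0=1.0):
    """Modified Allan variance of phase data x at averaging factor m:
    Mod sigma_y^2(m*tau0) = mean over j of
    [ (1/m) * sum of m consecutive second differences at lag m ]^2
    divided by 2*(m*tau0)^2."""
    x = np.asarray(x, dtype=float)
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # second differences at lag m
    c = np.cumsum(np.concatenate(([0.0], d)))
    s = c[m:] - c[:-m]                          # moving sums of m differences
    return (s ** 2).mean() / (2.0 * m ** 4 * tau0 ** 2)

rng = np.random.default_rng(5)
x = rng.normal(size=10_000)                     # white phase noise

avar1 = ((x[2:] - 2 * x[1:-1] + x[:-2]) ** 2).mean() / 2.0  # Allan variance, m = 1
mavar1 = mod_allan_var(x, 1)
mavar8 = mod_allan_var(x, 8)
print(mavar1, avar1, mavar8)
```

For white phase noise the Modified Allan variance falls off as tau^-3, which is the steep slope that gives MAVAR its power to discriminate power-law noise types.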

  11. On the multiplicity of option prices under CEV with positive elasticity of variance

    NARCIS (Netherlands)

    D. Veestraeten

    2014-01-01

    The discounted stock price under the Constant Elasticity of Variance (CEV) model is a strict local martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the risk-neutral call price and an alternative price that is linked to the unique

  12. Deflation as a Method of Variance Reduction for Estimating the Trace of a Matrix Inverse

    CERN Document Server

    Gambhir, Arjun Singh; Orginos, Kostas

    2016-01-01

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can b...
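The Hutchinson estimator itself (without the deflation studied in the paper) is only a few lines of NumPy. A sketch on a small SPD matrix where the exact trace of the inverse is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(9)

# Small SPD test matrix so the exact answer is available
n = 50
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)
exact = np.trace(np.linalg.inv(A))

# Hutchinson estimator: tr(A^{-1}) = E[z^T A^{-1} z] for Rademacher z
n_samples = 4000
z = rng.choice([-1.0, 1.0], size=(n, n_samples))
est = np.mean(np.sum(z * np.linalg.solve(A, z), axis=0))
print(est, exact)
```

The estimator's variance is driven by the off-diagonal mass of A^{-1}, which is exactly what deflating the near-null singular space aims to shrink.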

  13. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets
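The Markowitz core of such a model, before adding transaction costs and the integer constraints that make it a MIP, reduces to quadratic programming; the unconstrained global minimum-variance portfolio even has a closed form. A sketch with a hypothetical covariance matrix for three markets:

```python
import numpy as np

# Hypothetical covariance of returns for three regional electricity markets
Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 3.0, 0.6],
                  [0.8, 0.6, 2.5]])
ones = np.ones(3)

# Global minimum-variance portfolio: w = Sigma^{-1} 1 / (1^T Sigma^{-1} 1)
w = np.linalg.solve(Sigma, ones)
w /= w.sum()

port_var = w @ Sigma @ w
equal_var = (ones / 3) @ Sigma @ (ones / 3)
print(np.round(w, 3), port_var, equal_var)
```

The closed-form weights beat the naive equal-weight allocation in variance; the paper's point is that once transaction costs and lot-size integrality enter, the efficient frontier is no longer smooth or concave and the problem must be solved as a MIP.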

  14. View-angle-dependent AIRS Cloudiness and Radiance Variance: Analysis and Interpretation

    Science.gov (United States)

    Gong, Jie; Wu, Dong L.

    2013-01-01

    Upper tropospheric clouds play an important role in the global energy budget and hydrological cycle. Significant view-angle asymmetry has been observed in upper-level tropical clouds derived from eight years of Atmospheric Infrared Sounder (AIRS) 15 um radiances. Here, we find that the asymmetry also exists in the extra-tropics. It is larger during day than during night, more prominent near elevated terrain, and closely associated with deep convection and wind shear. The cloud radiance variance, a proxy for cloud inhomogeneity, shows asymmetry characteristics consistent with those in the AIRS cloudiness. The leading causes of the view-dependent cloudiness asymmetry are the local time difference and small-scale organized cloud structures. The local time difference (1-1.5 hr) of upper-level (UL) clouds between the two AIRS outermost views can create part of the observed asymmetry. On the other hand, small-scale tilted and banded structures of the UL clouds can induce about half of the observed view-angle-dependent differences in the AIRS cloud radiances and their variances. This estimate is inferred from an analogous study using Microwave Humidity Sounder (MHS) radiances observed during a period when there were simultaneous measurements at two different view angles from the NOAA-18 and -19 satellites. The existence of tilted cloud structures and asymmetric 15 um and 6.7 um cloud radiances implies that cloud statistics are view-angle dependent and should be taken into account in radiative transfer calculations, measurement uncertainty evaluations and cloud climatology investigations. In addition, the momentum forcing in the upper troposphere from tilted clouds is also likely asymmetric, which can affect atmospheric circulation anisotropically.

  15. 46 CFR 69.73 - Variance from the prescribed method of measurement.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Variance from the prescribed method of measurement. 69.73 Section 69.73 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) DOCUMENTATION AND MEASUREMENT OF VESSELS MEASUREMENT OF VESSELS Convention Measurement System § 69.73 Variance from...

  16. 40 CFR 142.304 - For which of the regulatory requirements is a small system variance available?

    Science.gov (United States)

    2010-07-01

    ... requirements is a small system variance available? 142.304 Section 142.304 Protection of Environment... REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.304 For which of the regulatory requirements is a small system variance available? (a) A small system variance is not available under...

  17. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    Science.gov (United States)

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  18. Impact of time-inhomogeneous jumps and leverage type effects on returns and realised variances

    DEFF Research Database (Denmark)

    Veraart, Almut

    This paper studies the effect of time-inhomogeneous jumps and leverage type effects on realised variance calculations when the logarithmic asset price is given by a Lévy-driven stochastic volatility model. In such a model, the realised variance is an inconsistent estimator of the integrated...

  19. 75 FR 6220 - Information Collection Requirements for the Variance Regulations; Submission for Office of...

    Science.gov (United States)

    2010-02-08

    ... Paperwork Reduction Act of 1995 (44 U.S.C. 3506 et seq.) and Secretary of Labor's Order No. 5-2007 (72 FR... Occupational Safety and Health Administration Information Collection Requirements for the Variance Regulations..., experimental, permanent, and national defense variances. DATES: Comments must be submitted...

  20. Trends in the Transitory Variance of Male Earnings: Methods and Evidence

    Science.gov (United States)

    Moffitt, Robert A.; Gottschalk, Peter

    2012-01-01

    We estimate the trend in the transitory variance of male earnings in the United States using the Michigan Panel Study of Income Dynamics from 1970 to 2004. Using an error components model and simpler but only approximate methods, we find that the transitory variance started to increase in the early 1970s, continued to increase through the…

  1. A FORTRAN program for computing the exact variance of weighted kappa.

    Science.gov (United States)

    Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E

    2005-10-01

    An algorithm and associated FORTRAN program are provided for the exact variance of weighted kappa. Program VARKAP provides the weighted kappa test statistic, the exact variance of weighted kappa, a Z score, one-sided lower- and upper-tail N(0,1) probability values, and the two-tail N(0,1) probability value.
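
    For readers unfamiliar with the statistic whose exact variance VARKAP computes, here is a minimal sketch of weighted kappa itself from a rater-by-rater confusion matrix (our own illustration with made-up counts; the paper's exact-variance computation is not reproduced here):

```python
import numpy as np

def weighted_kappa(table, weights="linear"):
    """Weighted kappa from a k x k cross-classification of two raters.
    weights: 'linear' (|i-j|) or 'quadratic' ((i-j)^2) disagreement weights."""
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    o = table / table.sum()                      # observed proportions
    e = np.outer(o.sum(axis=1), o.sum(axis=0))   # expected under independence
    i, j = np.indices((k, k))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    # 1 - (weighted observed disagreement) / (weighted chance disagreement)
    return 1.0 - (w * o).sum() / (w * e).sum()

# Hypothetical 3-category ratings from two raters
table = [[20, 5, 1],
         [4, 15, 3],
         [1, 2, 9]]
print(round(weighted_kappa(table), 4))
```

    A Z score like VARKAP's would then divide this statistic by the square root of its (exact) variance.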

  2. Diffusion tensor imaging-derived measures of fractional anisotropy across the pyramidal tract are influenced by the cerebral hemisphere but not by gender in young healthy volunteers: a split-plot factorial analysis of variance

    Institute of Scientific and Technical Information of China (English)

    Ernesto Roldan-Valadez; Edgar Rios-Piedra; Rafael Favila; Sarael Alcauter; Camilo Rios

    2012-01-01

    Background Diffusion tensor imaging (DTI) permits quantitative examination within the pyramidal tract (PT) by measuring fractional anisotropy (FA). To the best of our knowledge, the variability of FA measures along the PT remains unexplained. A clear understanding of these reference values would help radiologists and neuroscientists to understand normality as well as to detect early pathophysiologic changes of brain diseases. The aim of our study was to calculate the variability of the FA at eleven anatomical landmarks along the PT and the influences of gender and cerebral hemisphere on these measurements in a sample of young, healthy volunteers. Methods A retrospective, cross-sectional study was performed in twenty-three right-handed healthy volunteers who underwent magnetic resonance evaluation of the brain. Mean FA values from eleven anatomical landmarks across the PT (at centrum semiovale, corona radiata, posterior limb of internal capsule (PLIC), mesencephalon, pons, and medulla oblongata) were evaluated using split-plot factorial analysis of variance (ANOVA). Results We found a significant interaction effect between anatomical landmark and cerebral hemisphere (F(10,32) = 4.516, P = 0.001; Wilks' Lambda = 0.415), with a large effect size (partial η2 = 0.585). The influence of gender and age was non-significant. On average, the midbrain and PLIC FA values were higher than pons and medulla oblongata values; centrum semiovale measurements were higher than those of the corona radiata but lower than PLIC. Conclusions There is a normal variability of FA measurements along the PT in healthy individuals, which is influenced by region-of-interest location (anatomical landmark) and cerebral hemisphere. FA measurements should be reported comparing same-side and same-landmark PT to help avoid comparisons with the contralateral PT; ideally, normative values should exist for a clinically significant age group. A standardized package of selected DTI processing tools would allow DTI processing to be…

  3. Biclustering with heterogeneous variance.

    Science.gov (United States)

    Chen, Guanhua; Sullivan, Patrick F; Kosorok, Michael R

    2013-07-23

    In cancer research, as in all of medicine, it is important to classify patients into etiologically and therapeutically relevant subtypes to improve diagnosis and treatment. One way to do this is to use clustering methods to find subgroups of homogeneous individuals based on genetic profiles together with heuristic clinical analysis. A notable drawback of existing clustering methods is that they ignore the possibility that the variance of gene expression profile measurements can be heterogeneous across subgroups, and methods that do not consider heterogeneity of variance can lead to inaccurate subgroup prediction. Research has shown that hypervariability is a common feature among cancer subtypes. In this paper, we present a statistical approach that can capture both mean and variance structure in genetic data. We demonstrate the strength of our method in both synthetic data and in two cancer data sets. In particular, our method confirms the hypervariability of methylation level in cancer patients, and it detects clearer subgroup patterns in lung cancer data. PMID:23836637

  4. Variance of size-age curves: Bootstrapping with autocorrelation

    Science.gov (United States)

    Bullock, S.H.; Turner, R.M.; Hastings, J.R.; Escoto-Rodriguez, M.; Lopez, Z.R.A.; Rodrigues-Navarro, J. L.

    2004-01-01

    We modify a method of estimating size-age relations from a minimal set of individual increment data, recognizing that growth depends not only on size but also varies greatly among individuals and is consistent within an individual for several to many time intervals. The method is exemplified with data from a long-lived desert plant and a range of autocorrelation factors encompassing field-measured values. The results suggest that age estimates based on size and growth rates with only moderate autocorrelation are subject to large variation, which raises major problems for prediction or hindcasting for ecological analysis or management.

  5. The impact of news and the SMP on realized (co)variances in the Eurozone sovereign debt market

    NARCIS (Netherlands)

    R. Beetsma; F. de Jong; M. Giuliodori; D. Widijanto

    2014-01-01

    We use realized variances and covariances based on intraday data from the Eurozone sovereign bond market to measure the dependence structure of Eurozone sovereign yields. Our analysis focuses on the impact of news, obtained from the Eurointelligence newsflash, on the dependence structure. More news raises

  6. An Unbiased Estimator of the Variance of Simple Random Sampling Using Mixed Random-Systematic Sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...
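
    The random-order device the abstract mentions can be checked by simulation. A minimal sketch (made-up population; not the authors' mixed random-systematic estimator): under simple random sampling without replacement, the sample variance with the n-1 divisor is unbiased for the population variance, so the usual plug-in estimator of the variance of the sample mean is also unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.normal(50.0, 10.0, size=1000)      # finite population, N = 1000
N, n = pop.size, 25
S2 = pop.var(ddof=1)                         # population variance (N-1 divisor)

# Under SRSWOR, E[s^2] = S2, so (1 - n/N) * s^2 / n estimates the design
# variance of the sample mean without bias -- exactly the quantity a purely
# systematic sample cannot estimate unbiasedly.
true_var = (1 - n / N) * S2 / n
estimates = np.array([(1 - n / N) * rng.choice(pop, n, replace=False).var(ddof=1) / n
                      for _ in range(20000)])
print(f"true {true_var:.4f}  mean of estimates {estimates.mean():.4f}")
```

    Averaged over many samples, the estimator's mean is close to the true design variance, illustrating the unbiasedness the random-order assumption buys.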

  7. Cross-rater agreement on common and specific variance of personality scales and items

    OpenAIRE

    Mõttus, René; McCrae, Robert R.; Allik, Jüri; Realo, Anu

    2014-01-01

    Using the NEO Personality Inventory-3, we analyzed self/informant agreement on personality traits at three levels that were made statistically independent from each other: domains, facets, and individual items. Cross-rater correlations for the common variance in the five domains ranged from 0.36 to 0.65 (M = 0.49), whereas estimates for the specific variance of the 30 facets ranged from 0.40 to 0.73 (M = 0.56). Cross-rater correlations of residual variance of individual items ranged from -0.1...

  8. On the expected value and variance for an estimator of the spatio-temporal product density function

    DEFF Research Database (Denmark)

    Rodríguez-Corté, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge;

    Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus these methods provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function......, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed-form expressions are derived under the Poisson case. A detailed simulation study is presented...... to compare our closed-form expression for the variance with estimated ones for Poisson cases. The simulation experiments show that the theoretical form for the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...

  9. Variance partitioning of stream diatom, fish, and invertebrate indicators of biological condition

    Science.gov (United States)

    Zuellig, Robert E.; Carlisle, Daren M.; Meador, Michael R.; Potapova, Marina

    2012-01-01

    Stream indicators used to make assessments of biological condition are influenced by many possible sources of variability. To examine this issue, we used multiple-year and multiple-reach diatom, fish, and invertebrate data collected from 20 least-disturbed and 46 developed stream segments between 1993 and 2004 as part of the US Geological Survey National Water Quality Assessment Program. We used a variance-component model to summarize the relative and absolute magnitude of 4 variance components (among-site, among-year, site × year interaction, and residual) in indicator values (observed/expected ratio [O/E] and regional multimetric indices [MMI]) among assemblages and between basin types (least-disturbed and developed). We used multiple-reach samples to evaluate discordance in site assessments of biological condition caused by sampling variability. Overall, patterns in variance partitioning were similar among assemblages and basin types with one exception. Among-site variance dominated the relative contribution to the total variance (64–80% of total variance), residual variance (sampling variance) accounted for more variability (8–26%) than interaction variance (5–12%), and among-year variance was always negligible (0–0.2%). The exception to this general pattern was for invertebrates at least-disturbed sites where variability in O/E indicators was partitioned between among-site and residual (sampling) variance (among-site  =  36%, residual  =  64%). This pattern was not observed for fish and diatom indicators (O/E and regional MMI). We suspect that unexplained sampling variability is what largely remained after the invertebrate indicators (O/E predictive models) had accounted for environmental differences among least-disturbed sites. The influence of sampling variability on discordance of within-site assessments was assemblage or basin-type specific. Discordance among assessments was nearly 2× greater in developed basins (29–31%) than in least
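
    As a simplified illustration of the variance-component idea behind the study (a one-way, method-of-moments partition into among-site and residual components only; the study's model also includes year and site-by-year terms, and the data below are simulated):

```python
import numpy as np

# Simulate indicator values: 40 sites, 4 within-site replicate samples,
# with true among-site sd = 2 and residual (sampling) sd = 1.
rng = np.random.default_rng(3)
n_sites, n_reps = 40, 4
site_effects = rng.normal(0.0, 2.0, n_sites)
y = site_effects[:, None] + rng.normal(0.0, 1.0, (n_sites, n_reps))

# One-way ANOVA mean squares and their method-of-moments expectations:
ms_within = y.var(axis=1, ddof=1).mean()        # E = sigma_e^2
ms_among = n_reps * y.mean(axis=1).var(ddof=1)  # E = sigma_e^2 + n_reps*sigma_s^2
sigma_e2 = ms_within
sigma_s2 = (ms_among - ms_within) / n_reps
print(f"among-site: {sigma_s2:.2f}, residual: {sigma_e2:.2f}")
```

    With these simulated effects the estimates recover roughly 4 (among-site) and 1 (residual); large residual shares, as the study found for invertebrate O/E at least-disturbed sites, signal that sampling variability can drive discordant within-site assessments.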

  10. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Institute of Scientific and Technical Information of China (English)

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).

  11. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Directory of Open Access Journals (Sweden)

    Frank M. You

    2016-04-01

    Full Text Available The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).

  12. Using adapted budget cost variance techniques to measure the impact of Lean – based on empirical findings in Lean case studies

    DEFF Research Database (Denmark)

    Kristensen, Thomas Borup

    2015-01-01

    Lean is the dominating management philosophy, but the management accounting techniques that best support it are still not fully understood, especially how Lean fits traditional budget variance analysis, which is a main theme of every management accounting textbook. I have studied three Scandinavian excellent Lean performing companies and their development of budget variance analysis techniques. Based on these empirical findings, techniques are presented to calculate costs and cost variances in the Lean companies. First of all, a cost variance is developed to calculate the Lean cost benefits within… This is needed in Lean as the benefits are often created over multiple periods and not just within one budget period; traditional cost variance techniques are not able to trace these effects. Moreover, Time-driven ABC is adapted to fit the measurement of Lean improvement outside manufacturing and to facilitate…

  13. Fractal fluctuations and quantum-like chaos in the brain by analysis of variability of brain waves: A new method based on a fractal variance function and random matrix theory: A link with El Naschie fractal Cantorian space-time and V. Weiss and H. Weiss golden ratio in brain

    International Nuclear Information System (INIS)

    We develop a new method for analysis of fundamental brain waves as recorded by the EEG. To this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given. We also discuss the link of such formulation with H. Weiss and V. Weiss golden ratio found in the brain, and with El Naschie fractal Cantorian space-time theory.

  14. Some new construction methods of variance balanced block designs with repeated blocks

    OpenAIRE

    Ceranka, Bronisław; Graczyk, Małgorzata

    2014-01-01

    Some new construction methods of the variance balanced block designs with repeated blocks are given. They are based on the specialized product of incidence matrices of the balanced incomplete block designs.

  15. Compensatory variances of drug-induced hepatitis B virus YMDD mutations.

    Science.gov (United States)

    Cai, Ying; Wang, Ning; Wu, Xiaomei; Zheng, Kai; Li, Yan

    2016-01-01

    Although drug-induced mutations of HBV have been well documented, the evolutionary mechanism is still obscure. To reveal the molecular characteristics of HBV evolution under this special condition, we comprehensively investigated the molecular variation of 3432 wild-type sequences and 439 YMDD variants from HBV genotypes A, B, C and D, and evaluated the co-variant patterns and the frequency distribution in the different YMDD mutation types and genotypes, using the naïve Bayes classification algorithm and the complete induction method based on comparative sequence analysis. The data showed different compensatory changes following rtM204I/V. Although occurrence of the YMDD mutation itself was not related to the HBV genotypes, the subsequent co-variant patterns were related to the YMDD variant types and HBV genotypes. From a hierarchical view, we clarified the historical mutations, drug-induced mutations and compensatory variances, and displayed an inter-conditioned relationship of amino acid variances during multiple evolutionary processes. This study extends the understanding of the polymorphism and fitness of viral proteins. PMID:27588233

  16. Ambiguity Aversion and Variance Premium

    OpenAIRE

    Jianjun Miao; Bin Wei; Hao Zhou

    2012-01-01

    This paper offers an ambiguity-based interpretation of the variance premium - the difference between risk-neutral and objective expectations of market return variance - as a compounding effect of both belief distortion and variance differential regarding the uncertain economic regimes. Our approach endogenously generates a variance premium without imposing exogenous stochastic volatility or jumps in the consumption process. Such a framework can reasonably match the mean variance premium as well a...

  17. Intelligence and Language Proficiency as Sources of Variance in Self-Reported Affective Variables.

    Science.gov (United States)

    Oller, John W., Jr.; Perkins, Kyle

    1978-01-01

    Discusses three possible sources of nonrandom but extraneous variance in self-reported attitude data, and demonstrates that these data may be surreptitious measures of verbal intelligence and language proficiency. (Author/AM)

  18. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Directory of Open Access Journals (Sweden)

    Zdravko Kačič

    2009-01-01

    Full Text Available This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE. The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.
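
    A rough sketch of a VMFBE-style computation as we read the abstract (not the authors' exact recipe: the filter bank here is a crude linear one rather than their design, and all parameters are illustrative):

```python
import numpy as np

def vmfbe(signal, n_fft=512, hop=160, n_bands=24, win_frames=50):
    """Variance mean of filter-bank energy: per-band log-energy variance
    over a sliding window of frames, averaged across bands."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    # crude linear filter bank: average adjacent FFT bins into n_bands bands
    edges = np.linspace(0, spec.shape[1], n_bands + 1, dtype=int)
    bands = np.stack([spec[:, a:b].mean(axis=1)
                      for a, b in zip(edges[:-1], edges[1:])], axis=1)
    loge = np.log(bands + 1e-10)
    return np.array([loge[t:t + win_frames].var(axis=0).mean()
                     for t in range(loge.shape[0] - win_frames + 1)])

# Energy in speech fluctuates faster and more strongly than in music, so the
# feature should be larger for a bursty signal than for a steady tone.
t = np.arange(32000) / 16000.0
rng = np.random.default_rng(1)
speech_like = rng.normal(size=t.size) * (0.2 + np.abs(np.sin(2 * np.pi * 3 * t)))
music_like = np.sin(2 * np.pi * 440 * t)
print(vmfbe(speech_like).mean(), vmfbe(music_like).mean())
```

    Thresholding this feature is the essence of the discriminator; sharing the framing and filter-bank steps with MFCC extraction is what makes the feature cheap in an MFCC-based recognizer.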

  19. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Science.gov (United States)

    Kos, Marko; Grašič, Matej; Kačič, Zdravko

    2009-12-01

    This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE). The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.

  20. Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.

    Science.gov (United States)

    He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne

    2016-04-01

    Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can, however, be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more

  1. The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles

    Science.gov (United States)

    Roberts, D. A.; Goldstein, M. L.

    2006-01-01

    We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum to minimum power (approximately equal to 3:1 up to approximately equal to 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
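
    Minimum-variance analysis of the kind used in such studies reduces to an eigen-decomposition of the field's covariance matrix: the eigenvector with the smallest eigenvalue is the minimum-variance direction, and the eigenvalue ratio gives the power anisotropy. A toy sketch with synthetic data (not the paper's dataset; the fluctuation amplitudes are invented):

```python
import numpy as np

# Synthetic B-field time series: fluctuations mostly transverse to a mean
# field along z, mimicking the solar-wind geometry discussed above.
rng = np.random.default_rng(42)
n = 5000
B = np.column_stack([
    1.0 * rng.normal(size=n),          # large variance along x
    0.5 * rng.normal(size=n),          # intermediate variance along y
    5.0 + 0.1 * rng.normal(size=n),    # mean field + small variance along z
])

cov = np.cov(B, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
min_var_dir = eigvecs[:, 0]                # minimum-variance eigenvector
anisotropy = eigvals[-1] / eigvals[0]      # max-to-min power ratio
print(np.round(np.abs(min_var_dir), 2), round(float(anisotropy), 1))
```

    Here the minimum-variance direction recovers the mean-field (z) axis; the scale dependence the paper reports corresponds to repeating this on intervals of different length.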

  2. Additive and nonadditive genetic variances for milk yield, fertility, and lifetime performance traits of dairy cattle.

    Science.gov (United States)

    Fuerst, C; Sölkner, J

    1994-04-01

    Additive and nonadditive genetic variances were estimated for yield traits and fertility for three subsequent lactations and for lifetime performance traits of purebred and crossbred dairy cattle populations. Traits were milk yield, energy-corrected milk yield, fat percentage, protein percentage, calving interval, length of productive life, and lifetime FCM of purebred Simmental, Simmental including crossbreds, and Braunvieh crossed with Brown Swiss. Data files ranged from 66,740 to 375,093 records. An approach based on pedigree information for sire and maternal grandsire was used and included additive, dominance, and additive by additive genetic effects. Variances were estimated using the tildehat approximation to REML. Heritability estimated without nonadditive effects in the model was overestimated, particularly in presence of additive by additive variance. Dominance variance was important for most traits; for the lifetime performance traits, dominance was clearly higher than additive variance. Additive by additive variance was very high for milk yield and energy-corrected milk yield, especially for data including crossbreds. Effect of inbreeding was low in most cases. Inclusion of nonadditive effects in genetic evaluation models might improve estimation of additive effects and may require consideration for dairy cattle breeding programs.

  3. 78 FR 30914 - Grand River Dam Authority Notice of Application for Temporary Variance of License and Soliciting...

    Science.gov (United States)

    2013-05-23

    ... Variance of License and Soliciting Comments, Motions To Intervene and Protests Take notice that the... inspection: a. Application Type: Temporary variance of license. b. Project No.: 1494-416. c. Date Filed... of Request: Grand River Dam Authority (GRDA) requests a temporary variance, for the year 2013,...

  4. Measurement Error Variance of Test-Day Obervations from Automatic Milking Systems

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik S;

    2012-01-01

    Automated milking systems (AMS) are becoming more popular in dairy farms. In this paper we present an approach for estimation of residual error covariance matrices for AMS and conventional milking system (CMS) observations. The variances for other random effects are kept as defined...... in the evaluation model. AMS residual variances were found to be 16 to 37 percent smaller for milk and protein yield and 42 to 47 percent larger for fat yield compared to CMS...

  5. Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method

    Directory of Open Access Journals (Sweden)

    Younes Elahi

    2014-01-01

    Full Text Available We propose a new approach to optimizing portfolios with the mean-variance-CVaR (MVC) model. Although several studies have examined optimal MVC portfolio models, the linear weighted sum method (LWSM) had not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, this approach is investigated for investing in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using three objective functions helps investors to manage their portfolio better, thereby minimizing the risk and maximizing the return of the portfolio. The main goal of this study is to modify current models and simplify them by using LWSM to obtain better results.
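
    A minimal sketch of the linear weighted sum idea applied to the three MVC objectives (our own Python illustration rather than the paper's MATLAB implementation; the two-asset return distribution and objective weights are made-up):

```python
import numpy as np

# Simulated return scenarios for two hypothetical assets.
rng = np.random.default_rng(7)
scenarios = rng.multivariate_normal([0.10, 0.06],
                                    [[0.05, 0.01], [0.01, 0.02]], size=10000)

def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: mean loss in the worst (1-alpha) tail."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def scalarized(w1, lambdas=(1.0, 1.0, 1.0)):
    """Linear weighted sum of (-mean, variance, CVaR) for portfolio (w1, 1-w1)."""
    w = np.array([w1, 1.0 - w1])
    r = scenarios @ w
    l1, l2, l3 = lambdas
    return -l1 * r.mean() + l2 * r.var() + l3 * cvar(-r)

# Minimize the scalarized objective over a grid of portfolio weights.
grid = np.linspace(0.0, 1.0, 101)
best = grid[np.argmin([scalarized(w) for w in grid])]
print(f"best weight on asset 1: {best:.2f}")
```

    Varying the weight vector `lambdas` traces different points on the multiobjective (Pareto) frontier, which is the role LWSM plays in the paper.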

  6. Statistical modelling of tropical cyclone tracks: a comparison of models for the variance of trajectories

    CERN Document Server

    Hall, T; Hall, Tim; Jewson, Stephen

    2005-01-01

    We describe results from the second stage of a project to build a statistical model for hurricane tracks. In the first stage we modelled the unconditional mean track. We now attempt to model the unconditional variance of fluctuations around the mean. The variance models we describe use a semi-parametric nearest neighbours approach in which the optimal averaging length-scale is estimated using a jack-knife out-of-sample fitting procedure. We test three different models. These models consider the variance structure of the deviations from the unconditional mean track to be isotropic, anisotropic but uncorrelated, and anisotropic and correlated, respectively. The results show that, of these models, the anisotropic correlated model gives the best predictions of the distribution of future positions of hurricanes.
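
The jack-knife procedure mentioned above works by deleting observations one at a time and refitting. A generic delete-one jackknife variance estimate (a sketch of the resampling idea only, not the authors' out-of-sample length-scale fitting) looks like:

```python
import statistics

def jackknife_variance(data, estimator):
    """Delete-one jackknife estimate of the variance of `estimator`."""
    n = len(data)
    reps = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    rep_mean = sum(reps) / n
    return (n - 1) / n * sum((r - rep_mean) ** 2 for r in reps)

data = [2.0, 4.0, 6.0, 8.0]
# For the sample mean, the jackknife variance equals s^2 / n exactly.
jk_var = jackknife_variance(data, statistics.fmean)
```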

  7. Detection of rheumatoid arthritis by evaluation of normalized variances of fluorescence time correlation functions

    Science.gov (United States)

    Dziekan, Thomas; Weissbach, Carmen; Voigt, Jan; Ebert, Bernd; MacDonald, Rainer; Bahner, Malte L.; Mahler, Marianne; Schirner, Michael; Berliner, Michael; Berliner, Birgitt; Osel, Jens; Osel, Ilka

    2011-07-01

    Fluorescence imaging using the dye indocyanine green as a contrast agent was investigated in a prospective clinical study for the detection of rheumatoid arthritis. Normalized variances of correlated time series of fluorescence intensities describing the bolus kinetics of the contrast agent in certain regions of interest were analyzed to differentiate healthy from inflamed finger joints. These values are determined using a robust, parameter-free algorithm. We found that the normalized variance of correlation functions improves the differentiation between healthy joints of volunteers and joints with rheumatoid arthritis of patients by about 10% compared to, e.g., ratios of areas under the curves of raw data.

  8. Semilogarithmic Nonuniform Vector Quantization of Two-Dimensional Laplacean Source for Small Variance Dynamics

    Directory of Open Access Journals (Sweden)

    Z. Peric

    2012-04-01

    Full Text Available In this paper, a high dynamic range nonuniform two-dimensional vector quantization model for a Laplacean source is provided. The semilogarithmic A-law compression characteristic is used as the radial scalar compression characteristic of the two-dimensional vector quantization. The optimal number of concentric quantization domains (amplitude levels) is expressed as a function of the parameter A. An exact distortion analysis with closed-form expressions is provided. It is shown that the proposed model provides high SQNR values over a wide range of variances and exceeds the quality obtained by scalar A-law quantization at the same bit rate, so it can be used in various switching and adaptation implementations for the realization of high-quality signal compression.
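
The semilogarithmic A-law characteristic used here as the radial compressor has a standard closed form. A sketch of that scalar characteristic (A = 87.6 is the common telephony default, an assumption on our part; in the vector quantizer it would be applied to the vector magnitude):

```python
import math

def a_law_compress(x, A=87.6):
    """Semilogarithmic A-law compression characteristic for |x| <= 1:
    linear below the knee at 1/A, logarithmic above it."""
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / (1.0 + math.log(A))
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))
    return math.copysign(y, x)
```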

  9. Optimal Investment and Consumption Decisions under the Constant Elasticity of Variance Model

    Directory of Open Access Journals (Sweden)

    Hao Chang

    2013-01-01

    Full Text Available We consider an investment and consumption problem under the constant elasticity of variance (CEV model, which is an extension of the original Merton’s problem. In the proposed model, stock price dynamics is assumed to follow a CEV model and our goal is to maximize the expected discounted utility of consumption and terminal wealth. Firstly, we apply dynamic programming principle to obtain the Hamilton-Jacobi-Bellman (HJB equation for the value function. Secondly, we choose power utility and logarithm utility for our analysis and apply variable change technique to obtain the closed-form solutions to the optimal investment and consumption strategies. Finally, we provide a numerical example to illustrate the effect of market parameters on the optimal investment and consumption strategies.
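
The CEV price dynamics dS = μS dt + σ S^β dW underlying this problem can be simulated by Euler-Maruyama discretization. This is a sketch of the asset process only — not the paper's HJB solution — and the parameter values below are illustrative:

```python
import random

def simulate_cev(s0, mu, sigma, beta, T=1.0, steps=252, rng=None):
    """One Euler-Maruyama path of dS = mu*S dt + sigma*S**beta dW."""
    rng = rng or random.Random(0)
    dt = T / steps
    s = s0
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)
        s += mu * s * dt + sigma * s ** beta * dw
        s = max(s, 0.0)  # absorb at zero so s**beta stays real
    return s
```

With sigma = 0 the recursion is deterministic compounding, which gives a quick sanity check of the discretization; beta = 1 recovers geometric-Brownian-type dynamics.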

  10. 40 CFR 142.309 - What are the public meeting requirements associated with the proposal of a small system variance?

    Science.gov (United States)

    2010-07-01

    ... requirements associated with the proposal of a small system variance? 142.309 Section 142.309 Protection of... WATER REGULATIONS IMPLEMENTATION Variances for Small System Public Participation § 142.309 What are the public meeting requirements associated with the proposal of a small system variance? (a) A State or...

  11. Application of variance reduction techniques in Monte Carlo simulation of clinical electron linear accelerator

    Science.gov (United States)

    Zoubair, M.; El Bardouni, T.; El Gonnouni, L.; Boulaich, Y.; El Bakkari, B.; El Younoussi, C.

    2012-01-01

    Computation time constitutes an important and problematic parameter in Monte Carlo simulations: it is inversely proportional to the statistical errors, which motivates the use of variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed; the best known are transport cutoffs, interaction forcing, bremsstrahlung splitting and Russian roulette. The use of a phase space also appears appropriate for greatly reducing the computing time. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code, which offers a rich palette of variance reduction techniques. In this study we investigated various cards related to the variance reduction techniques provided by MCNPX. The parameters found in this study are warranted to be used efficiently in the MCNPX code. Final calculations are performed in two steps that are linked by a phase space. Results show that, compared to direct simulations (with neither variance reduction nor a phase space), the adopted method improves the simulation efficiency by a factor greater than 700.
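
Among the techniques listed, importance sampling shows the variance-reduction principle most directly. A toy sketch — a Gaussian tail probability, nothing to do with the MCNPX LINAC setup of the record — comparing the per-sample variance of crude Monte Carlo against sampling from a mean-shifted proposal:

```python
import math
import random
import statistics

def tail_prob_estimates(n=10000, t=3.0, seed=1):
    """Estimate p = P(Z > t), Z ~ N(0,1), by crude Monte Carlo and by
    importance sampling from the shifted proposal N(t, 1)."""
    rng = random.Random(seed)
    crude = [1.0 if rng.gauss(0.0, 1.0) > t else 0.0 for _ in range(n)]
    # Likelihood ratio phi(z) / phi(z - t) = exp(-t*z + t*t/2).
    shifted = [rng.gauss(t, 1.0) for _ in range(n)]
    weighted = [(1.0 if z > t else 0.0) * math.exp(-t * z + t * t / 2)
                for z in shifted]
    return (statistics.fmean(crude), statistics.pvariance(crude),
            statistics.fmean(weighted), statistics.pvariance(weighted))

p_mc, v_mc, p_is, v_is = tail_prob_estimates()
```

The importance-sampling contributions concentrate near the rare event, so their per-sample variance is far below the crude Bernoulli variance p(1 - p) while both estimators remain unbiased.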

  12. Splitting the variance of statistical learning performance: A parametric investigation of exposure duration and transitional probabilities.

    Science.gov (United States)

    Bogaerts, Louisa; Siegelman, Noam; Frost, Ram

    2016-08-01

    What determines individuals' efficacy in detecting regularities in visual statistical learning? Our theoretical starting point assumes that the variance in performance of statistical learning (SL) can be split into the variance related to efficiency in encoding representations within a modality and the variance related to the relative computational efficiency of detecting the distributional properties of the encoded representations. Using a novel methodology, we dissociated encoding from higher-order learning factors, by independently manipulating exposure duration and transitional probabilities in a stream of visual shapes. Our results show that the encoding of shapes and the retrieving of their transitional probabilities are not independent and additive processes, but interact to jointly determine SL performance. The theoretical implications of these findings for a mechanistic explanation of SL are discussed. PMID:26743060

  13. 25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false How does a gaming operation apply for a variance from the... variance from the standards of the part? (a) Tribal gaming regulatory authority approval. (1) A Tribal gaming regulatory authority may approve a variance for a gaming operation if it has determined that...

  14. Estimation of bias and variance of measurements made from tomography scans

    Science.gov (United States)

    Bradley, Robert S.

    2016-09-01

    Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
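
The simulation-extrapolation (SIMEX) idea can be sketched in a simpler setting than tomography: re-add measurement noise at several levels λ, fit the naive estimate as a function of λ, and extrapolate back to λ = -1. The noise level σ_u, the linear extrapolant, and the variance-of-a-noisy-signal example below are assumptions of this toy version, not the paper's scan-measurement pipeline:

```python
import random
import statistics

def simex_variance(w, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), reps=200, seed=0):
    """SIMEX sketch for the variance of a signal observed with additive
    noise of known variance sigma_u**2. Since
    Var(W_lambda) = Var(X) + (1 + lambda) * sigma_u**2 is linear in lambda,
    a straight-line extrapolation to lambda = -1 removes the noise bias."""
    rng = random.Random(seed)
    xs, ys = [0.0], [statistics.pvariance(w)]  # the lambda = 0 point
    for lam in lambdas:
        vs = []
        for _ in range(reps):
            w_lam = [wi + rng.gauss(0.0, (lam ** 0.5) * sigma_u) for wi in w]
            vs.append(statistics.pvariance(w_lam))
        xs.append(lam)
        ys.append(statistics.fmean(vs))
    # Least-squares line through (lambda, variance); evaluate at lambda = -1.
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return ybar + slope * (-1.0 - xbar)
```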

  15. Nonparametric Estimation of Mean and Variance and Pricing of Securities

    Directory of Open Access Journals (Sweden)

    Akhtar R. Siddique

    2000-03-01

    Full Text Available This paper develops a filtering-based framework of non-parametric estimation of parameters of a diffusion process from the conditional moments of discrete observations of the process. This method is implemented for interest rate data in the Eurodollar and long term bond markets. The resulting estimates are then used to form non-parametric univariate and bivariate interest rate models and compute prices for the short term Eurodollar interest rate futures options and long term discount bonds. The bivariate model produces prices substantially closer to the market prices.

  16. Lower within-community variance of negative density dependence increases forest diversity.

    Directory of Open Access Journals (Sweden)

    António Miranda

    Full Text Available Local abundance of adult trees impedes growth of conspecific seedlings through host-specific enemies, a mechanism first proposed by Janzen and Connell to explain plant diversity in forests. While several studies suggest the importance of this mechanism, there is still little information on how the variance of negative density dependence (NDD) affects the diversity of forest communities. With computer simulations, we analyzed the impact of the strength and variance of NDD within tree communities on species diversity. We show that stronger NDD leads to higher species diversity. Furthermore, a narrower range of NDD strengths within a community increases species richness and decreases the variance of species abundances. Our results show that, beyond the average strength of NDD, the variance of NDD is also crucially important for explaining species diversity. This may explain the dissimilarity in biodiversity between tropical and temperate forests: highly diverse forests could have lower NDD variance. This report suggests that natural enemies, and the variety of the magnitudes of their effects, can contribute to the maintenance of biodiversity.

  17. Variance, Violence, and Democracy: A Basic Microeconomic Model of Terrorism

    Directory of Open Access Journals (Sweden)

    John A. Sautter

    2010-01-01

    Full Text Available Much of the debate surrounding contemporary studies of terrorism focuses upon transnational terrorism. However, historical and contemporary evidence suggests that domestic terrorism is a more prevalent and pressing concern. A formal microeconomic model of terrorism is utilized here to understand acts of political violence in a domestic context within the domain of democratic governance. This article builds a very basic microeconomic model of terrorist decision making to hypothesize how a democratic government might influence the sorts of strategies that terrorists use. Mathematical models have been used to explain terrorist behavior in the past. However, the bulk of inquiries in this area have focused only on the relationship between terrorists and the government, or amongst terrorists themselves. Central to the interpretation of the terrorist conflict presented here is the idea that voters (or citizens) are also one of the important determinants of how a government will respond to acts of terrorism.

  18. Occupation time fluctuation limits of infinite variance equilibrium branching systems

    OpenAIRE

    Milos, Piotr

    2008-01-01

    We establish limit theorems for the fluctuations of the rescaled occupation time of a $(d,\\alpha,\\beta)$-branching particle system. It consists of particles moving according to a symmetric $\\alpha$-stable motion in $\\mathbb{R}^d$. The branching law is in the domain of attraction of a (1+$\\beta$)-stable law and the initial condition is an equilibrium random measure for the system (defined below). In the paper we treat separately the cases of intermediate $\\alpha/\\beta

  19. Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models

    Science.gov (United States)

    Yoder, Dennis A.

    2016-01-01

    This paper will include a detailed comparison of heat transfer models that rely upon the thermal diffusivity. The goals are to inform users of the development history of the various models and the resulting differences in model formulations, as well as to evaluate the models on a variety of validation cases so that users might better understand which models are more broadly applicable.

  20. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A;

    2007-01-01

    possible common genetic factors of the traits. CONCLUSIONS: The heritabilities of apolipoprotein B and E, cholesterol, LDL, and high density lipoprotein (HDL) were significant in the general population, although gender-specific levels and significance were detected. Heritabilities of apolipoprotein A1...

  2. Partitioning of genomic variance using prior biological information

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per;

    2013-01-01

    to open the “black box” by building SNP set genomic models that evaluate the collective action of multiple SNPs in genes, biological pathways or other external biological findings on the trait phenotype. As a proof of concept we have tested the modelling framework on susceptibility to mastitis in dairy...... variants influence complex diseases. Despite the successes, the variants identified as being statistically significant have generally explained only a small fraction of the heritable component of the trait, the so-called problem of missing heritability. Insufficient modelling of the underlying genetic...... that the associated genetic variants are enriched for genes that are connected in biol ogical pathways or for likely functional effects on genes. These biological findings provide valuable insight for developing better genomic models. These are statistical models for predicting complex trait phenotypes on the basis...

  3. Time Variability of Quasars: the Structure Function Variance

    CERN Document Server

    MacLeod, C; De Vries, W; Sesar, B; Becker, A

    2008-01-01

    Significant progress in the description of quasar variability has been recently made by employing SDSS and POSS data. Common to most studies is a fundamental assumption that photometric observations at two epochs for a large number of quasars will reveal the same statistical properties as well-sampled light curves for individual objects. We critically test this assumption using light curves for a sample of $\\sim$2,600 spectroscopically confirmed quasars observed about 50 times on average over 8 years by the SDSS stripe 82 survey. We find that the dependence of the mean structure function computed for individual quasars on luminosity, rest-frame wavelength and time is qualitatively and quantitatively similar to the behavior of the structure function derived from two-epoch observations of a much larger sample. We also reproduce the result that the variability properties of radio and X-ray selected subsamples are different. However, the scatter of the variability structure function for fixed values of luminosity...

  4. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, K.; Sørensen, T.I.A.;

    2007-01-01

    Diffusion weighted imaging (DWI) and tractography allow the non-invasive study of anatomical brain connectivity. However, a gold standard for validating tractography of complex connections is lacking. Using the porcine brain as a highly gyrated brain model, we quantitatively and qualitatively...

  5. Examination of Academic Self-Regulation Variances in Nursing Students

    Science.gov (United States)

    Schutt, Michelle A.

    2009-01-01

    Multiple workforce demands in healthcare have placed a tremendous amount of pressure on academic nurse educators to increase the number of professional nursing graduates to provide nursing care in both acute and non-acute healthcare settings. Increased enrollment in nursing programs throughout the United States is occurring; however, due to…

  6. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per;

    2012-01-01

    traits such as mammary disease traits in dairy cattle. METHODS: Data on progeny means of six traits related to mastitis resistance in dairy cattle (general mastitis resistance and five pathogen-specific mastitis resistance traits) were analyzed using a bivariate Bayesian SNP-based genomic model......)variances of mastitis resistance traits in dairy cattle using multivariate genomic models......., per chromosome, and in regions of 100 SNP on a chromosome. RESULTS: Genomic proportions of the total variance differed between traits. Genomic correlations were lower than pedigree-based genetic correlations and they were highest between general mastitis and pathogen-specific traits because...

  7. The genetic and environmental roots of variance in negativity toward foreign nationals.

    Science.gov (United States)

    Kandler, Christian; Lewis, Gary J; Feldhaus, Lea Henrike; Riemann, Rainer

    2015-03-01

    This study quantified genetic and environmental roots of variance in prejudice and discriminatory intent toward foreign nationals and examined potential mediators of these genetic influences: right-wing authoritarianism (RWA), social dominance orientation (SDO), and narrow-sense xenophobia (NSX). In line with the dual process motivational (DPM) model, we predicted that the two basic attitudinal and motivational orientations-RWA and SDO-would account for variance in out-group prejudice and discrimination. In line with other theories, we expected that NSX as an affective component would explain additional variance in out-group prejudice and discriminatory intent. Data from 1,397 individuals (incl. twins as well as their spouses) were analyzed. Univariate analyses of twins' and spouses' data yielded genetic (incl. contributions of assortative mating) and multiple environmental sources (i.e., social homogamy, spouse-specific, and individual-specific effects) of variance in negativity toward strangers. Multivariate analyses suggested an extension to the DPM model by including NSX in addition to RWA and SDO as predictor of prejudice and discrimination. RWA and NSX primarily mediated the genetic influences on the variance in prejudice and discriminatory intent toward foreign nationals. In sum, the findings provide the basis of a behavioral genetic framework integrating different scientific disciplines for the study of negativity toward out-groups. PMID:25534512

  8. Mean-Variance Efficiency of the Market Portfolio

    Directory of Open Access Journals (Sweden)

    Rafael Falcão Noda

    2014-06-01

    Full Text Available The objective of this study is to answer the criticism of the CAPM based on findings that the market portfolio is far from the efficient frontier. We run a numeric optimization model based on Brazilian stock market data from 2003 to 2012. For each asset, we obtain adjusted returns and standard deviations such that (i) the efficient frontier intersects with the market portfolio and (ii) the distance between the adjusted parameters and the sample parameters is minimized. We conclude that the adjusted parameters are not significantly different from the sample parameters, in line with the results of Levy and Roll (2010) for the USA stock market. Such results suggest that the imprecisions in the implementation of the CAPM stem mostly from parameter estimation errors and that other explanatory factors for returns may have low relevance. Therefore, our results contradict the above-mentioned criticisms of the CAPM in Brazil.
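
In the two-asset case, the efficient-frontier machinery behind such studies reduces to a closed form for the minimum-variance weights. A textbook sketch with made-up variances and covariance — not the paper's Brazilian-market optimization:

```python
def min_variance_weights(var1, var2, cov12):
    """Closed-form weights of the two-asset minimum-variance portfolio:
    w1 = (var2 - cov12) / (var1 + var2 - 2*cov12), w2 = 1 - w1."""
    w1 = (var2 - cov12) / (var1 + var2 - 2.0 * cov12)
    return w1, 1.0 - w1

def portfolio_variance(w1, var1, var2, cov12):
    """Variance of the portfolio (w1, 1 - w1)."""
    w2 = 1.0 - w1
    return w1 * w1 * var1 + w2 * w2 * var2 + 2.0 * w1 * w2 * cov12

# Illustrative inputs: sigma1 = 20%, sigma2 = 30%, correlation = 0.1.
w1, w2 = min_variance_weights(0.04, 0.09, 0.006)
```

Perturbing w1 in either direction must increase the portfolio variance, which verifies that the closed form sits at the minimum.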

  9. Investigations of oligonucleotide usage variance within and between prokaryotes

    DEFF Research Database (Denmark)

    Bohlin, J.; Skjerve, E.; Ussery, David

    2008-01-01

    of different DNA 'word-sizes' and explore how oligonucleotide frequencies differ in coding and non-coding regions. In addition, we used oligonucleotide frequencies to investigate DNA composition and how DNA sequence patterns change within and between prokaryotic organisms. Among the results found...

  10. 75 FR 22424 - Avalotis Corp.; Grant of a Permanent Variance

    Science.gov (United States)

    2010-04-28

    ... chimney construction to raise or lower workers in personnel cages, personnel platforms, and boatswain's... OSHA-approved State Plans covering private-sector employers that have identical standards and agree to.... Background In the past 36 years, a number of chimney construction companies demonstrated to OSHA that...

  11. Reproducing static and dynamic biodiversity patterns in tropical forests: the critical role of environmental variance.

    Science.gov (United States)

    Fung, Tak; O'Dwyer, James P; Rahman, Kassim Abd; Fletcher, Christine D; Chisholm, Ryan A

    2016-05-01

    Ecological communities are subjected to stochasticity in the form of demographic and environmental variance. Stochastic models that contain only demographic variance (neutral models) provide close quantitative fits to observed species-abundance distributions (SADs) but substantially underestimate observed temporal species-abundance fluctuations. To provide a holistic assessment of whether models with demographic and environmental variance perform better than neutral models, the fit of both to SADs and temporal species-abundance fluctuations at the same time has to be tested quantitatively. In this study, we quantitatively test how closely a model with demographic and environmental variance reproduces total numbers of species, total abundances, SADs and temporal species-abundance fluctuations for two tropical forest tree communities, using decadal data from long-term monitoring plots and considering individuals larger than two size thresholds for each community. We find that the model can indeed closely reproduce these static and dynamic patterns of biodiversity in the two communities for the two size thresholds, with better overall fits than corresponding neutral models. Therefore, our results provide evidence that stochastic models incorporating demographic and environmental variance can simultaneously capture important static and dynamic biodiversity patterns arising in tropical forest communities. PMID:27349097

  12. The Quantum Allan Variance

    OpenAIRE

    Chabuda, Krzysztof; Leroux, Ian; Demkowicz-Dobrzanski, Rafal

    2016-01-01

    In atomic clocks, the frequency of a local oscillator is stabilized based on the feedback signal obtained by periodically interrogating an atomic reference system. The instability of the clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbi...
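
The classical (non-overlapping) Allan variance that the quantum bound generalizes is simple to compute from fractional-frequency samples: half the mean squared difference between successive block averages. A sketch of that standard estimator only — not the paper's quantum bound:

```python
def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (tau = m * tau0)."""
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - len(y) % m, m)]
    diffs = [(b2 - b1) ** 2 for b1, b2 in zip(blocks, blocks[1:])]
    return 0.5 * sum(diffs) / len(diffs)
```

A perfectly stable oscillator (constant frequency) has zero Allan variance at every averaging factor, which makes a convenient sanity check.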

  13. Asymptotic behavior of the variance of the EWMA statistic for autoregressive processes

    OpenAIRE

    Vermaat, T.M.B.; Meulen, van der, N.; Does, R.J.M.M.

    2010-01-01

  14. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.

  15. Quantum variance: A measure of quantum coherence and quantum correlations for many-body systems

    Science.gov (United States)

    Frérot, Irénée; Roscilde, Tommaso

    2016-08-01

    Quantum coherence is a fundamental common trait of quantum phenomena, from the interference of matter waves to quantum degeneracy of identical particles. Despite its importance, estimating and measuring quantum coherence in generic, mixed many-body quantum states remains a formidable challenge, with fundamental implications in areas as broad as quantum condensed matter, quantum information, quantum metrology, and quantum biology. Here, we provide a quantitative definition of the variance of quantum coherent fluctuations (the quantum variance) of any observable on generic quantum states. The quantum variance generalizes the concept of thermal de Broglie wavelength (for the position of a free quantum particle) to the space of eigenvalues of any observable, quantifying the degree of coherent delocalization in that space. The quantum variance is generically measurable and computable as the difference between the static fluctuations and the static susceptibility of the observable; despite its simplicity, it is found to provide a tight lower bound to most widely accepted estimators of "quantumness" of observables (both as a feature as well as a resource), such as the Wigner-Yanase skew information and the quantum Fisher information. When considering bipartite fluctuations in an extended quantum system, the quantum variance expresses genuine quantum correlations among the two parts. In the case of many-body systems, it is found to obey an area law at finite temperature, extending therefore area laws of entanglement and quantum fluctuations of pure states to the mixed-state context. Hence the quantum variance paves the way to the measurement of macroscopic quantum coherence and quantum correlations in most complex quantum systems.

  16. Maximum Posterior Adjustment of Extended Network and Estimation Formulae of Its Variance Components

    Institute of Scientific and Technical Information of China (English)

    汪善根

    2001-01-01

    This paper derives the maximum posterior adjustment formulae for the extended network and the estimation formulae for variance components of the Helmert, Welsch and Förstner types when two types of uncorrelated observations are present, thereby completing the theory of maximum posterior adjustment.

  17. Estimation of Genetic Variance Components Including Mutation and Epistasis using Bayesian Approach in a Selection Experiment on Body Weight in Mice

    DEFF Research Database (Denmark)

    Widyas, Nuzul; Jensen, Just; Nielsen, Vivi Hunnicke

    selected downwards and three lines were kept as controls. Bayesian statistical methods were used to estimate the genetic variance components. The mixed model analysis was modified to include a mutation effect following the method of Wray (1990), and DIC was used to compare models. Models including a mutation effect...... have a better fit compared to the model with only an additive effect. Mutation as a direct effect contributes 3.18% of the total phenotypic variance, while in the model with interactions between additive and mutation effects, it contributes 1.43% as a direct effect and 1.36% as an interaction effect of the total variance...

  18. Variance-reduced particle simulation of the Boltzmann transport equation in the relaxation-time approximation.

    Science.gov (United States)

    Radtke, Gregg A; Hadjiconstantinou, Nicolas G

    2009-05-01

    We present an efficient variance-reduced particle simulation technique for solving the linearized Boltzmann transport equation in the relaxation-time approximation used for phonon, electron, and radiative transport, as well as for kinetic gas flows. The variance reduction is achieved by simulating only the deviation from equilibrium. We show that in the limit of small deviation from equilibrium of interest here, the proposed formulation achieves low relative statistical uncertainty that is also independent of the magnitude of the deviation from equilibrium, in stark contrast to standard particle simulation methods. Our results demonstrate that a space-dependent equilibrium distribution improves the variance reduction achieved, especially in the collision-dominated regime where local equilibrium conditions prevail. We also show that by exploiting the physics of relaxation to equilibrium inherent in the relaxation-time approximation, a very simple collision algorithm with a clear physical interpretation can be formulated. PMID:19518597
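
The deviational idea — simulating only the departure from a known equilibrium — can be illustrated outside kinetic theory with a control-variate analogue. The integrand, the linear "equilibrium" term, and the uniform toy setting below are our assumptions, not the authors' Boltzmann solver:

```python
import math
import random
import statistics

def estimate_with_deviation(n=5000, seed=7):
    """Estimate E[exp(U)], U ~ Uniform(0,1), two ways: directly, and by
    simulating only the deviation from the 'equilibrium' term 1 + U,
    whose expectation 1.5 is known exactly."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    direct = [math.exp(u) for u in us]
    deviation = [math.exp(u) - (1.0 + u) for u in us]  # small residual only
    est_direct = statistics.fmean(direct)
    est_dev = 1.5 + statistics.fmean(deviation)
    return (est_direct, statistics.pvariance(direct),
            est_dev, statistics.pvariance(deviation))
```

Both estimators target E[exp(U)] = e - 1, but the residual fluctuates on a much smaller scale than the full integrand, mirroring how simulating only the deviation from equilibrium keeps the relative statistical error small.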

  19. EMPIRICAL COMPARISON OF VARIOUS APPROXIMATE ESTIMATORS OF THE VARIANCE OF HORVITZ THOMPSON ESTIMATOR UNDER SPLIT METHOD OF SAMPLING

    Directory of Open Access Journals (Sweden)

    Neeraj Tiwari

    2014-06-01

    Full Text Available Under inclusion probability proportional to size (IPPS) sampling, the exact second-order inclusion probabilities are often very difficult to obtain, and hence the variance of the Horvitz-Thompson estimator and the Sen-Yates-Grundy estimate of that variance are difficult to compute. Researchers have therefore developed alternative variance estimators based on approximations of the second-order inclusion probabilities in terms of the first-order inclusion probabilities. We numerically compare the performance of these alternative approximate variance estimators under the split method of sample selection.
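For reference, the quantities being approximated can be sketched directly. The design below (three sampled units with assumed first- and second-order inclusion probabilities) is a hypothetical toy example; the approximate estimators the paper compares replace the joint probabilities pi_ij with functions of the first-order pi_i alone:

```python
from itertools import combinations

# Hypothetical sample: values y_i, first-order inclusion probabilities pi_i,
# and (assumed known here) joint inclusion probabilities pi_ij.
y  = {1: 12.0, 2: 30.0, 3: 25.0}
pi = {1: 0.30, 2: 0.50, 3: 0.40}
pi_joint = {(1, 2): 0.12, (1, 3): 0.10, (2, 3): 0.18}

# Horvitz-Thompson estimator of the population total.
ht_total = sum(y[i] / pi[i] for i in y)

# Sen-Yates-Grundy variance estimate (for fixed-size designs).
syg_var = sum(
    (pi[i] * pi[j] - pi_joint[(i, j)]) / pi_joint[(i, j)]
    * (y[i] / pi[i] - y[j] / pi[j]) ** 2
    for i, j in combinations(sorted(y), 2)
)
print(ht_total, syg_var)  # 162.5, ~201.94
```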

  20. Explicit Representation of the Minimal Variance Portfolio in Markets driven by Lévy Processes.

    OpenAIRE

    2001-01-01

    In a market driven by a Lévy martingale, we consider a claim x. We study the problem of minimal variance hedging and we give an explicit formula for the minimal variance portfolio in terms of Malliavin derivatives. We discuss two types of stochastic (Malliavin) derivatives for x: one based on the chaos expansion in terms of iterated integrals with respect to the power jump processes and one based on the chaos expansion in terms of iterated integrals with respect to the Wiener process and the ...

  1. LOCALLY RISK-NEUTRAL VALUATION OF OPTIONS IN GARCH MODELS BASED ON VARIANCE-GAMMA PROCESS

    OpenAIRE

    LIE-JANE KAO

    2012-01-01

    This study develops a GARCH-type model, i.e., the variance-gamma GARCH (VG GARCH) model, based on the two major strands of option pricing literature. The first strand of the literature uses the variance-gamma process, a time-changed Brownian motion, to model the underlying asset price process such that the possible skewness and excess kurtosis on the distributions of asset returns are considered. The second strand of the literature considers the propagation of the previously arrived news by i...

  2. 77 FR 73632 - American Municipal Power, Inc; Notice of Application for Temporary Variance of License and...

    Science.gov (United States)

    2012-12-11

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission American Municipal Power, Inc; Notice of Application for Temporary Variance of License and Soliciting Comments, Motions To Intervene, and Protests Take notice that the...

  3. Quantitative milk genomics: estimation of variance components and prediction of fatty acids in bovine milk

    DEFF Research Database (Denmark)

    Krag, Kristian

    The composition of bovine milk fat, used for human consumption, is far from the recommendations for human fat nutrition. The aim of this PhD was to describe the variance components and prediction probabilities of individual fatty acids (FA) in bovine milk, and to evaluate the possibilities...

  4. The ALHAMBRA survey: Estimation of the clustering signal encoded in the cosmic variance

    Science.gov (United States)

    López-Sanjuan, C.; Cenarro, A. J.; Hernández-Monteagudo, C.; Arnalte-Mur, P.; Varela, J.; Viironen, K.; Fernández-Soto, A.; Martínez, V. J.; Alfaro, E.; Ascaso, B.; del Olmo, A.; Díaz-García, L. A.; Hurtado-Gil, Ll.; Moles, M.; Molino, A.; Perea, J.; Pović, M.; Aguerri, J. A. L.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Castander, F. J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; González Delgado, R. M.; Husillos, C.; Infante, L.; Márquez, I.; Masegosa, J.; Prada, F.; Quintana, J. M.

    2015-10-01

    Aims: The relative cosmic variance (σv) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the σv measured in the ALHAMBRA survey. Methods: We measure the cosmic variance of several galaxy populations selected with B-band luminosity at 0.35 ≤ zCSIC).

  5. The Evolution of Human Intelligence and the Coefficient of Additive Genetic Variance in Human Brain Size

    Science.gov (United States)

    Miller, Geoffrey F.; Penke, Lars

    2007-01-01

    Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…

  6. Analysis of Caffeine Content and Molecular Variance of Low-caffeine Tea Plants

    Institute of Scientific and Technical Information of China (English)

    王雪敏; 姚明哲; 金基强; 马春雷; 陈亮

    2012-01-01

    The caffeine content of 60 individuals of a low-caffeine population of tea plant was analyzed using HPLC, and the molecular variance was studied using 42 EST-SSR markers. The results showed that the caffeine content (fresh weight) ranged from 0.38% to 1.08%. Five individuals had lower caffeine content than the female parent and can be used as breeding material for further screening of low-caffeine tea cultivars. A total of 129 alleles were detected; each primer pair detected 2 to 7 alleles, an average of 3.86. The average Shannon-Weaver index (I) was 0.65. The polymorphism information content (PIC) of the EST-SSR markers varied from 0.03 to 0.68, with an average of 0.33. Three SSR markers, TM089, TM200 and TM211, related to variation in caffeine content were preliminarily identified. This is of significance for screening excellent genetic resources and breeding new low-caffeine tea cultivars.

  7. An empirical study of statistical properties of variance partition coefficients for multi-level logistic regression models

    Science.gov (United States)

    Li, J.; Gray, B.R.; Bates, D.M.

    2008-01-01

    Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions of variance partition coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multi-level logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of the estimation method (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. This empirical study lends immediate support to wider application of VPCs in scientific data analysis.
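One of Goldstein's definitions, the latent-variable VPC, has a simple closed form: the level-2 variance divided by the total latent variance, where the level-1 logistic residual contributes pi^2/3. A small sketch (the level-2 variance value is illustrative):

```python
import math

def vpc_latent(sigma2_u):
    """Latent-variable VPC for a two-level logistic model: the share of the
    latent-response variance attributable to level 2, with the level-1
    standard-logistic residual contributing pi^2 / 3."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3)

print(vpc_latent(1.0))  # ~0.233: about 23% of latent variance at level 2
```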

  8. DNS of channel flow with conjugate heat transfer - Budgets of turbulent heat fluxes and temperature variance

    OpenAIRE

    Flageul, Cédric; Benhamadouche, Sofiane; Lamballais, Éric; Laurence, Dominique

    2014-01-01

    The present work provides budgets of turbulent heat fluxes and temperature variance for a channel flow with different thermal boundary conditions: an imposed temperature, an imposed heat flux and with conjugate heat transfer combined with an imposed heat flux at the outer wall.

  9. The efficiency of the crude oil markets: Evidence from variance ratio tests

    OpenAIRE

    Charles, Amélie; Darné, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multip...
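The variance ratio statistic underlying such tests compares the variance of q-period returns with q times the one-period variance; under the random walk hypothesis the ratio is close to one. A simplified sketch on simulated i.i.d. returns (the rank- and sign-based refinements of Wright, 2000 are not reproduced here):

```python
import random
import statistics

random.seed(42)
# Simulated i.i.d. daily log-returns standing in for a crude-oil series.
r = [random.gauss(0.0, 0.02) for _ in range(5000)]

def variance_ratio(returns, q):
    """VR(q) = Var(q-period return) / (q * Var(1-period return)).
    Under the random walk hypothesis, VR(q) is close to 1."""
    q_sums = [sum(returns[i:i + q]) for i in range(len(returns) - q + 1)]
    return statistics.variance(q_sums) / (q * statistics.variance(returns))

vr5 = variance_ratio(r, 5)
print(vr5)  # close to 1 for uncorrelated returns
```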

  10. The modified Black-Scholes model via constant elasticity of variance for stock options valuation

    Science.gov (United States)

    Edeki, S. O.; Owoloko, E. A.; Ugbebor, O. O.

    2016-02-01

    In this paper, the classical Black-Scholes option pricing model is revisited. We present a modified version of the Black-Scholes model via the application of the constant elasticity of variance model (CEVM); in this case, the volatility of the stock price is shown to be a non-constant function, unlike the assumption of the classical Black-Scholes model.

  11. A more realistic estimate of the variances and systematic errors in spherical harmonic geomagnetic field models

    DEFF Research Database (Denmark)

    Lowes, F.J.; Olsen, Nils

    2004-01-01

    Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However...

  12. A Mean-Variance Diagnosis of the Financial Crisis: International Diversification and Safe Havens

    Directory of Open Access Journals (Sweden)

    Alexander Eptas

    2010-12-01

    Full Text Available We use mean-variance analysis with short selling constraints to diagnose the effects of the recent global financial crisis by evaluating the potential benefits of international diversification in the search for ‘safe havens’. We use stock index data for a sample of developed, advanced-emerging and emerging countries. ‘Text-book’ results are obtained for the pre-crisis analysis with the optimal portfolio for any risk-averse investor being obtained as the tangency portfolio of the All-Country portfolio frontier. During the crisis there is a disjunction between bank lending and stock markets revealed by negative average returns and an absence of any empirical Capital Market Line. Israel and Colombia emerge as the safest havens for any investor during the crisis. For Israel this may reflect the protection afforded by special trade links and diaspora support, while for Colombia we speculate that this reveals the impact on world financial markets of the demand for cocaine.

  13. Adding a Parameter Increases the Variance of an Estimated Regression Function

    Science.gov (United States)

    Withers, Christopher S.; Nadarajah, Saralees

    2011-01-01

    The linear regression model is one of the most popular models in statistics. It is also one of the simplest models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…

  14. Study on the Function of the Visual Pathway in Amblyopes through Analysis of Pupil Size Variance

    Institute of Scientific and Technical Information of China (English)

    张燕; 陈洁; 吕帆

    2009-01-01

    Objective To compare differences in pupil diameter between the amblyopic eye and the contralateral sound eye of amblyopic children under different illumination conditions, and to analyze differences in visual pathway function between the two eyes. Methods Thirty children with monocular amblyopia were selected at random, comprising 17 cases of anisometropic amblyopia, 8 of ametropic amblyopia and 5 of strabismic amblyopia (13 moderate and 17 mild). Corrected visual acuity was 0.2-0.8 in the amblyopic eyes and 0.9-1.2 in the contralateral sound eyes. Pupil diameters of both eyes were measured photographically under four conditions (ordinary illumination, direct light reflex, consensual light reflex, and night-time conditions) while subjects fixated distance and near E targets, and the differences were compared. Results Under all four illumination conditions and at both fixation distances, the pupil diameters of the amblyopic eyes were larger than those of the contralateral sound eyes (P<0.05). For both distance and near fixation, the pupil diameter of the amblyopic eye under the direct light reflex was larger than under the consensual reflex; the pupil of the amblyopic eye under the consensual reflex was larger than that of the sound eye under the direct reflex; and the pupil of the amblyopic eye under the direct reflex was larger than that of the sound eye under the consensual reflex; all differences were statistically significant (P<0.05). Conclusion Retinal function of the amblyopic eye is reduced relative to the contralateral sound eye, the function of the near-reflex centre (occipital visual cortex) may also be reduced, and decreased Edinger-Westphal nucleus function is the main reason the pupil diameter of the amblyopic eye differs from the fellow eye. This suggests that the retina, the visual pathway and the visual cortex are all impaired during the onset and development of amblyopia.

  15. Application of variance reduction technique to nuclear transmutation system driven by accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    In Japan, it is the basic policy to dispose of high-level radioactive waste arising from spent nuclear fuel in stable deep strata after glass solidification. If the useful elements in the waste can be separated and utilized, resources are used effectively, and high economic efficiency and safety in geological disposal can be expected. The Japan Atomic Energy Research Institute proposed a hybrid-type transmutation system, in which a high-intensity proton accelerator and a subcritical fast core are combined, or a nuclear reactor optimized exclusively for transmutation. The tungsten-target, minor actinide nitride fuel transmutation system and the molten minor actinide chloride salt target-fuel transmutation system are outlined, and conceptual figures of both systems are shown. As the method of analysis, Version 2.70 of the Lahet Code System, developed by Los Alamos National Laboratory in the USA, was adopted. When analyzing an accelerator-driven subcritical core in the energy range below 20 MeV, variance reduction techniques must be applied. (K.I.)

  16. A comprehensive study of the delay vector variance method for quantification of nonlinearity in dynamical systems.

    Science.gov (United States)

    Jaksic, V; Mandic, D P; Ryan, K; Basu, B; Pakrashi, V

    2016-01-01

    Although vibration monitoring is a popular method to monitor and assess dynamic structures, quantification of linearity or nonlinearity of the dynamic responses remains a challenging problem. We investigate the delay vector variance (DVV) method in this regard in a comprehensive manner to establish the degree to which a change in signal nonlinearity can be related to system nonlinearity and how a change in system parameters affects the nonlinearity in the dynamic response of the system. A wide range of theoretical situations are considered in this regard using a single degree of freedom (SDOF) system to obtain numerical benchmarks. A number of experiments are then carried out using a physical SDOF model in the laboratory. Finally, a composite wind turbine blade is tested for different excitations and the dynamic responses are measured at a number of points to extend the investigation to continuum structures. The dynamic responses were measured using accelerometers, strain gauges and a Laser Doppler vibrometer. This comprehensive study creates a numerical and experimental benchmark for structurally dynamical systems where output-only information is typically available, especially in the context of DVV. The study also allows for comparative analysis between different systems driven by the similar input. PMID:26909175

  18. Application of Multivariate Analysis of Variance in Measuring Perceptions of Tourist Destinations

    Directory of Open Access Journals (Sweden)

    Robert Tang Herman

    2012-05-01

    Full Text Available The purpose of this research is to provide conceptual and infrastructural tools for Dinas Pariwisata DKI Jakarta to improve its capability to evaluate business performance based on market responsiveness. Capturing market responsiveness is the initial step in building an industry mapping. The research began with secondary research to build a data classification system, followed by primary research collecting data through market research. Secondary data were collected from Dinas Pariwisata DKI, while primary data were collected through a survey using questionnaires addressed to the whole market. The collected data were then analyzed with multivariate analysis of variance to develop the mapping. Cluster analysis distinguishes the potential market segments based on their responses to the industry classification, establishes the classification system, identifies the gaps and their importance, and clarifies the role of the mapping system. This mapping system will thus help Dinas Pariwisata DKI improve its capabilities and business performance based on market responsiveness, identify the potential market for each specific classification, and understand the needs, wants and demands of each segment. The contribution of this research is a set of recommendations to Dinas Pariwisata DKI for delivering what the market needs and wants at each tourism site based on the resulting classification, for developing market growth estimates, and, in the long term, for improving economic and market growth.

  19. Variance of Fluctuating Radar Echoes from Thermal Noise and Randomly Distributed Scatterers

    Directory of Open Access Journals (Sweden)

    Marco Gabella

    2014-02-01

    Full Text Available In several cases (e.g., thermal noise, weather echoes, …), the incoming signal to a radar receiver can be assumed to be Rayleigh distributed. When estimating the mean power from the inherently fluctuating Rayleigh signals, it is necessary to average either the echo power intensities or the echo logarithmic levels. Until now, it has been accepted that averaging the echo intensities provides smaller variance values for the same number of independent samples. This has been known for decades as an implicit consequence of two works presented in the open literature. The present note derives analytical expressions for the variance of the two typical estimators of the mean echo power, based on echo intensities and on echo logarithmic levels. The derived expressions explicitly show that the variance associated with an average of the echo intensities is lower than that associated with an average of logarithmic levels. Consequently, it is better to average echo intensities rather than logarithms. With the availability of digital IF receivers, which facilitate the averaging of echo power, the result has practical value. As a practical example, the variance obtained from two sets of noise samples is compared with that predicted by the analytical expression derived in this note (Section 3); the measurements and theory show good agreement.
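The note's conclusion is easy to check numerically: for Rayleigh-amplitude (exponentially distributed power) echoes, averaging intensities and then converting to dB fluctuates less than averaging the dB levels themselves. A small simulation sketch (the sample counts and mean power are arbitrary choices):

```python
import math
import random
import statistics

random.seed(7)
TRIALS, K = 2000, 25          # 2000 repeated experiments, K samples each
mean_power = 1.0

est_lin, est_log = [], []
for _ in range(TRIALS):
    # Rayleigh-distributed amplitude -> exponentially distributed power.
    p = [random.expovariate(1.0 / mean_power) for _ in range(K)]
    # Estimator 1: average the intensities, then convert to dB.
    est_lin.append(10 * math.log10(statistics.mean(p)))
    # Estimator 2: average the logarithmic (dB) levels directly.
    est_log.append(statistics.mean(10 * math.log10(x) for x in p))

# The log-average estimator shows the larger spread (and a known dB bias).
print(statistics.variance(est_lin), statistics.variance(est_log))
```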

  20. A Test for Mean-Variance Efficiency of a given Portfolio under Restrictions

    NARCIS (Netherlands)

    G.T. Post (Thierry)

    2005-01-01

    textabstractThis study proposes a test for mean-variance efficiency of a given portfolio under general linear investment restrictions. We introduce a new definition of pricing error or “alpha” and as an efficiency measure we propose to use the largest positive alpha for any vertex of the portfolio p

  1. Genetic variances, trends and mode of inheritance for hip and elbow dysplasia in Finnish dog populations

    NARCIS (Netherlands)

    Mäki, K.; Groen, A.F.; Liinamo, A.E.; Ojala, M.

    2002-01-01

    The aims of this study were to assess genetic variances, trends and mode of inheritance for hip and elbow dysplasia in Finnish dog populations. The influence of time-dependent fixed effects in the model when estimating the genetic trends was also studied. Official hip and elbow dysplasia screening r

  2. Statistically Stable Estimates of Variance in Radioastronomical Observations as Tools for RFI Mitigation

    CERN Document Server

    Fridman, P A

    2010-01-01

    A selection of statistically stable (robust) algorithms for calculating data variance has been made. Their properties have been analyzed via computer simulation. These algorithms would be useful if adopted in radio astronomy observations in the presence of strong sporadic radio frequency interference (RFI). Several observational results are presented to demonstrate the effectiveness of these algorithms in RFI mitigation.
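The kind of estimator involved can be illustrated with the classical robust scale measure based on the median absolute deviation (MAD); the RFI model below (1% of samples hit by a strong spike) is an invented toy, not the paper's data:

```python
import random
import statistics

random.seed(3)
# Gaussian receiver noise of unit variance, plus sporadic strong RFI spikes.
data = [random.gauss(0, 1) + (50.0 if random.random() < 0.01 else 0.0)
        for _ in range(10_000)]

# Classical sample variance: badly inflated by the 1% contaminated samples.
classical = statistics.variance(data)

# Robust alternative: squared scaled MAD (median absolute deviation).
med = statistics.median(data)
mad = statistics.median(abs(x - med) for x in data)
robust = (1.4826 * mad) ** 2   # 1.4826 makes the MAD consistent for Gaussians

print(classical, robust)  # classical >> 1, robust close to 1
```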

  3. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik;

    2014-01-01

    This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP...

  4. The ensemble variance of pure-tone measurements in reverberation rooms

    DEFF Research Database (Denmark)

    Jacobsen, Finn; Molares, Alfonso Rodriguez

    2010-01-01

    statistics of measurements in such rooms. Below the Schroeder frequency, the relative variance is much larger, particularly if the source emits a pure-tone. The established theory for this frequency range is based on ensemble statistics of modal sums and requires knowledge of mode shapes and the distribution...

  5. Transport of temperature and humidity variance and covariance in the marine surface layer

    DEFF Research Database (Denmark)

    Sempreviva, A.M.; Højstrup, J.

    1998-01-01

    In this paper we address the budget of potential temperature T and moisture mixing ratio q variances as well as the q - T covariance budget. We focus on the vertical transport and study the quantities contained in these terms. Estimates of transport terms are rare and to the best of our knowledge...

  6. An evaluation of how downscaled climate data represents historical precipitation characteristics beyond the means and variances

    Science.gov (United States)

    Kusangaya, Samuel; Toucher, Michele L. Warburton; van Garderen, Emma Archer; Jewitt, Graham P. W.

    2016-09-01

    Precipitation is the main driver of the hydrological cycle. For climate change impact analysis, the use of downscaled precipitation, amongst other factors, determines the accuracy of modelled runoff. Precipitation is, however, considerably more difficult to model than temperature, largely due to its high spatial and temporal variability and its nonlinear nature. A key challenge for water resources management is thus how to incorporate potentially significant but highly uncertain precipitation characteristics when modelling potential changes in climate, in order to support local management decisions. The research undertaken here evaluated how well downscaled climate data represent the underlying historical precipitation characteristics beyond the means and variances. Using the uMngeni Catchment in KwaZulu-Natal, South Africa as a case study, the occurrence of rainfall, rainfall threshold events and wet-dry sequences was analysed for the current climate (1961-1999). The number of rain days with daily rainfall > 1 mm, > 5 mm, > 10 mm, > 20 mm and > 40 mm for each of the 10 selected climate models was compared to the number of rain days at 15 rain stations. Results from graphical and statistical analysis indicated that, on a monthly basis, rain days are overestimated by all climate models. Seasonally, the number of rain days was overestimated in autumn and winter and underestimated in summer and spring. The overall conclusion was that, despite advances in downscaling and improved spatial scales for better representation of climate variables such as rainfall in hydrological impact studies, downscaled rainfall data still do not simulate well some important rainfall characteristics, such as the number of rain days and wet-dry sequences. This is particularly critical since, whilst for climatologists means and variances might be simulated well in downscaled GCMs, for hydrologists
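The rain-day and wet-dry-sequence statistics compared in the study are straightforward to compute from a daily series; a sketch on an invented ten-day record (a rain day here means daily rainfall > 1 mm, matching the lowest threshold above):

```python
# Hypothetical daily rainfall record (mm).
rain = [0.0, 0.2, 3.4, 12.0, 0.0, 0.0, 45.1, 2.2, 0.0, 8.7]

# Rain days exceeding each threshold used in the study.
thresholds = (1, 5, 10, 20, 40)
counts = {t: sum(r > t for r in rain) for t in thresholds}

# Wet-dry sequences: run lengths of consecutive wet (> 1 mm) or dry days.
spells, current = [], None
for r in rain:
    wet = r > 1.0
    if current is not None and current[0] == wet:
        current[1] += 1
    else:
        current = [wet, 1]
        spells.append(current)

print(counts)
print([("wet" if w else "dry", n) for w, n in spells])
```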

  7. Implementation of variance-reduction techniques for Monte Carlo nuclear logging calculations with neutron sources

    NARCIS (Netherlands)

    Maucec, M

    2005-01-01

    Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented. Th

  8. Selection for uniformity in livestock by exploiting genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2008-01-01

    In some situations, it is worthwhile to change not only the mean, but also the variability of traits by selection. Genetic variation in residual variance may be utilised to improve uniformity in livestock populations by selection. The objective was to investigate the effects of genetic parameters, b

  9. Rapid Divergence of Genetic Variance-Covariance Matrix within a Natural Population

    NARCIS (Netherlands)

    Doroszuk, A.; Wojewodzic, M.W.; Gort, G.; Kammenga, J.E.

    2008-01-01

    The matrix of genetic variances and covariances (G matrix) represents the genetic architecture of multiple traits sharing developmental and genetic processes and is central for predicting phenotypic evolution. These predictions require that the G matrix be stable. Yet the timescale and conditions pr

  10. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    Science.gov (United States)

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in . The application of these files can be generalized to a variety of communities interested in investing in PV systems. PMID:26937458
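The diversification effect exploited by such portfolios follows from the usual mean-variance algebra: the portfolio variance is w'Σw, so imperfectly correlated systems combine to less than the variance of either alone. A sketch with invented numbers (not the Oregon data):

```python
# Two hypothetical PV systems with equal generation variance but only weak
# correlation; an equally weighted portfolio is less volatile than either.
var_a, var_b, corr = 4.0, 4.0, 0.2
cov = corr * (var_a * var_b) ** 0.5
w = (0.5, 0.5)

# Portfolio variance w' * Sigma * w for the 2x2 covariance matrix.
portfolio_var = (w[0] ** 2 * var_a + w[1] ** 2 * var_b
                 + 2 * w[0] * w[1] * cov)
print(portfolio_var)  # 2.4, versus 4.0 for either system alone
```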

  12. Accounting for Variance in Hyperspectral Data Coming from Limitations of the Imaging System

    Science.gov (United States)

    Shurygin, B.; Shestakova, M.; Nikolenko, A.; Badasen, E.; Strakhov, P.

    2016-06-01

    Over the course of the past few years, a number of methods were developed to incorporate hyperspectral imaging specifics into generic data mining techniques traditionally used for hyperspectral data processing. Projection pursuit methods embody the largest class of methods employed for hyperspectral image data reduction; however, they all have certain drawbacks making them either hard to use or inefficient. It has been shown that hyperspectral image (HSI) statistics tend to display "heavy tails" (Manolakis 2003; Theiler 2005), rendering most of the projection pursuit methods hard to use. Taking into consideration the magnitude of the described deviations of observed data PDFs from the normal distribution, it is apparent that a priori knowledge of the variance in data caused by the imaging system should be employed in order to efficiently classify objects on HSIs (Kerr, 2015), especially in cases of wildly varying SNR. A number of attempts to describe this variance and compensating techniques have been made (Aiazzi 2006); however, new data quality standards are not yet set, and accounting for the detector response is made under a large set of assumptions. The current paper addresses the issue of hyperspectral image classification in the context of different variance sources, based on knowledge of the calibration curves (both spectral and radiometric) obtained for each pixel of the imaging camera. A camera produced by ZAO NPO Lepton (Russia) was calibrated and used to obtain a test image. A priori known values of SNR and spectral channel cross-correlation were incorporated into the test statistics used in dimensionality reduction and feature extraction. A modification of the Expectation-Maximization classification algorithm for a non-Gaussian model, as described by Veracini (2010), was further employed. The impact of coarsening the calibration data by ignoring non-uniformities on the false alarm rate was studied. The case study shows both regions of scene-dominated variance and sensor-dominated variance, leading

  13. Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling.

    Science.gov (United States)

    Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng

    2016-01-01

    Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath. PMID:27428974

  14. Approximation to the Mean and Variance of Moments Method Estimate Due to Gamma Distribution

    International Nuclear Information System (INIS)

    In this paper, we consider the approximation to the mean and variance of moments-method estimators for the gamma distribution using a Taylor series expansion approach. This approach shows that the estimators are asymptotically unbiased, with mean square error approaching zero as the sample size approaches infinity. The theoretical results are assessed practically using Monte Carlo simulation.
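
    The abstract's setup can be illustrated concretely. Below is a minimal Python/NumPy sketch (the language and parameter values are assumptions for illustration, not part of the record) of the moments-method estimators for the gamma distribution, with a Monte Carlo check that their bias shrinks as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_moments(x):
    # Gamma(shape k, scale theta): mean = k*theta, variance = k*theta**2,
    # so the moments estimators are k_hat = mean^2/var, theta_hat = var/mean.
    m, v = x.mean(), x.var(ddof=1)
    return m * m / v, v / m

def mc_bias(k, theta, n, reps=2000):
    # Empirical bias of (k_hat, theta_hat) over `reps` simulated samples of size n.
    est = np.array([gamma_moments(rng.gamma(k, theta, n)) for _ in range(reps)])
    return est.mean(axis=0) - np.array([k, theta])

for n in (20, 200, 2000):
    print(n, mc_bias(2.0, 3.0, n))
```

    The empirical bias of the shape estimate shrinks roughly as 1/n, consistent with the asymptotic-unbiasedness claim in the abstract.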

  15. EMPIRICAL BAYES TEST PROBLEMS OF VARIANCE COMPONENTS IN RANDOM EFFECTS MODEL

    Institute of Scientific and Technical Information of China (English)

    Wei Laisheng; Zhang Weiping

    2005-01-01

    The Bayes decision rule for variance components in the one-way random effects model is derived, and empirical Bayes (EB) decision rules are constructed by the kernel estimation method. Under suitable conditions, it is shown that the proposed EB decision rules are asymptotically optimal with convergence rates near O(n^{-1/2}). Finally, an example concerning the main result is given.

  16. Fluctuation spectra and variances in convective turbulent boundary layers: A reevaluation of old models

    Science.gov (United States)

    Yaglom, A. M.

    1994-02-01

    Most of the existing theoretical models for statistical characteristics of turbulence in convective boundary layers are based on the similarity theory by Monin and Obukhov [Trudy Geofiz. Inst. Akad. Nauk SSSR 24(151), 163 (1954)] and its further refinements. A number of such models were recently reconsidered and partially compared with available data by Kader and Yaglom [J. Fluid Mech. 212, 637 (1990); Turbulence and Coherent Structures (Kluwer, Dordrecht, 1991), p. 387]. However, in these papers the data related to the variances σu² and σv² of the horizontal velocity components were not considered at all, and the data on the horizontal velocity spectra Eu(k) and Ev(k) were used only for a restricted range of not-too-small wave numbers k. This is connected with findings by Kaimal et al. [Q. J. R. Meteorol. Soc. 98, 563 (1972)] and Panofsky et al. [Boundary-Layer Meteorol. 11, 355 (1977)], who showed that the Monin-Obukhov theory cannot be applied to the velocity variances σu² and σv², or to the spectra Eu(k) and Ev(k) in the energy ranges of wave numbers. It is shown in this paper that a simple generalization of the traditional similarity theory, which takes into account the influence of large-scale organized structures, leads to new models of horizontal velocity variances and spectra that describe the observed deviations of these characteristics from the predictions of the Monin-Obukhov theory and agree satisfactorily with the available data. The application of the same approach to the temperature spectrum and variance explains why the observed deviations of the temperature spectrum in convective boundary layers from Monin-Obukhov similarity do not lead to marked violations of the same similarity as applied to the temperature variance σt².

  17. Determining the bias and variance of a deterministic finger-tracking algorithm.

    Science.gov (United States)

    Morash, Valerie S; van der Velden, Bas H M

    2016-06-01

    Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σx = 0.16 cm, σy = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates. PMID:26174712
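
    The bias-and-dispersion comparison described above reduces to simple per-axis statistics on the coordinate differences between the deterministic tracker and a human-coded reference. A minimal Python sketch; the coordinates below are hypothetical illustrations, not the study's data:

```python
import numpy as np

def bias_and_sd(algo_xy, human_xy):
    # Per-axis bias (mean difference) and dispersion (sample SD of differences)
    # of a deterministic tracker, assessed against a human-coded reference.
    d = np.asarray(algo_xy, float) - np.asarray(human_xy, float)
    return d.mean(axis=0), d.std(axis=0, ddof=1)

# toy per-frame fingertip coordinates in cm (illustrative numbers only)
algo  = [[1.10, 2.05], [0.98, 1.99], [1.22, 2.12], [1.05, 2.03]]
human = [[1.00, 2.00], [1.00, 2.00], [1.10, 2.10], [1.00, 2.00]]
bias, sd = bias_and_sd(algo, human)
print("bias:", bias, "sd:", sd)
```

    Subtracting the bias from the tracker's output, and quoting the SD as its dispersion, is the kind of correction the abstract's final sentence refers to.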

  18. Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, Kiyoshi; Yamamoto, Toshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [eds.]

    1998-03-01

    The 'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques in Monte Carlo calculations. The working group dealt with the variance reduction techniques for (1) neutron and gamma-ray transport calculations of fusion reactor systems, (2) conceptual design of a nuclear transmutation system using an accelerator, (3) JMTR core calculation, (4) calculation of the prompt neutron decay constant, (5) neutron and gamma-ray transport calculations for exposure evaluation, and (6) neutron and gamma-ray transport calculations of shielding systems, etc. Furthermore, the working group started an activity to compile a 'Guideline of Monte Carlo Calculation', which will become a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B, and examples of Monte Carlo calculations of high-energy charged particles. The 11 papers are indexed individually. (J.P.N.)

  19. Ensemble X-ray variability of Active Galactic Nuclei. II. Excess Variance and updated Structure Function

    CERN Document Server

    Vagnetti, F; Antonucci, M; Paolillo, M; Serafinelli, R

    2016-01-01

    Most investigations of the X-ray variability of active galactic nuclei (AGN) have been concentrated on the detailed analyses of individual, nearby sources. A relatively small number of studies have treated the ensemble behaviour of the more general AGN population in wider regions of the luminosity-redshift plane. We want to determine the ensemble variability properties of a rich AGN sample, called Multi-Epoch XMM Serendipitous AGN Sample (MEXSAS), extracted from the latest release of the XMM-Newton Serendipitous Source Catalogue, with redshift between 0.1 and 5, and X-ray luminosities, in the 0.5-4.5 keV band, between 10^{42} and 10^{47} erg/s. We caution on the use of the normalised excess variance (NXS), noting that it may lead to underestimate variability if used improperly. We use the structure function (SF), updating our previous analysis for a smaller sample. We propose a correction to the NXS variability estimator, taking account of the light-curve duration in the rest-frame, on the basis of the knowle...

  20. Comparison of variance of weights in meta-analysis models and its correlation with I-square

    Institute of Scientific and Technical Information of China (English)

    石修权; 刘丹; 刘俊

    2011-01-01

    Objective: To explore the difference in the variance of weights (sw²) between the two meta-analysis models and its correlation with the I² heterogeneity statistic. Methods: Weights and I² were extracted from meta-analyses published in the preceding three years, and sw² was computed for each. The difference in sw² between the two models was compared, and the correlation between sw² and I² was investigated. Results: sw² in the random-effects model was lower than that in the fixed-effect model (t = 2.739, P = 0.015); the correlation coefficient between sw² and I² was -0.505 (P = 0.039). Conclusion: sw² in the random-effects model is significantly lower than in the fixed-effect model, and a negative correlation exists between sw² and I². Thoroughly examining the reasons for the difference in sw² and its correlation with I² is important for correctly understanding how weights are assigned in the two models, and for the study and sound interpretation of meta-analysis results.
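
    The weight-variance contrast between the two models can be reproduced with a standard DerSimonian-Laird calculation. A hedged Python sketch with hypothetical effect sizes and within-study variances, not data from the reviewed meta-analyses:

```python
import numpy as np

def weight_variance(y, v):
    # Fixed-effect weights are 1/v; random-effects (DerSimonian-Laird) weights
    # are 1/(v + tau2). Returns the variance of the normalised weights under
    # each model, plus the estimated between-study variance tau2.
    w = 1.0 / v
    yb = (w * y).sum() / w.sum()
    Q = (w * (y - yb) ** 2).sum()                 # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)       # DL estimator
    ws = 1.0 / (v + tau2)
    p_fe, p_re = w / w.sum(), ws / ws.sum()       # weights as proportions
    return p_fe.var(ddof=1), p_re.var(ddof=1), tau2

y = np.array([0.10, 0.30, -0.20, 0.55, 0.05])    # hypothetical effect sizes
v = np.array([0.01, 0.05, 0.02, 0.08, 0.03])     # hypothetical variances
s2_fe, s2_re, tau2 = weight_variance(y, v)
print(s2_fe, s2_re, tau2)
```

    Because tau² is added to every study's variance, random-effects weights are more uniform, so their variance is lower, matching the paper's conclusion.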

  1. On the stability and spatiotemporal variance distribution of salinity in the upper ocean

    Science.gov (United States)

    O'Kane, Terence J.; Monselesan, Didier P.; Maes, Christophe

    2016-06-01

    Despite recent advances in ocean observing arrays and satellite sensors, there remains great uncertainty in the large-scale spatial variations of upper ocean salinity on interannual to decadal timescales. Consonant with both broad-scale surface warming and the amplification of the global hydrological cycle, studies of observed global multidecadal salinity changes have typically focused on the linear response to anthropogenic forcing, but not on salinity variations due to changes in static stability and/or variability due to intrinsic ocean or internal climate processes. Here, we examine the static stability and spatiotemporal variability of upper ocean salinity across a hierarchy of models and reanalyses. In particular, we partition the variance into time bands via application of singular spectral analysis, considering sea surface salinity (SSS), the Brunt-Väisälä frequency (N²), and the ocean salinity stratification in terms of the stabilizing effect due to the haline part of N² over the upper 500 m. We identify regions of significant coherent SSS variability, either intrinsic to the ocean or in response to the interannually varying atmosphere. Based on consistency across models (CMIP5 and forced experiments) and reanalyses, we identify the stabilizing role of salinity in the tropics, typically associated with heavy precipitation and barrier layer formation, and the role of salinity in destabilizing upper ocean stratification in the subtropical regions where large-scale density compensation typically occurs.

  2. Ensemble X-ray variability of active galactic nuclei. II. Excess variance and updated structure function

    Science.gov (United States)

    Vagnetti, F.; Middei, R.; Antonucci, M.; Paolillo, M.; Serafinelli, R.

    2016-09-01

    Context. Most investigations of the X-ray variability of active galactic nuclei (AGN) have been concentrated on the detailed analyses of individual, nearby sources. A relatively small number of studies have treated the ensemble behaviour of the more general AGN population in wider regions of the luminosity-redshift plane. Aims: We want to determine the ensemble variability properties of a rich AGN sample, called Multi-Epoch XMM Serendipitous AGN Sample (MEXSAS), extracted from the fifth release of the XMM-Newton Serendipitous Source Catalogue (XMMSSC-DR5), with redshift between ~0.1 and ~5, and X-ray luminosities in the 0.5-4.5 keV band between ~10^42 erg/s and ~10^47 erg/s. Methods: We urge caution on the use of the normalised excess variance (NXS), noting that it may lead to underestimate variability if used improperly. We use the structure function (SF), updating our previous analysis for a smaller sample. We propose a correction to the NXS variability estimator, taking account of the light curve duration in the rest frame on the basis of the knowledge of the variability behaviour gained by SF studies. Results: We find an ensemble increase of the X-ray variability with the rest-frame time lag τ, given by SF ∝ τ^0.12. We confirm an inverse dependence on the X-ray luminosity, approximately as SF ∝ L_X^-0.19. We analyse the SF in different X-ray bands, finding a dependence of the variability on the frequency as SF ∝ ν^-0.15, corresponding to a so-called softer when brighter trend. In turn, this dependence allows us to parametrically correct the variability estimated in observer-frame bands to that in the rest frame, resulting in a moderate (≲15%) shift upwards (V-correction). Conclusions: Ensemble X-ray variability of AGNs is best described by the structure function. An improper use of the normalised excess variance may lead to an underestimate of the intrinsic variability, so that appropriate corrections to the data or the models must be applied to prevent
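
    The normalised excess variance (NXS) that the authors caution about is, per light curve, the sample variance minus the mean squared measurement error, scaled by the squared mean flux. A minimal Python sketch with toy numbers, not MEXSAS data:

```python
import numpy as np

def nxs(flux, err):
    # Normalised excess variance of a light curve: sample variance minus the
    # mean squared measurement error, normalised by the squared mean flux.
    flux, err = np.asarray(flux, float), np.asarray(err, float)
    s2 = flux.var(ddof=1)
    return (s2 - np.mean(err ** 2)) / flux.mean() ** 2

# hypothetical light curve (counts/s) with constant per-point errors
flux = [1.2, 0.9, 1.5, 1.1, 0.8, 1.4]
err = [0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
print(nxs(flux, err))
```

    The paper's point is that this quantity depends on the rest-frame duration of the light curve, so comparing NXS values across redshifts without a duration correction biases the variability estimate downward.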

  3. Spatiotemporal characterization of Ensemble Prediction Systems – the Mean-Variance of Logarithms (MVL) diagram

    Directory of Open Access Journals (Sweden)

    J. Fernández

    2008-02-01

    We present a novel approach to characterize and graphically represent the spatiotemporal evolution of ensembles using a simple diagram. To this aim we analyze the fluctuations obtained as differences between each member of the ensemble and the control. The lognormal character of these fluctuations suggests a characterization in terms of the first two moments of the log-transformed values: the mean is associated with the exponential growth in time, while the variance accounts for the spatial correlation and localization of fluctuations. In this paper we introduce the MVL (Mean-Variance of Logarithms) diagram to intuitively represent the interplay and evolution of these two quantities. We show that this diagram uncovers useful information about the spatiotemporal dynamics of the ensemble. Some universal features of the diagram are also described, associated either with the nonlinear system or with the ensemble method, and illustrated using both toy models and numerical weather prediction systems.
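
    One point of the MVL diagram can be computed directly from a single member-minus-control fluctuation field. A toy Python sketch under the lognormal-fluctuation assumption stated in the abstract (the fields below are synthetic stand-ins, not NWP output):

```python
import numpy as np

def mvl_point(member, control, eps=1e-12):
    # One MVL-diagram point at a fixed time: mean and variance, over space,
    # of the log-transformed fluctuation field |member - control|.
    h = np.log(np.abs(np.asarray(member) - np.asarray(control)) + eps)
    return h.mean(), h.var(ddof=0)

# toy 1-D fields: lognormal-magnitude fluctuations around a control run
rng = np.random.default_rng(1)
control = rng.normal(size=100)
member = control + rng.lognormal(-2.0, 0.5, 100) * rng.choice([-1, 1], 100)
M, V = mvl_point(member, control)
print(M, V)
```

    Tracking (M, V) over forecast time traces the curve that the paper reads for growth rate (M increasing) and spatial localization of error (V).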

  4. Stud identity among female-born youth of color: joint conceptualizations of gender variance and same-sex sexuality.

    Science.gov (United States)

    Kuper, Laura E; Wright, Laurel; Mustanski, Brian

    2014-01-01

    Little is known about the experiences of individuals who may fall under the umbrella of "transgender" but do not transition medically and/or socially. The impact of the increasingly widespread use of the term "transgender" itself also remains unclear. The authors present narratives from four female-born youth of color who report a history of identifying as a "stud." Through analysis of their processes of identity signification, the authors demonstrate how stud identity fuses aspects of gender and sexuality while providing an alternate way of making meaning of gender variance. As such, this identity has important implications for research and organizing centered on an LGBT-based identity framework.

  5. Computation of Confidence Limits for Linear Functions of the Normal Mean and Variance

    Energy Technology Data Exchange (ETDEWEB)

    Land, C.E.; Lyon, B.F.

    1999-09-01

    A program is described that calculates exact and optimal (uniformly most accurate unbiased) confidence limits for linear functions of the normal mean and variance. The program can therefore also be used to calculate confidence limits for monotone transformations of such functions (e.g., lognormal means). The accuracy of the program has been thoroughly evaluated in terms of coverage probabilities for a wide range of parameter values.
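
    The exact, uniformly most accurate unbiased limits require the program described in the record. As a rough, assumption-laden stand-in, a percentile bootstrap for the linear function mu + c*sigma² (with c = 1/2 this is the log of a lognormal mean) can be sketched in Python; all names and numbers below are illustrative:

```python
import numpy as np

def bootstrap_ci(x, c=0.5, level=0.95, reps=5000, seed=0):
    # Percentile-bootstrap confidence interval for mu + c*sigma^2 of a normal
    # sample; an approximation only, not the exact limits of the program.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    idx = rng.integers(0, len(x), (reps, len(x)))
    samp = x[idx]
    stat = samp.mean(axis=1) + c * samp.var(axis=1, ddof=1)
    a = (1 - level) / 2
    return np.quantile(stat, [a, 1 - a])

x = np.random.default_rng(5).normal(1.0, 2.0, 200)  # mu = 1, sigma = 2
lo, hi = bootstrap_ci(x)
print(lo, hi)  # target value here is mu + 0.5*sigma**2 = 3.0
```

    Exponentiating the endpoints gives an approximate interval for the lognormal mean, the monotone-transformation use case mentioned in the abstract.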

  6. On the origins of signal variance in FMRI of the human midbrain at high field.

    Directory of Open Access Journals (Sweden)

    Robert L Barry

    Functional Magnetic Resonance Imaging (fMRI) in the midbrain at 7 Tesla suffers from unexpectedly low temporal signal-to-noise ratio (TSNR) compared to other brain regions. Various methodologies were used in this study to quantitatively identify causes of the noise and signal differences in midbrain fMRI data. The influence of physiological noise sources was examined using RETROICOR, phase regression analysis, and power spectral analyses of contributions in the respiratory and cardiac frequency ranges. The impact of between-shot phase shifts in 3-D multi-shot sequences was tested using a one-dimensional (1-D) phase navigator approach. Additionally, the effects of shared noise influences between regions that were temporally, but not functionally, correlated with the midbrain (adjacent white matter and anterior cerebellum) were investigated via analyses with regressors of 'no interest'. These attempts to reduce noise did not improve the overall TSNR in the midbrain. In addition, the steady-state signal and noise were measured in the midbrain and the visual cortex for resting-state data. We observed comparable steady-state signals from both the midbrain and the cortex. However, the noise was 2-3 times higher in the midbrain relative to the cortex, confirming that the low TSNR in the midbrain was not due to low signal but rather a result of large signal variance. These temporal variations did not behave as known physiological or other noise sources, and were not mitigated by conventional strategies. Upon further investigation, resting-state functional connectivity analysis in the midbrain showed strong intrinsic fluctuations between homologous midbrain regions. These data suggest that the low TSNR in the midbrain may originate from larger signal fluctuations arising from functional connectivity compared to cortex, rather than simply reflecting physiological noise.

  7. Effect of Number of Days between Semen Sampling on Variance Heterogeneity of Semen Concentration of Young Simmental Bulls

    Directory of Open Access Journals (Sweden)

    Kristijan Grubišić

    2003-06-01

    In order to analyze heterogeneity of variance, four data sets, each combining two interval lengths (i.e., two and three; three and four; four and five; and five and six days between semen samplings), were derived. Similarly, three data sets combining three interval lengths between semen samplings were derived. Variance and covariance components, and the associated heritabilities, for the data sets thus defined were estimated by Restricted Maximum Likelihood from a set of single-trait animal models. Fixed effects were defined as birth year × season and number of days between collections; the animal effect was defined as a random effect. The heritability estimates ranged from 0.01 to 0.08. Days between collections influenced variance heterogeneity: an increase in days between collections increased the additive and permanent environment variances and decreased the error variance, thus improving the estimation of heritability.
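
    The REML animal model used in the study is beyond a short example, but the underlying variance-component logic can be illustrated with a balanced one-way sire-model ANOVA, a deliberate simplification of the abstract's model; all data below are simulated, not the study's records:

```python
import numpy as np

rng = np.random.default_rng(6)

def sire_model_h2(groups):
    # Balanced one-way ANOVA variance components: sigma2_s from the
    # between-sire mean square, sigma2_e from the within-sire mean square;
    # sire-model heritability is h2 = 4*sigma2_s / (sigma2_s + sigma2_e).
    groups = np.asarray(groups, float)   # shape: (sires, offspring per sire)
    s, n = groups.shape
    msb = n * ((groups.mean(axis=1) - groups.mean()) ** 2).sum() / (s - 1)
    msw = groups.var(axis=1, ddof=1).mean()
    sigma2_s = max(0.0, (msb - msw) / n)
    return 4.0 * sigma2_s / (sigma2_s + msw)

# 200 sires x 20 offspring, sire sd 0.5 and residual sd 2.0 (all simulated),
# so the true h2 is 4*0.25/(0.25 + 4.0), about 0.24
records = rng.normal(0.0, 0.5, (200, 1)) + rng.normal(0.0, 2.0, (200, 20))
print(sire_model_h2(records))
```

    The low heritabilities reported in the abstract (0.01 to 0.08) correspond to a between-sire variance that is tiny relative to the residual term in exactly this kind of decomposition.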

  8. Simultaneous estimation of noise variance and number of peaks in Bayesian spectral deconvolution

    CERN Document Server

    Tokuda, Satoru; Okada, Masato

    2016-01-01

    Heuristic identification of peaks from noisy complex spectra often leads to misunderstanding physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multi-peak spectra into single peaks statistically and is constructed in two steps. The first step is estimating both noise variance and number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multi-peak models are nonlinear and hierarchical. Our framework enables escaping from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy. We discuss a simulation demonstrating how efficient our framework is and show that estimating both noise variance and number of peaks prevents overfitting, overpenalizing, and misun...

  9. Comparison of amplitude-decorrelation, speckle-variance and phase-variance OCT angiography methods for imaging the human retina and choroid.

    Science.gov (United States)

    Gorczynska, Iwona; Migacz, Justin V; Zawadzki, Robert J; Capps, Arlie G; Werner, John S

    2016-03-01

    We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation, and phase variance, for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise ratio (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of static tissue noise suppression in OCTA images, we propose calculating the CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that implementation of intensity-based OCT imaging and OCT angiography methods allows for visualization of the retinal and choroidal vascular layers known from anatomic studies of retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize the retinal and choroidal vasculature. User-guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in the form of a skeletonized 3D model. PMID:27231598
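
    Of the three methods compared, speckle variance is the simplest to sketch: it is the per-pixel variance of intensity across N repeated B-scans, so decorrelating flow pixels stand out against static tissue. A toy Python example on a synthetic stack, not OCT data:

```python
import numpy as np

def speckle_variance(bscans):
    # Speckle-variance OCTA: per-pixel variance of intensity across
    # repeated B-scans acquired at the same location.
    return np.var(np.asarray(bscans, float), axis=0)

# toy stack: 8 repeats of a 4x4 B-scan; one simulated 'vessel' pixel
# fluctuates strongly between repeats, static tissue barely changes
rng = np.random.default_rng(2)
stack = np.ones((8, 4, 4)) + rng.normal(0, 0.01, (8, 4, 4))
stack[:, 2, 2] += rng.normal(0, 0.5, 8)
sv = speckle_variance(stack)
print(sv[2, 2], sv.mean())  # the vessel pixel dominates the variance map
```

    Amplitude decorrelation and phase variance follow the same pattern but operate on inter-frame correlation and phase differences, respectively.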

  10. Comparison of MINQUE and Simple Estimate of the Error Variance in the General Linear Models

    Institute of Scientific and Technical Information of China (English)

    Song-gui Wang; Mi-xia Wu; Wei-qing Ma

    2003-01-01

    Comparison is made between the MINQUE and the simple estimate of the error variance in the normal linear model under the mean square error criterion, where the model matrix need not have full rank and the dispersion matrix can be singular. Our results show that neither estimate is always superior to the other. Some sufficient criteria for one of them to be better than the other are established. Some interesting relations between these two estimates are also given.

  11. Foraging trait (co)variances in stickleback evolve deterministically and do not predict trajectories of adaptive diversification.

    Science.gov (United States)

    Berner, Daniel; Stutz, William E; Bolnick, Daniel I

    2010-08-01

    How does natural selection shape the structure of variance and covariance among multiple traits, and how do (co)variances influence trajectories of adaptive diversification? We investigate these pivotal but open questions by comparing phenotypic (co)variances among multiple morphological traits across 18 derived lake-dwelling populations of threespine stickleback, and their marine ancestor. Divergence in (co)variance structure among populations is striking and primarily attributable to shifts in the variance of a single key foraging trait (gill raker length). We then relate this divergence to an ecological selection proxy, to population divergence in trait means, and to the magnitude of sexual dimorphism within populations. This allows us to infer that evolution in (co)variances is linked to variation among habitats in the strength of resource-mediated disruptive selection. We further find that adaptive diversification in trait means among populations has primarily involved shifts in gill raker length. The direction of evolutionary trajectories is unrelated to the major axes of ancestral trait (co)variance. Our study demonstrates that natural selection drives both means and (co)variances deterministically in stickleback, and strongly challenges the view that the (co)variance structure biases the direction of adaptive diversification predictably even over moderate time spans.

  13. Constraining the epoch of reionization with the variance statistic: simulations of the LOFAR case

    CERN Document Server

    Patil, Ajinkya H; Chapman, Emma; Jelić, Vibor; Harker, Geraint; Abdalla, Filipe B; Asad, Khan M B; Bernardi, Gianni; Brentjens, Michiel A; de Bruyn, A G; Bus, Sander; Ciardi, Benedetta; Daiboo, Soobash; Fernandez, Elizabeth R; Ghosh, Abhik; Jensen, Hannes; Kazemi, Sanaz; Koopmans, Léon V E; Labropoulos, Panagiotis; Mevius, Maaijke; Martinez, Oscar; Mellema, Garrelt; Offringa, Andre R; Pandey, Vishhambhar N; Schaye, Joop; Thomas, Rajat M; Vedantham, Harish K; Veligatla, Vamsikrishna; Wijnholds, Stefan J; Yatawatta, Sarod

    2014-01-01

    Several experiments are underway to detect the cosmic redshifted 21-cm signal from neutral hydrogen from the Epoch of Reionization (EoR). Due to their very low signal-to-noise ratio, these observations aim for a statistical detection of the signal by measuring its power spectrum. We investigate the extraction of the variance of the signal as a first step towards detecting and constraining the global history of the EoR. Signal variance is the integral of the signal's power spectrum, and it is expected to be measured with a high significance. We demonstrate this through results from a realistic, end-to-end simulation and parameter estimation pipeline developed for the Low Frequency Array (LOFAR)-EoR experiment. We find the variance statistic to be a promising tool and use it to demonstrate that LOFAR should be able to detect the EoR in 600 h of integration. Additionally, the redshift ($z_r$) and duration ($\\Delta z$) of reionization can be constrained assuming a parametrization. We use an EoR simulation of $z_r...
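
    The defining property of the variance statistic used here, that the variance equals the integral of the signal's power spectrum, is Parseval's relation. A discrete Python sketch on a synthetic zero-mean signal (not LOFAR data):

```python
import numpy as np

def variance_from_spectrum(x):
    # Variance as the sum of the normalised periodogram: for a zero-mean
    # discrete signal, sum_k |X_k|^2 / N^2 equals the sample variance
    # (Parseval's relation with NumPy's unnormalised FFT convention).
    x = np.asarray(x, float)
    x = x - x.mean()
    p = np.abs(np.fft.fft(x)) ** 2 / len(x) ** 2
    return p.sum()

sig = np.sin(np.linspace(0, 20, 256)) + 0.3 * np.random.default_rng(3).normal(size=256)
print(variance_from_spectrum(sig), sig.var())  # the two agree
```

    This is why the variance is expected to be measured with higher significance than any individual power-spectrum bin: it aggregates power over all scales.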

  14. Evaluation of area of review variance opportunities for the East Texas field. Annual report

    Energy Technology Data Exchange (ETDEWEB)

    Warner, D.L.; Koederitz, L.F.; Laudon, R.C.; Dunn-Norman, S.

    1995-05-01

    The East Texas oil field, discovered in 1930 and located principally in Gregg and Rusk Counties, is the largest oil field in the conterminous United States. Nearly 33,000 wells are known to have been drilled in the field. The field has been undergoing water injection for pressure maintenance since 1938. As of today, 104 Class II salt-water disposal wells, operated by the East Texas Salt Water Disposal Company, are returning all produced water to the Woodbine producing reservoir. About 69 of the presently existing wells have not been subjected to U.S. Environmental Protection Agency Area-of-Review (AOR) requirements. A study has been carried out of opportunities for variance from AORs for these existing wells and for new wells that will be constructed in the future. The study has been based upon a variance methodology developed at the University of Missouri-Rolla under sponsorship of the American Petroleum Institute and in coordination with the Ground Water Protection Council. The principal technical objective of the study was to determine if reservoir pressure in the Woodbine producing reservoir is sufficiently low so that flow of salt-water from the Woodbine into the Carrizo-Wilcox ground water aquifer is precluded. The study has shown that the Woodbine reservoir is currently underpressured relative to the Carrizo-Wilcox and will remain so over the next 20 years. This information provides a logical basis for a variance for the field from performing AORs.

  15. The effect of errors-in-variables on variance component estimation

    Science.gov (United States)

    Xu, Peiliang

    2016-08-01

    Although total least squares (TLS) has been widely applied, variance components in an errors-in-variables (EIV) model can be inestimable under certain conditions and unstable in the sense that small random errors can result in very large errors in the estimated variance components. We investigate the effect of the random design matrix on variance component (VC) estimation of MINQUE type by treating the design matrix as if it were errors-free, derive the first-order bias of the VC estimate, and construct bias-corrected VC estimators. As a special case, we obtain a bias-corrected estimate for the variance of unit weight. Although TLS methods are statistically rigorous, they can be computationally too expensive. We directly Taylor-expand the nonlinear weighted LS estimate of parameters up to the second-order approximation in terms of the random errors of the design matrix, derive the bias of the estimate, and use it to construct a bias-corrected weighted LS estimate. Bearing in mind that the random errors of the design matrix will create a bias in the normal matrix of the weighted LS estimate, we propose to calibrate the normal matrix by computing and then removing the bias from the normal matrix. As a result, we can obtain a new parameter estimate, which is called the N-calibrated weighted LS estimate. The simulations have shown that (i) errors-in-variables have a significant effect on VC estimation, if they are large/significant but treated as non-random. The variance components can be incorrectly estimated by more than one order of magnitude, depending on the nature of problems and the sizes of EIV; (ii) the bias-corrected VC estimate can effectively remove the bias of the VC estimate. If the signal-to-noise is small, higher order terms may be necessary. 
Nevertheless, since we construct the bias-corrected VC estimate by directly removing the estimated bias from the estimate itself, the simulation results have clearly indicated that there is a great risk to obtain

  16. Relative variances of the cadence frequency of cycling under two differential saddle heights.

    Science.gov (United States)

    Chang, Wen-Dien; Fan Chiang, Chin-Yun; Lai, Ping-Tung; Lee, Chia-Lun; Fang, Sz-Ming

    2016-01-01

    [Purpose] Bicycle saddle height is a critical factor for cycling performance and injury prevention. The present study compared the variance in cadence frequency after exercise fatigue between saddle heights with 25° and 35° knee flexion. [Methods] Two saddle heights, which were determined by setting the pedal at the bottom dead point with 35° and 25° knee flexion, were used for testing. The relative variances of the cadence frequency were calculated at the end of a 5-minute warm-up period and 5 minutes after inducing exercise fatigue. Comparison of the absolute values of the cadence frequency under the two saddle heights revealed a difference in pedaling efficiency. [Results] Five minutes after inducing exercise fatigue, the relative variance of the cadence frequency for the saddle height with 35° knee flexion was higher than that for the saddle height with 25° knee flexion. [Conclusion] The current finding demonstrated that a saddle height with 25° knee flexion is more appropriate for cyclists than a saddle height with 35° knee flexion. PMID:27065522

  17. A comparison of two methods for detecting abrupt changes in the variance of climatic time series

    CERN Document Server

    Rodionov, Sergei

    2016-01-01

    Two methods for detecting abrupt shifts in the variance, Integrated Cumulative Sum of Squares (ICSS) and Sequential Regime Shift Detector (SRSD), have been compared on both synthetic and observed time series. In Monte Carlo experiments, SRSD outperformed ICSS in the overwhelming majority of the modelled scenarios with different sequences of variance regimes. The SRSD advantage was particularly apparent in the case of outliers in the series. When tested on climatic time series, in most cases both methods detected the same change points in the longer series (252-787 monthly values). The only exception was the Arctic Ocean SST series, when ICSS found one extra change point that appeared to be spurious. As for the shorter time series (66-136 yearly values), ICSS failed to detect any change points even when the variance doubled or tripled from one regime to another. For these time series, SRSD is recommended. Interestingly, all the climatic time series tested, from the Arctic to the Tropics, had one thing in commo...
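The ICSS approach compared above is built on a centered cumulative sum-of-squares statistic; a minimal sketch in Python (the synthetic series, break location, and the simple argmax rule are illustrative assumptions, not the authors' implementation):

```python
import random

def icss_dk(x):
    """Centered cumulative sum-of-squares statistic D_k = C_k/C_T - k/T
    (Inclan & Tiao); its extreme value flags a candidate variance change point."""
    c, total = [], 0.0
    for v in x:
        total += v * v
        c.append(total)
    t = len(x)
    return [c[k] / c[-1] - (k + 1) / t for k in range(t)]

random.seed(1)
# Synthetic series whose standard deviation doubles (1 -> 2) at index 200.
series = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(0, 2) for _ in range(200)]
d = icss_dk(series)
k_hat = max(range(len(d)), key=lambda k: abs(d[k]))  # argmax |D_k|
```

With a doubling of the standard deviation, the extreme of |D_k| lands near the true break; the full ICSS algorithm applies this statistic recursively with a significance threshold.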

  18. Avaliação de quatro alternativas de análise de experimentos em látice quadrado, quanto à estimação de componentes de variância Evaluation of four alternatives of analysis of experiments in square lattice, with emphasis on estimate of variance component

    Directory of Open Access Journals (Sweden)

    HEYDER DINIZ SILVA

    2000-01-01

Full Text Available Estudou-se, no presente trabalho, a eficiência das seguintes alternativas de análise de experimentos realizados em látice quanto à precisão na estimação de componentes de variância, através da simulação computacional de dados: (i) análise intrablocos do látice com tratamentos ajustados (primeira análise); (ii) análise do látice em blocos casualizados completos (segunda análise); (iii) análise intrablocos do látice com tratamentos não-ajustados (terceira análise); (iv) análise do látice como blocos casualizados completos, utilizando as médias ajustadas dos tratamentos, obtidas a partir da análise com recuperação da informação interblocos, tendo como quadrado médio do resíduo a variância efetiva média dessa análise do látice (quarta análise). Os resultados obtidos mostram que se deve utilizar o modelo de análise intrablocos de experimentos em látice para se estimarem componentes de variância sempre que a eficiência relativa do delineamento em látice, em relação ao delineamento em Blocos Completos Casualizados, for superior a 100% e, em caso contrário, deve-se optar pelo modelo de análise em Blocos Casualizados Completos. A quarta alternativa de análise não deve ser recomendada em qualquer das duas situações. The efficiency of four alternatives of analysis of experiments in square lattice, related to the estimation of variance components, was studied through computational simulation of data: (i) intrablock analysis of the lattice with adjusted treatments (first analysis); (ii) lattice analysis as a randomized complete blocks design (second analysis); (iii) intrablock analysis of the lattice with non-adjusted treatments (third analysis); (iv) lattice analysis as a randomized complete blocks design, using the adjusted means of treatments, obtained through the analysis of the lattice with recovery of interblock information, having as the residual mean square the average effective variance of this same lattice analysis

  19. Label-free imaging of developing vasculature in zebrafish with phase variance optical coherence microscopy

    Science.gov (United States)

    Chen, Yu; Fingler, Jeff; Trinh, Le A.; Fraser, Scott E.

    2016-03-01

    A phase variance optical coherence microscope (pvOCM) has been created to visualize blood flow in the vasculature of zebrafish embryos, without using exogenous labels. The pvOCM imaging system has axial and lateral resolutions of 2 μm in tissue, and imaging depth of more than 100 μm. Imaging of 2-5 days post-fertilization zebrafish embryos identified the detailed structures of somites, spinal cord, gut and notochord based on intensity contrast. Visualization of the blood flow in the aorta, veins and intersegmental vessels was achieved with phase variance contrast. The pvOCM vasculature images were confirmed with corresponding fluorescence microscopy of a zebrafish transgene that labels the vasculature with green fluorescent protein. The pvOCM images also revealed functional information of the blood flow activities that is crucial for the study of vascular development.

  20. Calibration of local volatility using the local and implied instantaneous variance

    OpenAIRE

    Turinici, Gabriel

    2009-01-01

We document the calibration of the local volatility in terms of local and implied instantaneous variances; we first explore the theoretical properties of the method for a particular class of volatilities. We confirm the theoretical results through a numerical procedure which uses a Gauss-Newton style approximation of the Hessian in the framework of a sequential quadratic programming (SQP) approach. The procedure performs well on benchmarks from the literature and on ...

  1. Impulse Noise Filtering Using Robust Pixel-Wise S-Estimate of Variance

    Directory of Open Access Journals (Sweden)

    Nemanja I. Petrović

    2010-01-01

Full Text Available A novel method for impulse noise suppression in images, based on the pixel-wise S-estimator, is introduced. The S-estimator is an alternative to the well-known robust estimate of variance MAD, which does not require a location estimate and hence is more appropriate for the asymmetric distributions frequently encountered in transient regions of the image. The proposed computationally efficient modification of the robust S-estimator of variance is successfully utilized in an iterative scheme for impulse noise filtering. Another novelty is that the proposed iterative algorithm has automatic stopping criteria, also based on the pixel-wise S-estimator. Performance of the proposed filter is independent of the image content or noise concentration. The proposed filter outperforms all state-of-the-art filters included in a large comparison, both objectively (in terms of PSNR and MSSIM) and subjectively.
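The MAD baseline that the S-estimator is compared against is easy to sketch: unlike the sample standard deviation, it is nearly unaffected by a small fraction of impulse outliers. A toy illustration (the synthetic data and outlier magnitude are assumptions for demonstration, not image data from the paper):

```python
import random
import statistics

def mad_scale(x):
    """Median absolute deviation, scaled by 1.4826 so it is a consistent
    estimate of the standard deviation for Gaussian data."""
    m = statistics.median(x)
    return 1.4826 * statistics.median(abs(v - m) for v in x)

random.seed(0)
clean = [random.gauss(0, 1) for _ in range(1000)]
noisy = clean[:]
for i in range(0, 1000, 20):   # replace 5% of samples with impulses
    noisy[i] = 100.0

sd_noisy = statistics.stdev(noisy)   # blown up by the impulses
mad_noisy = mad_scale(noisy)         # stays close to the true sd of 1
```

The classical estimate inflates by more than an order of magnitude, while the robust scale estimate remains near 1, which is why MAD-type estimators are natural building blocks for impulse noise filters.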

  2. An Investigation of the Sequential Sampling Method for Crossdocking Simulation Output Variance Reduction

    CERN Document Server

    Adewunmi, Adrian; Byrne, Mike

    2008-01-01

    This paper investigates the reduction of variance associated with a simulation output performance measure, using the Sequential Sampling method while applying minimum simulation replications, for a class of JIT (Just in Time) warehousing system called crossdocking. We initially used the Sequential Sampling method to attain a desired 95% confidence interval half width of plus/minus 0.5 for our chosen performance measure (Total usage cost, given the mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). From our results, we achieved a 95% confidence interval half width of plus/minus 2.8 for our chosen performance measure (Total usage cost, with an average mean value of 115,000 pounds). However, the Sequential Sampling method requires a huge number of simulation replications to reduce variance for our simulation output value to the target level. Arena (version 11) simulation software was used to conduct this study.
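The sequential-sampling idea described above, adding replications until the confidence-interval half-width reaches a target, can be sketched as follows; the normal-approximation stopping rule and the toy "simulation" are assumptions for illustration, not the Arena crossdocking model from the paper:

```python
import math
import random
import statistics

def sequential_replications(simulate, half_width, z=1.96, min_reps=10, max_reps=100000):
    """Run replications until the normal-approximation 95% CI half-width
    z*s/sqrt(n) for the mean falls below the target half_width."""
    data = [simulate() for _ in range(min_reps)]
    while len(data) < max_reps:
        s = statistics.stdev(data)
        if z * s / math.sqrt(len(data)) <= half_width:
            break
        data.append(simulate())
    return statistics.mean(data), len(data)

random.seed(42)
# Hypothetical replication output: mean 100, standard deviation 5.
mean, n = sequential_replications(lambda: random.gauss(100, 5), half_width=0.5)
```

The required sample size grows with the square of the precision target (here roughly (1.96 * 5 / 0.5)^2 ≈ 384 replications), which is the cost the abstract notes when a tight half-width is demanded.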

  3. Variance estimation of the Gini index: revisiting a result several times published

    OpenAIRE

    Langel, Matti; Tillé, Yves

    2016-01-01

Since Corrado Gini suggested the index that bears his name as a way of measuring inequality, the computation of the variance of the Gini index has been the subject of numerous publications. We survey a large part of the literature related to the topic and show that the same results, as well as the same errors, have been republished several times, often with a clear lack of reference to previous work. Whereas existing literature on the subject is very fragmented, we regroup references from various ...

  4. A NOVEL MULTICLASS SUPPORT VECTOR MACHINE ALGORITHM USING MEAN REVERSION AND COEFFICIENT OF VARIANCE

    Directory of Open Access Journals (Sweden)

    Bhusana Premanode

    2013-01-01

Full Text Available Inaccuracy of a kernel function used in a Support Vector Machine (SVM) can be found when it is simulated with nonlinear and stationary datasets. To minimise the error, we propose a new multiclass SVM model using a mean reversion and coefficient of variance algorithm to partition and classify imbalance in datasets. By introducing a series of test statistics, simulations showed that the proposed algorithm outperformed the SVM model without the multiclass extension.

  5. Efficient Option Pricing in Crisis Based on Dynamic Elasticity of Variance Model

    Directory of Open Access Journals (Sweden)

    Congyin Fan

    2016-01-01

Full Text Available Market crashes often appear in daily trading activities, and such instantaneously occurring events can affect stock prices greatly. In an unstable market, the volatility of financial assets changes sharply, so classical option pricing models with a constant volatility coefficient, or even a stochastic volatility term, are not accurate. To overcome this problem, in this paper we put forward a dynamic elasticity of variance (DEV) model by extending the classical constant elasticity of variance (CEV) model. Further, the partial differential equation (PDE) for the price of a European call option is derived by using the risk-neutral pricing principle, and the numerical solution of the PDE is calculated by the Crank-Nicolson scheme. In addition, a Kalman filtering method is employed to estimate the volatility term of our model. Our main finding is that the prices of European call options under our model are more accurate than those calculated by the Black-Scholes model and the CEV model in financial crashes.

  6. Modelling Changes in the Unconditional Variance of Long Stock Return Series

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long return series. For the purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta (2011). ... The latter component is modelled by incorporating smooth changes so that the unconditional variance is allowed to evolve slowly over time. Statistical inference is used for specifying the parameterization of the time-varying component by applying a sequence of Lagrange multiplier tests. The model building ... show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecast accuracy of the new model over the GJR-GARCH model at all ...

  7. SIMULATION STUDY OF GENERALIZED MINIMUM VARIANCE CONTROL FOR AN EXTRACTION TURBINE

    Institute of Scientific and Technical Information of China (English)

    Shi Xiaoping

    2003-01-01

In an extraction turbine, the pressure of the extracted steam and the rotational speed of the rotor are two important controlled quantities. The traditional linear state feedback control method cannot control the two quantities accurately because of nonlinearity and coupling. A generalized minimum variance control method is studied for an extraction turbine. Firstly, a nonlinear mathematical model of the control system for the two quantities is transformed into a linear system with two white noises. Secondly, a generalized minimum variance control law is applied to the system. A comparative simulation is done. The simulation results indicate that the precision and dynamic quality of the regulating system under the new control law are both better than those under the state feedback control law.

  8. Increasing the genetic variance of rice protein through mutation breeding techniques

    International Nuclear Information System (INIS)

Recommended rice variety in Indonesia, Pelita I/1, was treated with gamma rays at doses of 20 krad, 30 krad, and 40 krad. The seeds were also treated with 1% EMS. In the M2 generation, the protein content of seeds from the visible mutants and from the normal-looking plants was analyzed by the DBC method. No significant increase in the genetic variance was found in the samples treated with 20 krad gamma rays, or in the normal-looking plants treated with 1% EMS. The mean values of the treated samples mostly showed a significant decrease compared with the mean of the protein distribution in untreated samples (control). Since a significant increase in genetic variance was found in M2 normal-looking plants treated with gamma rays at doses of 30 krad and 40 krad, selection for protein among these materials could be more valuable. (author)

  9. Variance and shift of transition arrays for electric and magnetic multipole transitions

    Science.gov (United States)

    Krief, Menahem; Feigel, Alexander

    2015-12-01

    Generalized analytical expressions for the two-electron relativistic Unresolved-Transition-Array (UTA) energy variance and shift for electric and magnetic transitions of general multipole order are presented. The revised expressions are shown to agree with the exact moments calculated directly from the energy levels of two-electron configurations. We show that for electric transitions of even multipole order and for magnetic transitions, the available expressions in the literature, which are implemented in widely used atomic codes, are incorrect. We suggest an alternative method for the calculation of the UTA energy variance and shift by using the analytical expressions for the two-electron energy levels and line-strengths. The method is much more efficient and simple than the use of the traditional lengthy analytic expressions. Finally, the effect of UTA widths on Super-Transition-Array (STA) spectral opacity is shown for several examples.

  10. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    Science.gov (United States)

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-01

Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even with moderate uncertainty (30%) in the variance function, weighted regression still outperforms unweighted regression. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
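Accounting for a power-model variance in a calibration reduces, in the simplest case, to weighted least squares with weights proportional to 1/variance. A hedged sketch (the calibration line, the noise model, and the power-law exponent are invented for illustration, not the paper's instrument data):

```python
import random

def wls_line(x, y, w):
    """Weighted least-squares fit of y = b0 + b1*x with weights w (w ∝ 1/variance)."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b1 = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
          / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return yb - b1 * xb, b1

random.seed(3)
conc = [1, 2, 5, 10, 20, 50]
# Heteroskedastic responses: sd grows with signal as a power model, sd = 0.2 * c**0.5.
resp = [2.0 * c + random.gauss(0, 0.2 * c ** 0.5) for c in conc]
weights = [1.0 / (0.2 * c ** 0.5) ** 2 for c in conc]  # weight = 1 / variance
b0, b1 = wls_line(conc, resp, weights)
```

The weights downweight the noisy high-concentration points, so the fitted intercept and slope (and hence the detection limit derived from them) are estimated mainly from the precise low-signal region.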

  11. Building Confidence Intervals with Block Bootstraps for the Variance Ratio Test of Predictability

    OpenAIRE

    Eduardo José Araújo Lima; Benjamin Miranda Tabak

    2007-01-01

    This paper compares different versions of the multiple variance ratio test based on bootstrap techniques for the construction of empirical distributions. It also analyzes the crucial issue of selecting optimal block sizes when block bootstrap procedures are used, by applying the methods developed by Hall et al. (1995) and by Politis and White (2004). By comparing the results of the different methods using Monte Carlo simulations, we conclude that methodologies using block bootstrap methods pr...

  12. Conversations across Meaning Variance

    Science.gov (United States)

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  13. Sample correlations of infinite variance time series models: an empirical and theoretical study

    Directory of Open Access Journals (Sweden)

    Jason Cohen

    1998-01-01

    Full Text Available When the elements of a stationary ergodic time series have finite variance the sample correlation function converges (with probability 1 to the theoretical correlation function. What happens in the case where the variance is infinite? In certain cases, the sample correlation function converges in probability to a constant, but not always. If within a class of heavy tailed time series the sample correlation functions do not converge to a constant, then more care must be taken in making inferences and in model selection on the basis of sample autocorrelations. We experimented with simulating various heavy tailed stationary sequences in an attempt to understand what causes the sample correlation function to converge or not to converge to a constant. In two new cases, namely the sum of two independent moving averages and a random permutation scheme, we are able to provide theoretical explanations for a random limit of the sample autocorrelation function as the sample grows.
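One of the cases where the sample autocorrelation of an infinite-variance series does converge to a constant is a linear moving-average process with heavy-tailed innovations. A small experiment (the Cauchy innovations and the MA(1) coefficient are illustrative choices, not taken from the paper):

```python
import math
import random

def sample_acf1(x):
    """Sample lag-1 autocorrelation without mean correction, a common choice
    for heavy-tailed series whose mean may not exist."""
    num = sum(a * b for a, b in zip(x, x[1:]))
    den = sum(a * a for a in x)
    return num / den

random.seed(7)
theta = 0.5
# iid standard Cauchy innovations via the inverse-CDF transform.
z = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(100001)]
x = [z[i] + theta * z[i - 1] for i in range(1, len(z))]  # MA(1), infinite variance

acf1 = sample_acf1(x)
limit = theta / (1 + theta ** 2)  # probability limit for a linear process: 0.4
```

Even though the variance is infinite, the ratio is dominated by the largest innovations, each of which contributes theta * z_k^2 to the numerator and (1 + theta^2) * z_k^2 to the denominator, pushing the sample value toward theta / (1 + theta^2).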

  14. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    Science.gov (United States)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, that is, defining the optimum number and location of additional boreholes. A great deal of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the objective function, serving as a criterion for uncertainty assessment, and the problem is solved through optimization methods. Although kriging variance is known to have many advantages in objective function definition, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in boundary uncertainty assessment, the application of combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function have made the algorithm output sensitive to variations of grade, domain boundaries and the thickness of the mineralization domain. The comparison between the results of different optimization algorithms proved that for the presented case the application of particle swarm optimization is more appropriate than simulated annealing.

  15. On efficiency of mean-variance based portfolio selection in DC pension schemes

    OpenAIRE

    Elena Vigna

    2010-01-01

    We consider the portfolio selection problem in the accumulation phase of a defined contribution (DC) pension scheme. We solve the mean-variance portfolio selection problem using the embedding technique pioneered by Zhou and Li (2000) and show that it is equivalent to a target-based optimization problem, consisting in the minimization of a quadratic loss function. We support the use of the target-based approach in DC pension funds for three reasons. Firstly, it transforms the difficult problem...

  16. The impact of order variance amplification/dampening on supply chain performance

    OpenAIRE

    Boute, Robert; Disney, S; Lambrecht, Marc; Van Houdt, B

    2006-01-01

    We consider a two echelon supply chain where a single retailer holds an inventory of finished goods to satisfy an i.i.d. customer demand, and a single manufacturer produces the retailer's replenishment orders on a make-to-order basis. The objective of this paper is to analyse the impact of the retailer's replenishment policy (order variance amplification/dampening) on supply chain performance. Inventory control policies at the retailer often transmit customer demand variability to the manufac...

  17. Forecasting the variance and return of Mexican financial series with symmetric GARCH models

    Directory of Open Access Journals (Sweden)

    Fátima Irina VILLALBA PADILLA

    2013-03-01

Full Text Available The present research shows the application of generalized autoregressive conditional heteroskedasticity (GARCH) models to forecast the variance and return of the IPC, the EMBI, the weighted-average government funding rate, the fix exchange rate and the Mexican oil reference, as important tools for investment decisions. Forecasts in-sample and out-of-sample are performed. The period covered runs from 2005 to 2011.
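The variance forecast at the heart of such GARCH applications is a one-line recursion; a minimal GARCH(1,1) sketch (the parameter values and return sequences are invented for illustration, not estimates for the Mexican series):

```python
def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead conditional variance from the GARCH(1,1) recursion
    sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t."""
    sigma2 = omega / (1.0 - alpha - beta)  # initialize at the unconditional variance
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2

# Hypothetical daily returns (percent); a large final shock raises the forecast.
calm = garch11_forecast([0.1, -0.2, 0.15], omega=0.1, alpha=0.1, beta=0.8)
shocked = garch11_forecast([0.1, -0.2, 3.0], omega=0.1, alpha=0.1, beta=0.8)
```

The recursion makes the volatility clustering explicit: a large squared return feeds directly into the next period's variance forecast, while the beta term makes elevated variance persist.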

  18. Variance in system dynamics and agent based modelling using the SIR model of infectious diseases

    OpenAIRE

    Ahmed, Aslam; Greensmith, Julie; Aickelin, Uwe

    2012-01-01

    Classical deterministic simulations of epidemiological processes, such as those based on System Dynamics, produce a single result based on a fixed set of input parameters with no variance between simulations. Input parameters are subsequently modified on these simulations using Monte-Carlo methods, to understand how changes in the input parameters affect the spread of results for the simulation. Agent Based simulations are able to produce different output results on each run based on knowledg...

  19. Genetic selection for increased mean and reduced variance of twinning rate in Belclare ewes.

    Science.gov (United States)

    Cottle, D J; Gilmour, A R; Pabiou, T; Amer, P R; Fahey, A G

    2016-04-01

It is sometimes possible to breed for more uniform individuals by selecting animals with a greater tendency to be less variable, that is, those with a smaller environmental variance. This approach has been applied to reproduction traits in various animal species. We have evaluated fecundity in the Irish Belclare sheep breed by analysing flocks with differing average litter size (number of lambs per ewe per year, NLB) and have estimated the genetic variance in environmental variance of lambing traits using double hierarchical generalized linear models (DHGLM). The data set comprised 9470 litter size records from 4407 ewes collected in 56 flocks. The percentage of pedigreed lambing ewes with singles, twins and triplets was 30, 54 and 14%, respectively, in 2013 and has been relatively constant for the last 15 years. The variance of NLB increases with the mean in these data; the correlation of mean and standard deviation across sires is 0.50. The breeding goal is to increase the mean NLB without unduly increasing the incidence of triplets and higher litter sizes. The heritability estimates for lambing traits were NLB, 0.09; triplet occurrence (TRI), 0.07; and twin occurrence (TWN), 0.02. The highest and lowest twinning flocks differed by 23% (75% versus 52%) in the proportion of ewes lambing twins. Fitting bivariate sire models to NLB and the residual from the NLB model using a DHGLM found a strong genetic correlation (0.88 ± 0.07) between the sire effect for the magnitude of the residual (VE) and sire effects for NLB, confirming the general observation that increased average litter size is associated with increased variability in litter size. We propose a threshold model that may help breeders with low litter size increase the percentage of twin bearers without unduly increasing the percentage of ewes bearing triplets in Belclare sheep. PMID:26081782

  20. An R package "VariABEL" for genome-wide searching of potentially interacting loci by testing genotypic variance heterogeneity

    Directory of Open Access Journals (Sweden)

    Struchalin Maksim V

    2012-01-01

Full Text Available Abstract Background Hundreds of new loci have been discovered by genome-wide association studies of human traits. These studies have mostly focused on associations between a single locus and a trait. Interactions between genes, and between genes and environmental factors, are of interest as they can improve our understanding of the genetic background underlying complex traits. Genome-wide testing of complex genetic models is a computationally demanding task. Moreover, testing of such models leads to multiple-comparison problems that reduce the probability of new findings. Assuming that the genetic model underlying a complex trait can include hundreds of genes and environmental factors, testing these models in genome-wide association studies presents substantial difficulties. We and Paré with colleagues (2010) developed a method allowing such difficulties to be overcome. The method is based on the fact that loci which are involved in interactions can show genotypic variance heterogeneity of a trait. Genome-wide testing of such heterogeneity can be a fast scanning approach which can point to the interacting genetic variants. Results In this work we present a new method, SVLM, allowing for variance heterogeneity analysis of imputed genetic variation. The type I error and power of this test are investigated and contrasted with those of Levene's test. We also present an R package, VariABEL, implementing existing and newly developed tests. Conclusions Variance heterogeneity analysis is a promising method for detection of potentially interacting loci. The new method and software package developed in this work will facilitate such analysis in a genome-wide context.

  1. A note on the asymptotic distribution of likelihood ratio tests to test variance components.

    Science.gov (United States)

    Visscher, Peter M

    2006-08-01

    When using maximum likelihood methods to estimate genetic and environmental components of (co)variance, it is common to test hypotheses using likelihood ratio tests, since such tests have desirable asymptotic properties. In particular, the standard likelihood ratio test statistic is assumed asymptotically to follow a chi2 distribution with degrees of freedom equal to the number of parameters tested. Using the relationship between least squares and maximum likelihood estimators for balanced designs, it is shown why the asymptotic distribution of the likelihood ratio test for variance components does not follow a chi2 distribution with degrees of freedom equal to the number of parameters tested when the null hypothesis is true. Instead, the distribution of the likelihood ratio test is a mixture of chi2 distributions with different degrees of freedom. Implications for testing variance components in twin designs and for quantitative trait loci mapping are discussed. The appropriate distribution of the likelihood ratio test statistic should be used in hypothesis testing and model selection. PMID:16899155
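The practical consequence of the result above, for a single variance component tested on the boundary of the parameter space, is that the test statistic follows a 50:50 mixture of a point mass at zero and a chi-square with 1 df, so the naive chi-square p-value should be halved. A sketch (the example LRT value is illustrative):

```python
import math

def chi2_1_sf(x):
    """Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2.0))

def boundary_lrt_pvalue(lrt):
    """p-value for H0: sigma^2 = 0 on the boundary, where the LRT follows the
    mixture 0.5*chi2_0 + 0.5*chi2_1, so p = 0.5 * P(chi2_1 > lrt) for lrt > 0."""
    if lrt <= 0:
        return 1.0
    return 0.5 * chi2_1_sf(lrt)

lrt = 2.71                    # hypothetical observed likelihood ratio statistic
naive = chi2_1_sf(lrt)        # chi2_1 p-value, just above 0.05 * 2
mixed = boundary_lrt_pvalue(lrt)  # correct mixture p-value, half of the naive one
```

Using the naive chi-square reference distribution is conservative here: the same statistic that misses the 5% level under chi2_1 is significant under the correct mixture.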

  2. Increasing genetic variance of body mass index during the Swedish obesity epidemic

    DEFF Research Database (Denmark)

    Rokholm, Benjamin; Silventoinen, Karri; Tynelius, Per;

    2011-01-01

There is no doubt that the dramatic worldwide increase in obesity prevalence is due to changes in environmental factors. However, twin and family studies suggest that genetic differences are responsible for the major part of the variation in adiposity within populations. Recent studies show ... that the genetic effects on body mass index (BMI) may be stronger when combined with presumed risk factors for obesity. We tested the hypothesis that the genetic variance of BMI has increased during the obesity epidemic.

  3. Testing an Extended Theoretical Framework to Explain Variance in Use of a Public Health Information System

    OpenAIRE

    Wangia, Victoria

    2012-01-01

    Objectives: This study examined determinants of using an immunization registry, explaining the variance in use. The technology acceptance model (TAM) was extended with contextual factors (contextualized TAM) to test hypotheses about immunization registry usage. Commitment to change, perceived usefulness, perceived ease of use, job-task changes, subjective norm, computer self-efficacy and system interface characteristics were hypothesized to affect usage. Method: The quantitative study was a p...

  4. A Practical Methodology for Quantifying Random and Systematic Components of Unexplained Variance in a Wind Tunnel

    Science.gov (United States)

    Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.

    2012-01-01

This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.

  5. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 2. Assessment of MCNP Statistical Analysis of keff Eigenvalue Convergence with an Analytical Criticality Verification Test Set

    International Nuclear Information System (INIS)

Monte Carlo simulations of nuclear criticality eigenvalue problems are often performed by general purpose radiation transport codes such as MCNP. MCNP performs detailed statistical analysis of the criticality calculation and provides feedback to the user with warning messages, tables, and graphs. The purpose of the analysis is to provide the user with sufficient information to assess spatial convergence of the eigenfunction and thus the validity of the criticality calculation. As a test of this statistical analysis package in MCNP, analytic criticality verification benchmark problems have been used for the first time to assess the performance of the criticality convergence tests in MCNP. The MCNP statistical analysis capability has been recently assessed using the 75 multigroup criticality verification analytic problem test set. MCNP was verified with these problems at the 10^-4 to 10^-5 statistical error level using 40 000 histories per cycle and 2000 active cycles. In all cases, the final boxed combined keff answer was given with the standard deviation and three confidence intervals that contained the analytic keff. To test the effectiveness of the statistical analysis checks in identifying poor eigenfunction convergence, ten problems from the test set were deliberately run incorrectly using 1000 histories per cycle, 200 active cycles, and 10 inactive cycles. Six problems with large dominance ratios were chosen from the test set because they do not achieve the normal spatial mode in the beginning of the calculation. To further stress the convergence tests, these problems were also started with an initial fission source point 1 cm from the boundary thus increasing the likelihood of a poorly converged initial fission source distribution. The final combined keff confidence intervals for these deliberately ill-posed problems did not include the analytic keff value. In no case did a bad confidence interval go undetected. Warning messages were given signaling that the

  6. On the Design of Attitude-Heading Reference Systems Using the Allan Variance.

    Science.gov (United States)

    Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis

    2016-04-01

    The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results over short time scales, but these rapidly degrade over longer intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering are not trivial tasks, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV). PMID:26800535
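
As a concrete companion to the methodology described above, a minimal overlapping Allan variance estimator for evenly sampled rate data can be written in a few lines (the white-noise sanity check reflects the textbook tau^(-1/2) Allan-deviation slope, not results from the paper):

```python
import numpy as np

def allan_variance(y, m):
    """Overlapping Allan variance of rate samples y at cluster size m.

    The averaging time is tau = m * tau0, where tau0 is the (uniform)
    sampling interval of y.
    """
    y = np.asarray(y, dtype=float)
    if 2 * m >= y.size:
        raise ValueError("cluster size too large for record length")
    c = np.concatenate(([0.0], np.cumsum(y)))   # prefix sums
    avg = (c[m:] - c[:-m]) / m                  # overlapping cluster means
    d = avg[m:] - avg[:-m]                      # adjacent-cluster differences
    return 0.5 * float(np.mean(d * d))

# White noise: Allan deviation should fall roughly as tau^(-1/2).
rng = np.random.default_rng(1)
y = rng.standard_normal(100_000)
adev_1 = np.sqrt(allan_variance(y, 1))
adev_100 = np.sqrt(allan_variance(y, 100))
```

Plotting the Allan deviation over a range of m on log-log axes exposes the noise regimes (quantization noise, angle random walk, bias instability) from which the sensor error model is built.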

  8. What do differences between multi-voxel and univariate analysis mean? How subject-, voxel-, and trial-level variance impact fMRI analysis.

    Science.gov (United States)

    Davis, Tyler; LaRocque, Karen F; Mumford, Jeanette A; Norman, Kenneth A; Wagner, Anthony D; Poldrack, Russell A

    2014-08-15

    Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results.

  9. Statistics of Dark Matter Substructure: III. Halo-to-Halo Variance

    CERN Document Server

    Jiang, Fangzhou

    2016-01-01

    We present a study of unprecedented statistical power regarding the halo-to-halo variance of dark matter substructure. Using a combination of N-body simulations and a semi-analytical model, we investigate the variance in subhalo mass fractions and subhalo occupation numbers, with an emphasis on how these statistics scale with halo formation time. We demonstrate that the subhalo mass fraction, f_sub, is mainly a function of halo formation time, with earlier forming haloes having less substructure. At fixed formation redshift, the average f_sub is virtually independent of halo mass, and the mass dependence of f_sub is therefore mainly a manifestation of more massive haloes assembling later. We compare observational constraints on f_sub from gravitational lensing to our model predictions and simulation results. Although the inferred f_sub are substantially higher than the median LCDM predictions, they fall within the 95th percentile due to halo-to-halo variance. We show that while the halo occupation distributio...

  10. CONSTANT ELASTICITY OF VARIANCE MODEL AND ANALYTICAL STRATEGIES FOR ANNUITY CONTRACTS

    Institute of Scientific and Technical Information of China (English)

    XIAO Jian-wu; YIN Shao-hua; QIN Cheng-lin

    2006-01-01

    The constant elasticity of variance (CEV) model was constructed to study a defined-contribution pension plan whose benefits are paid as an annuity. The paper also shows how the Legendre transform and dual theory can be applied to find an optimal investment policy over a participant's whole life in the pension plan. Finally, two explicit solutions for the exponential utility function in the two different periods (before and after retirement) are derived, yielding the optimal investment strategies for both periods.
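
For readers who want to experiment, the CEV dynamics dS = mu*S dt + sigma*S^beta dW can be simulated with a basic Euler-Maruyama scheme. All parameter values below are hypothetical, and the sketch does not reproduce the paper's Legendre-transform derivation:

```python
import numpy as np

def simulate_cev(s0, mu, sigma, beta, t, n_steps, n_paths, seed=0):
    """Euler-Maruyama terminal values of dS = mu*S dt + sigma*S**beta dW."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, float(s0))
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        s = s + mu * s * dt + sigma * np.maximum(s, 0.0) ** beta * dw
        s = np.maximum(s, 0.0)   # absorb at zero so S**beta stays real
    return s

# Illustrative parameters only (not calibrated to the pension problem).
paths = simulate_cev(s0=100.0, mu=0.05, sigma=0.2, beta=0.9, t=1.0,
                     n_steps=252, n_paths=20_000)
```

With beta = 1 this collapses to geometric Brownian motion; beta < 1 makes local volatility fall as the price rises, the leverage effect the CEV model is designed to capture.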

  11. MINIMUM VARIANCE HEDGING AND THE ENCOMPASSING PRINCIPLE: ASSESSING THE EFFECTIVENESS OF FUTURES HEDGES

    OpenAIRE

    Manfredo, Mark R.; Sanders, Dwight R.

    2003-01-01

    An empirical methodology is developed for statistically testing the hedging effectiveness among competing futures contracts. The presented methodology is based on the encompassing principle, widely used in the forecasting literature, and applied here to minimum variance hedging regressions. Intuitively, the test is based on an alternative futures contract's ability to reduce residual basis risk by offering either diversification or a smaller absolute level of basis risk than a preferred futur...
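
The minimum-variance hedging regression underlying such tests is simple to state: the hedge ratio is the OLS slope of spot price changes on futures price changes, and the regression R^2 measures hedging effectiveness. A small synthetic sketch (data simulated, not from the paper):

```python
import numpy as np

def min_variance_hedge(spot_changes, futures_changes):
    """Minimum-variance hedge ratio h* = Cov(ds, df) / Var(df) and the
    hedging effectiveness (fraction of spot variance removed)."""
    ds = np.asarray(spot_changes, dtype=float)
    df = np.asarray(futures_changes, dtype=float)
    h = np.cov(ds, df, ddof=1)[0, 1] / np.var(df, ddof=1)
    resid = ds - h * df                         # residual basis risk
    effectiveness = 1.0 - np.var(resid, ddof=1) / np.var(ds, ddof=1)
    return h, effectiveness

# Synthetic example: spot tracks futures with noise (basis risk).
rng = np.random.default_rng(2)
df = rng.standard_normal(1_000)
ds = 0.8 * df + 0.3 * rng.standard_normal(1_000)
h, eff = min_variance_hedge(ds, df)
```

The encompassing test in the paper goes one step further, asking whether a competing contract's residuals can explain any of a preferred contract's residual basis risk.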

  12. Computing Variances from Data with Complex Sampling Designs: A Comparison of Stata and SPSS

    OpenAIRE

    Alicia C. Dowd; Michael B. Duggan

    2001-01-01

    Most of the data sets available through the National Center for Education Statistics (NCES) are based on complex sampling designs involving multi-stage sampling, stratification, and clustering. These complex designs require appropriate statistical techniques to calculate the variance. Stata employs specialized methods that appropriately adjust for the complex designs, while SPSS does not. Researchers using SPSS must obtain the design effects through NCES and adjust the standard errors generat...
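
The design-effect correction described here amounts to inflating the simple-random-sampling standard error by sqrt(DEFF). A minimal sketch (the DEFF value is illustrative, not an NCES figure):

```python
import math

def adjust_se(se_srs, deff):
    """Inflate a simple-random-sampling standard error by the design effect.

    DEFF = Var_complex / Var_srs, hence SE_complex = SE_srs * sqrt(DEFF).
    """
    return se_srs * math.sqrt(deff)

# Clustered designs typically have DEFF > 1, widening confidence intervals.
se_complex = adjust_se(se_srs=0.012, deff=2.5)
```

This is the manual correction an SPSS user must apply using published design effects; Stata's survey (svy) estimators compute design-adjusted variances directly from the strata and cluster variables.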

  13. Study on long memory of river runoff based on rescaled variance (V/S) analysis

    Institute of Scientific and Technical Information of China (English)

    孙东永; 黄强; 王义民

    2011-01-01

    To account for the complex stochastic and fluctuating behavior of river runoff, rescaled-variance (V/S) analysis was applied to study the long memory of runoff series. Hurst indexes of the annual runoff series at the Lanzhou and Guide stations on the upper Yellow River were computed by V/S analysis, compared with those from rescaled-range (R/S) analysis, and tested for stability and short-term correlation. The results show that the V/S Hurst indexes of both stations are greater than 0.5, indicating relatively strong long memory. Compared with R/S analysis, V/S analysis is less vulnerable to the influence of short-term correlation and is thus a robust and effective fractal method, providing a new approach to long-memory analysis of river runoff.
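
A minimal sketch of rescaled-variance (V/S) Hurst estimation, under the common convention that within a window of length n one takes the variance of the partial sums of the mean-removed series divided by the series variance, which scales roughly as n^(2H): H then follows from a log-log slope. Window sizes and the white-noise check are illustrative; this is not the authors' code.

```python
import numpy as np

def vs_statistic(x):
    """Rescaled-variance statistic: variance of the partial sums of the
    mean-removed series, divided by the series variance."""
    x = np.asarray(x, dtype=float)
    s = np.cumsum(x - x.mean())
    return float(np.var(s) / np.var(x))

def hurst_vs(x, sizes):
    """Hurst exponent from the scaling V/S(n) ~ n^(2H) across window sizes."""
    vs = []
    for n in sizes:
        blocks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        vs.append(np.mean([vs_statistic(b) for b in blocks]))
    slope = np.polyfit(np.log(sizes), np.log(vs), 1)[0]
    return slope / 2.0

# White noise has no long memory, so H should come out near 0.5;
# a series with H > 0.5, like the runoff records above, is persistent.
rng = np.random.default_rng(3)
h = hurst_vs(rng.standard_normal(20_000), sizes=[16, 32, 64, 128, 256])
```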

  14. Estimation of sensible heat, water vapor, and CO2 fluxes using the flux-variance method.

    Science.gov (United States)

    Hsieh, Cheng-I; Lai, Mei-Chun; Hsia, Yue-Joe; Chang, Tsang-Jung

    2008-07-01

    This study investigated the flux-variance relationships of temperature, humidity, and CO2, and examined the performance of this method for predicting sensible heat (H), water vapor (LE), and CO2 fluxes (F_CO2) against eddy-covariance flux measurements at three different ecosystems: grassland, paddy rice field, and forest. The H and LE estimates were found to be in good agreement with the measurements over the three fields. The prediction accuracy of LE could be improved by around 15% if the predictions were obtained by the flux-variance method in conjunction with measured sensible heat fluxes. Moreover, the paddy rice field was found to be a special case in which water vapor follows the flux-variance relation better than heat does. However, the CO2 flux predictions varied from poor to fair among the three sites; this is attributed to the complicated distribution of CO2 sources and sinks. Our results also showed that heat and water vapor were transported with the same efficiency above the grassland and rice paddy. For the forest, heat was transported 20% more efficiently than water vapor.

  15. Minimum variance geographic sampling

    Science.gov (United States)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographic scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  16. Numerical estimation of the noncompartmental pharmacokinetic parameters variance and coefficient of variation of residence times.

    Science.gov (United States)

    Purves, R D

    1994-02-01

    Noncompartmental investigation of the distribution of residence times from concentration-time data requires estimation of the second noncentral moment (AUM2C) as well as the area under the curve (AUC) and the area under the moment curve (AUMC). The accuracy and precision of 12 numerical integration methods for AUM2C were tested on simulated noisy data sets representing bolus, oral, and infusion concentration-time profiles. The root-mean-squared errors given by the best methods were only slightly larger than the corresponding errors in the estimation of AUC and AUMC. AUM2C extrapolated "tail" areas as estimated from a log-linear fit are biased, but the bias is minimized by application of a simple correction factor. The precision of estimates of variance of residence times (VRT) can be severely impaired by the variance of the extrapolated tails. VRT is therefore not a useful parameter unless the tail areas are small or can be shown to be estimated with little error. Estimates of the coefficient of variation of residence times (CVRT) and its square (CV2) are robust in the sense of being little affected by errors in the concentration values. The accuracy of estimates of CVRT obtained by optimum numerical methods is equal to or better than that of AUC and mean residence time estimates, even in data sets with large tail areas.
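
The moment quantities discussed above can be sketched with plain trapezoidal integration. No extrapolated tail is added here, so this mirrors only the finite-data part of the problem; the mono-exponential check uses the exact results MRT = 1/k and CVRT = 1:

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def residence_time_stats(t, c):
    """AUC, MRT, VRT, and CVRT from concentration-time data."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    auc = trapezoid(c, t)                    # zeroth moment
    aumc = trapezoid(t * c, t)               # first moment curve
    aum2c = trapezoid(t ** 2 * c, t)         # second noncentral moment
    mrt = aumc / auc
    vrt = aum2c / auc - mrt ** 2
    cvrt = float(np.sqrt(vrt) / mrt)
    return auc, mrt, vrt, cvrt

# Mono-exponential bolus c(t) = exp(-k*t): exactly MRT = 1/k, CVRT = 1.
k = 0.5
t = np.linspace(0.0, 40.0, 4001)             # tail beyond t=40 is ~e^-20
auc, mrt, vrt, cvrt = residence_time_stats(t, np.exp(-k * t))
```

With a truncated sampling window the neglected tail inflates the error in VRT far more than in AUC or MRT, which is exactly the fragility the paper documents.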

  17. Volatility investing with variance swaps

    OpenAIRE

    Härdle, Wolfgang Karl; Silyakova, Elena

    2010-01-01

    Traditionally, volatility is viewed as a measure of the variability, or risk, of an underlying asset. Recently, however, investors have begun to look at volatility from a different angle, owing to the emergence of a market for new derivative instruments: variance swaps. In this paper we first introduce the general idea of volatility trading using variance swaps. Then we describe the valuation and hedging methodology for vanilla variance swaps as well as for third-generation volatility derivativ...
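
The variance-swap payoff itself is easy to state: at expiry the long side receives the difference between the annualized realized variance of log returns and the fixed variance strike, scaled by the variance notional. A sketch with simulated prices (all numbers illustrative):

```python
import numpy as np

def variance_swap_pnl(prices, strike_var, var_notional, periods_per_year=252):
    """Expiry P&L of a long variance swap.

    Realized variance uses the market's zero-mean convention:
    sigma^2 = (A / N) * sum(log-return^2), with A = periods_per_year.
    """
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    realized = periods_per_year * float(np.mean(r * r))
    return var_notional * (realized - strike_var), realized

# Illustrative: one year of daily prices at roughly 20% annualized vol,
# against a variance strike of 0.04 (= 20%^2).
rng = np.random.default_rng(4)
steps = 0.20 / np.sqrt(252) * rng.standard_normal(252)
prices = 100.0 * np.exp(np.concatenate(([0.0], np.cumsum(steps))))
pnl, realized = variance_swap_pnl(prices, strike_var=0.04, var_notional=10_000)
```

Quoting in variance rather than volatility is what makes the contract replicable with a static strip of options plus delta hedging.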

  18. Enhancement of high-energy distribution tail in Monte Carlo semiconductor simulations using a Variance Reduction Scheme

    Directory of Open Access Journals (Sweden)

    Vincenza Di Stefano

    2009-11-01

    The Multicomb variance reduction technique has been introduced into direct Monte Carlo simulation of submicrometric semiconductor devices. The method has been implemented for bulk silicon. The simulations show that the statistical variance of hot electrons is reduced at some computational cost. The method is efficient and easy to implement in existing device simulators.

  19. Bayesian Variance Component Estimation Using the Inverse-Gamma Class of Priors in a Nested Generalizability Design

    Science.gov (United States)

    Arenson, Ethan A.

    2009-01-01

    One of the problems inherent in variance component estimation centers around inadmissible estimates. Such estimates occur when there is more variability within groups, relative to between groups. This paper suggests a Bayesian approach to resolve inadmissibility by placing noninformative inverse-gamma priors on the variance components, and…
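
In the simplest setting this approach reduces to a conjugate update: for normal data with known mean and an inverse-gamma prior on the variance, the posterior is again inverse-gamma and is supported on positive values, so the variance estimate is always admissible. A sketch under those simplifying assumptions (not the paper's full nested generalizability design; hyperparameters and data are illustrative):

```python
import numpy as np

def variance_posterior_draws(y, mu, a, b, n_draws, rng):
    """Posterior draws for a normal variance under an inverse-gamma prior.

    Model: y_i ~ N(mu, sigma2) with sigma2 ~ Inverse-Gamma(a, b); the
    conjugate posterior is Inverse-Gamma(a + n/2, b + 0.5*sum((y-mu)^2)).
    A near-flat choice such as a = b = 0.001 mimics a noninformative prior.
    """
    y = np.asarray(y, dtype=float)
    a_post = a + y.size / 2.0
    b_post = b + 0.5 * float(np.sum((y - mu) ** 2))
    # If X ~ Gamma(shape=a_post, rate=b_post), then 1/X ~ IG(a_post, b_post).
    return b_post / rng.gamma(a_post, 1.0, size=n_draws)

rng = np.random.default_rng(5)
y = rng.normal(0.0, 2.0, size=500)            # true variance is 4
draws = variance_posterior_draws(y, mu=0.0, a=0.001, b=0.001,
                                 n_draws=20_000, rng=rng)
```

Every draw is strictly positive, which is the Bayesian resolution of the inadmissibility problem the abstract describes.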

  20. Variance of phase fluctuations of waves propagating through a random medium

    Science.gov (United States)

    Chu, Nelson C.; Kong, Jin Au; Yueh, Simon H.; Nghiem, Son V.; Fleischman, Jack G.; Ayasli, Serpil; Shin, Robert T.

    1992-01-01

    As an electromagnetic wave propagates through a random scattering medium, such as a forest, its energy is attenuated and random phase fluctuations are induced. The magnitude of the random phase fluctuations induced is important in estimating how well a Synthetic Aperture Radar (SAR) can image objects within the scattering medium. The two-layer random medium model, consisting of a scattering layer between free space and ground, is used to calculate the variance of the phase fluctuations induced between a transmitter located above the random medium and a receiver located below the random medium. The scattering properties of the random medium are characterized by a correlation function of the random permittivity fluctuations. The effective permittivity of the random medium is first calculated using the strong fluctuation theory, which accounts for large permittivity fluctuations of the scatterers. The distorted Born approximation is used to calculate the first-order scattered field. A perturbation series for the phase of the received field in the Rytov approximation is then introduced and the variance of the phase fluctuations is also calculated assuming that the transmitter and receiver are in the paraxial limit of the random medium, which allows an analytic solution to be obtained. Results are compared using the paraxial approximation, scalar Green's function formulation, and dyadic Green's function formulation. The effects studied are the dependence of the variance of the phase fluctuations on receiver location in lossy and lossless regions, medium thickness, correlation length and fractional volume of scatterers, depolarization of the incident wave, ground layer permittivity, angle of incidence, and polarization.