WorldWideScience

Sample records for analysis of variance

  1. Nominal analysis of "variance".

    Science.gov (United States)

    Weiss, David J

    2009-08-01

    Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
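The paper's NANOVA partitions match proportions across design sources; as a much-simplified sketch (not the authors' Windows program or their exact partitioning), the core idea — a match-proportion statistic for pairs of nominal responses, with significance by resampling — might look like this in Python for two independent groups:

```python
import itertools
import random

def match_proportion(responses):
    """Proportion of pairs of nominal responses that match (the variance analogue)."""
    pairs = list(itertools.combinations(responses, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def nanova_like_test(group_a, group_b, n_resamples=2000, seed=0):
    """Permutation test: do within-group match proportions exceed chance?"""
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    observed = (match_proportion(group_a) + match_proportion(group_b)) / 2
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(group_a)], pooled[len(group_a):]
        stat = (match_proportion(perm_a) + match_proportion(perm_b)) / 2
        if stat >= observed:
            count += 1
    return observed, count / n_resamples
```

The factorial partitioning described in the abstract would associate such proportions with each design source; this sketch shows only the one-factor case.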

  2. Fixed effects analysis of variance

    CERN Document Server

    Fisher, Lloyd; Birnbaum, Z W; Lukacs, E

    1978-01-01

Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis

  3. Analysis of Variance: Variably Complex

    Science.gov (United States)

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
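A minimal illustration of the one-way ANOVA described above, using SciPy's `f_oneway` on three hypothetical groups (the data values are invented for illustration):

```python
from scipy import stats

# Three independent groups; does at least one pair of means differ?
g1 = [23.1, 25.3, 24.8, 26.0]
g2 = [27.5, 28.1, 26.9, 29.2]
g3 = [24.0, 23.5, 25.1, 24.6]

f_stat, p_value = stats.f_oneway(g1, g2, g3)
```

A significant F only says that some group differs; follow-up comparisons are needed to say which.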

  4. Warped functional analysis of variance.

    Science.gov (United States)

    Gervini, Daniel; Carter, Patrick A

    2014-09-01

This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.

  5. Generalized analysis of molecular variance.

    Directory of Open Access Journals (Sweden)

    Caroline M Nievergelt

    2007-04-01

Full Text Available Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used to either estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by

  6. Analysis of variance for model output

    NARCIS (Netherlands)

    Jansen, M.J.W.

    1999-01-01

    A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va

  7. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.

  8. Formative Use of Intuitive Analysis of Variance

    Science.gov (United States)

    Trumpower, David L.

    2013-01-01

Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

  9. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
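The three-step recipe in the abstract (critical F value, noncentrality parameter, noncentral F distribution) can be sketched as follows; the degrees of freedom and noncentrality value are hypothetical placeholders, and the paper's mapping from multivariate effect size to noncentrality is omitted:

```python
from scipy.stats import f as f_dist, ncf

def approx_power(df1, df2, noncentrality, alpha=0.05):
    """Power = P(F > F_crit) under the noncentral F distribution."""
    f_crit = f_dist.ppf(1 - alpha, df1, df2)          # step 1: critical value
    return 1 - ncf.cdf(f_crit, df1, df2, noncentrality)  # steps 2-3

power = approx_power(df1=3, df2=36, noncentrality=12.0)
```

With noncentrality zero the "power" collapses to the significance level alpha, a useful sanity check.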

  10. Applications of non-parametric statistics and analysis of variance on sample variances

    Science.gov (United States)

    Myers, R. H.

    1981-01-01

Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, where possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
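As a hedged illustration of the kind of nonparametric alternatives surveyed here (the paper's specific recommendations are not reproduced), SciPy provides the Kruskal-Wallis test as a rank-based analog of one-way ANOVA and Levene's test as a robust check of the homogeneous-variance assumption:

```python
from scipy import stats

# Invented example data for three groups
a = [12.1, 14.3, 11.8, 13.5, 12.9]
b = [15.2, 16.8, 14.9, 17.1, 15.7]
c = [12.5, 13.1, 12.8, 13.4, 12.2]

h_stat, p_kw = stats.kruskal(a, b, c)   # rank-based one-way analog (no normality assumed)
w_stat, p_lev = stats.levene(a, b, c)   # robust test of equal variances
```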

  11. Functional analysis of variance for association studies.

    Directory of Open Access Journals (Sweden)

    Olga A Vsevolozhskaya

Full Text Available While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially if the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity.

  12. Analysis of variance of designed chromatographic data sets: The analysis of variance-target projection approach.

    Science.gov (United States)

    Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata

    2015-07-31

Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS), to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS is applied after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
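A toy sketch of the ANOVA decomposition step that precedes PCA (in ASCA) or PLS-TP (in the proposed method): each effect matrix holds the factor-level means of the centered data, and the deflated residual matrix is what the subsequent model would analyze. The data and names are hypothetical; this is not the authors' implementation:

```python
import numpy as np

def effect_matrix(X, labels):
    """Grand-mean center X, then replace each row by its factor-level mean."""
    Xc = X - X.mean(axis=0)
    E = np.zeros_like(Xc)
    for lev in np.unique(labels):
        mask = labels == lev
        E[mask] = Xc[mask].mean(axis=0)
    return Xc, E

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))                  # 8 samples x 5 variables (invented)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # one balanced two-level factor

Xc, E = effect_matrix(X, labels)
residuals = Xc - E   # deflated matrix: input to PCA (ASCA) or PLS-TP
```

For a balanced design the effect matrix is an orthogonal projection, so the residuals are orthogonal to it by construction.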

  13. An Analysis of Variance Framework for Matrix Sampling.

    Science.gov (United States)

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  14. Meta-analysis of ratios of sample variances.

    Science.gov (United States)

    Prendergast, Luke A; Staudte, Robert G

    2016-05-20

When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, which require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equal-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances, or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement, and the validity of the approaches is reinforced by simulation studies and an application to a real data set.
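A per-study variance-ratio F-test, one ingredient of the proposed meta-analysis (the pooling across studies is not shown), could be sketched as follows; normality within each arm is assumed:

```python
import numpy as np
from scipy.stats import f as f_dist

def variance_ratio_test(x, y):
    """Two-sided F-test of equal variances for the two arms of one study."""
    s2x, s2y = np.var(x, ddof=1), np.var(y, ddof=1)
    ratio = s2x / s2y
    dfx, dfy = len(x) - 1, len(y) - 1
    p_one = f_dist.sf(ratio, dfx, dfy) if ratio > 1 else f_dist.cdf(ratio, dfx, dfy)
    return ratio, min(1.0, 2 * p_one)
```

The log of these ratios (with known asymptotic variance) is what a meta-estimate would combine across studies.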

  15. Wavelet Variance Analysis of EEG Based on Window Function

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yuan-zhuang; YOU Rong-yi

    2014-01-01

A new wavelet variance analysis method based on window functions is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and that the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.

  16. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  17. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design

  18. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    Science.gov (United States)

    Wang, Tao

    2016-01-01

    An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.

  19. Analysis of variance in spectroscopic imaging data from human tissues.

    Science.gov (United States)

    Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit

    2012-01-17

The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.

  20. Analysis of Variance in the Modern Design of Experiments

    Science.gov (United States)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.

  1. Automated Extraction of Archaeological Traces by a Modified Variance Analysis

    Directory of Open Access Journals (Sweden)

    Tiziana D'Orazio

    2015-03-01

Full Text Available This paper considers the problem of detecting archaeological traces in digital aerial images by analyzing the pixel variance over regions around selected points. In order to decide if a point belongs to an archaeological trace or not, its surrounding regions are considered. The one-way ANalysis Of VAriance (ANOVA) is applied several times to detect the differences among these regions; in particular, the expected shape of the mark to be detected is used in each region. Furthermore, an effect size parameter is defined by comparing the statistics of these regions with the statistics of the entire population in order to measure how strongly the trace is appreciable. Experiments on synthetic and real images demonstrate the effectiveness of the proposed approach with respect to some state-of-the-art methodologies.
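The paper defines its own effect-size parameter by comparing region statistics with the whole population; the classical analog for a one-way ANOVA is eta-squared, sketched here as a hedged illustration rather than the authors' exact parameter:

```python
import numpy as np

def eta_squared(*groups):
    """Effect size: between-group sum of squares over total sum of squares."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_total = ((all_vals - grand) ** 2).sum()
    return ss_between / ss_total
```

Values near 0 mean group membership explains little of the pixel variance; values near 1 mean it explains almost all of it.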

  2. A guide to SPSS for analysis of variance

    CERN Document Server

    Levine, Gustav

    2013-01-01

    This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce

  3. Correct use of repeated measures analysis of variance.

    Science.gov (United States)

    Park, Eunsik; Cho, Meehye; Ki, Chang-Seok

    2009-02-01

    In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
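A minimal repeated measures ANOVA in Python, assuming the statsmodels package and invented long-format data (the paper's own demonstration uses SPSS):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: every subject is measured under every condition ("time")
data = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":    ["t1", "t2", "t3"] * 4,
    "score":   [5.1, 6.0, 7.2, 4.8, 5.9, 6.8, 5.5, 6.3, 7.0, 5.0, 5.8, 7.1],
})

res = AnovaRM(data, depvar="score", subject="subject", within=["time"]).fit()
table = res.anova_table   # columns: F Value, Num DF, Den DF, Pr > F
```

Follow-up multiple comparisons, as the paper stresses, require an appropriately corrected procedure rather than repeated paired t-tests at an uncorrected alpha.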

  4. Analysis of variance of an underdetermined geodetic displacement problem

    Energy Technology Data Exchange (ETDEWEB)

    Darby, D.

    1982-06-01

    It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.

  5. Objective Bayesian Comparison of Constrained Analysis of Variance Models.

    Science.gov (United States)

    Consonni, Guido; Paroli, Roberta

    2016-10-04

    In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means [Formula: see text] through an analysis of variance (ANOVA), a model may specify that [Formula: see text], while another one may state that [Formula: see text], and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.

  6. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
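A simulation sketch of the constant coefficient of variation model and the errors-in-variables problem described above: with only a few replicates, plugging the noisy sample mean into the variance function distorts the estimate relative to using the (normally unknown) true mean. All numbers are invented, and this is not the authors' SIMEX or semiparametric estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
cv = 0.2        # constant coefficient of variation: sd = cv * mean
n_rep = 3       # few replicates per gene, as is typical for microarrays

true_means = rng.uniform(100, 1000, size=500)
data = rng.normal(true_means[:, None], cv * true_means[:, None], size=(500, n_rep))

sample_means = data.mean(axis=1)
sample_vars = data.var(axis=1, ddof=1)

# Naive estimate plugs the noisy sample mean into the variance function;
# the oracle version (unavailable in practice) uses the true means.
naive_cv = np.sqrt(np.mean(sample_vars / sample_means**2))
oracle_cv = np.sqrt(np.mean(sample_vars / true_means**2))
```

The gap between `naive_cv` and `oracle_cv` is the errors-in-variables bias the paper's estimators are designed to remove.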

  7. Batch variation between branchial cell cultures: An analysis of variance

    DEFF Research Database (Denmark)

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed... and introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when... the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results, when we do not know a priori that something went wrong. The ANOVA is a very useful...

  8. Analysis of variance (ANOVA) models in lower extremity wounds.

    Science.gov (United States)

    Reed, James F

    2003-06-01

Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control, and the 2 treatments against each other, using 3 Student t tests. If we were to compare 4 treatment groups, then we would need 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so does the likelihood of finding a difference between any pair of groups simply by chance when no real difference exists: by definition, a Type I error. If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14. As the number of t tests increases, the experiment-wise error rate increases rather rapidly. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
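The experiment-wise error rate arithmetic quoted above (3 tests at alpha = .05 giving about .14) assumes independent tests and is easy to reproduce:

```python
def experimentwise_error(alpha, n_tests):
    """Probability of at least one Type I error across independent tests."""
    return 1 - (1 - alpha) ** n_tests

rate_3 = experimentwise_error(0.05, 3)   # 1 - 0.95**3 ~= 0.143
rate_6 = experimentwise_error(0.05, 6)   # 1 - 0.95**6 ~= 0.265
```

This is exactly the inflation that a single omnibus ANOVA, followed by controlled post hoc comparisons, is meant to avoid.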

  9. Analysis of variance in neuroreceptor ligand imaging studies.

    Directory of Open Access Journals (Sweden)

    Ji Hyun Ko

Full Text Available Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far to be applied in cases when there are more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also re-visit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is superior in terms of sensitivity to the conventional f-test while still controlling for type 1 error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in explorative PET studies.

  10. Analysis of variance in neuroreceptor ligand imaging studies.

    Science.gov (United States)

    Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P

    2011-01-01

Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far to be applied in cases when there are more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also re-visit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is superior in terms of sensitivity to the conventional f-test while still controlling for type 1 error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in explorative PET studies.

  11. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data

    DEFF Research Database (Denmark)

    Greve, Douglas N; Svarer, Claus; Fisher, Patrick M;

    2014-01-01

Surface-based smoothing resulted in dramatically less bias and the least variance of the methods tested for smoothing levels of 5 mm and higher. When used in combination with PVC, surface-based smoothing minimized the bias without significantly increasing the variance. Surface-based smoothing resulted in 2-4 times less... intersubject variance than when volume smoothing was used. This translates into more than 4 times fewer subjects needed in a group analysis to achieve similarly powered statistical tests. Surface-based smoothing has less bias and variance because it respects cortical geometry by smoothing the PET data only...

  12. Introduction to mixed modelling beyond regression and analysis of variance

    CERN Document Server

    Galwey, N W

    2007-01-01

    Mixed modelling is one of the most promising and exciting areas of statistical analysis, enabling more powerful interpretation of data through the recognition of random effects. However, many perceive mixed modelling as an intimidating and specialized technique.

  13. Structure analysis of interstellar clouds: II. Applying the Delta-variance method to interstellar turbulence

    CERN Document Server

    Ossenkopf, V; Stutzki, J

    2008-01-01

    The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. In paper I we proposed essential improvements to the Delta-variance analysis. In this paper we apply the improved Delta-variance analysis to i) a hydrodynamic turbulence simulation with prominent density and velocity structures, ii) an observed intensity map of rho Oph with irregular boundaries and variable uncertainties of the different data points, and iii) a map of the turbulent velocity structure in the Polaris Flare affected by the intensity dependence on the centroid velocity determination. The tests confirm the extended capabilities of the improved Delta-variance analysis. Prominent spatial scales were accurately identified and artifacts from a variable reliability of the data were removed. The analysis of the hydrodynamic simulations showed that the most prominent density structures created by the injection of a turbulent velocity structure are produced on a sca...

  14. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    Science.gov (United States)

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
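
    The variance-of-the-variance issue described in this record can be sketched numerically: the plug-in below uses the general moment expression Var(s²) ≈ m4/n − s⁴(n−3)/(n(n−1)) against the normal-theory value 2s⁴/(n−1), with a skewed sample standing in for non-normal synaptic amplitudes. This is a sketch only, not the paper's h-statistic estimators.

```python
# Moment-based plug-in estimate of the variance of the sample variance,
# compared to the normal-theory value that the record argues breaks down
# for non-normal amplitude distributions. Illustrative data only.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 500)       # skewed "synaptic amplitude" sample
n = x.size
s2 = x.var(ddof=1)                  # unbiased sample variance
m4 = np.mean((x - x.mean()) ** 4)   # sample fourth central moment

var_s2_general = m4 / n - s2 ** 2 * (n - 3) / (n * (n - 1))
var_s2_normal = 2 * s2 ** 2 / (n - 1)   # valid only under normality
print(var_s2_general, var_s2_normal)
```

For heavy-tailed data like this, the general estimate exceeds the normal-theory one, which is exactly why weighting the fit with the normal approximation misleads.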

  15. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    ...of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected...

  16. A Mean-Variance Analysis of Self-Financing Portfolios

    OpenAIRE

    Bob Korkie; Harry J. Turtle

    2002-01-01

    This paper develops the analytics and geometry of the investment opportunity set (IOS) and the test statistics for self-financing portfolios. A self-financing portfolio is a set of long and short investments such that the sum of their investment weights, or net investment, is zero. This contrasts with a standard portfolio that has investment weights summing to one. Examples of self-financing portfolios are hedges, overlays, arbitrage portfolios, swaps, and long/short portfolios. A standard po...

  17. Gender variance on campus : a critical analysis of transgender voices

    OpenAIRE

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses, which leads to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn, 2005; Beemyn, Curtis, Davis, & Tubbs, 2005). This study examined the perceptions of transgender inclusion, ways in which leadership structures or entiti...

  18. Gender Variance on Campus: A Critical Analysis of Transgender Voices

    Science.gov (United States)

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses, which leads to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn,…

  19. Analysis of variance and functional measurement a practical guide

    CERN Document Server

    Weiss, David J

    2006-01-01

    Chapter I. Introduction; Chapter II. One-way ANOVA; Chapter III. Using the Computer; Chapter IV. Factorial Structure; Chapter V. Two-way ANOVA; Chapter VI. Multi-factor Designs; Chapter VII. Error Purifying Designs; Chapter VIII. Specific Comparisons; Chapter IX. Measurement Issues; Chapter X. Strength of Effect**; Chapter XI. Nested Designs**; Chapter XII. Missing Data**; Chapter XIII. Confounded Designs**; Chapter XIV. Introduction to Functional Measurement**; Terms from Introductory Statistics; References; Subject Index; Name Index

  20. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…

  1. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

    Science.gov (United States)

    Finch, W. Holmes

    2016-01-01

    Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

  2. A Demonstration of the Analysis of Variance Using Physical Movement and Space

    Science.gov (United States)

    Owen, William J.; Siakaluk, Paul D.

    2011-01-01

    Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…

  3. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

    Science.gov (United States)

    Schwartz, Todd A.

    2013-01-01

    I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

  4. Use of hypotheses for analysis of variance models: challenging the current practice

    NARCIS (Netherlands)

    van Wesel, F.; Boeije, H.R.; Hoijtink, H.

    2013-01-01

    In social science research, hypotheses about group means are commonly tested using analysis of variance. While deemed to be formulated as specifically as possible to test social science theory, they are often defined in general terms. In this article we use two studies to explore the current practice...

  5. Pairwise Comparison Procedures for One-Way Analysis of Variance Designs. Research Report.

    Science.gov (United States)

    Zwick, Rebecca

    Research in the behavioral and health sciences frequently involves the application of one-factor analysis of variance models. The goal may be to compare several independent groups of subjects on a quantitative dependent variable or to compare measurements made on a single group of subjects on different occasions or under different conditions. In…

  6. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    Science.gov (United States)

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.
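
    For readers who want to see the arithmetic behind the simplest design reviewed here, a one-way fixed-effect ANOVA can be computed directly from sums of squares and checked against a library routine. The data are a toy example, not from the article.

```python
# One-way fixed-effect ANOVA from first principles (sums of squares),
# verified against scipy.stats.f_oneway. Toy data for illustration.
import numpy as np
from scipy import stats

groups = [np.array([1.1, 0.9, 1.3, 1.0]),
          np.array([1.6, 1.4, 1.7, 1.5]),
          np.array([0.8, 1.0, 0.7, 0.9])]

grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)

f_manual = (ss_between / df_between) / (ss_within / df_within)
f_scipy, p_value = stats.f_oneway(*groups)
assert abs(f_manual - f_scipy) < 1e-9  # same F statistic either way
```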

  7. Publishing nutrition research: a review of multivariate techniques--part 2: analysis of variance.

    Science.gov (United States)

    Harris, Jeffrey E; Sheean, Patricia M; Gleason, Philip M; Bruemmer, Barbara; Boushey, Carol

    2012-01-01

    This article is the eighth in a series exploring the importance of research design, statistical analysis, and epidemiology in nutrition and dietetics research, and the second in a series focused on multivariate statistical analytical techniques. The purpose of this review is to examine the statistical technique, analysis of variance (ANOVA), from its simplest to multivariate applications. Many dietetics practitioners are familiar with basic ANOVA, but less informed of the multivariate applications such as multiway ANOVA, repeated-measures ANOVA, analysis of covariance, multiple ANOVA, and multiple analysis of covariance. The article addresses all these applications and includes hypothetical and real examples from the field of dietetics.

  8. Study on Analysis of Variance on the indigenous wild and cultivated rice species of Manipur Valley

    Science.gov (United States)

    Medhabati, K.; Rohinikumar, M.; Rajiv Das, K.; Henary, Ch.; Dikash, Th.

    2012-10-01

    The analysis of variance revealed considerable variation among the cultivars and the wild species for yield and other quantitative characters in both years of investigation. The highly significant differences among the cultivars in the year-wise and pooled analyses of variance for all 12 characters reveal that there is sufficient genetic variability for all the characters studied. The existence of genetic variability is of paramount importance for starting a judicious plant breeding programme. Since introduced high-yielding rice cultivars usually do not perform well, improvement of indigenous cultivars is a clear choice for increasing rice production. The genetic variability of the 37 rice germplasms in 12 agronomic characters estimated in the present study can be used in breeding programmes.

  9. Analysis of variance of quantitative parameters bidders offers for public procurement in the chosen sector

    OpenAIRE

    Gavorníková, Katarína

    2012-01-01

    The goal of this work was to find out which determinants influence the variance of price bids offered by bidders for public procurement, and in what direction, as well as the bidders' behavior during the selection process. The work focused on public procurement for construction works announced by a municipal procurement authority. Regression analysis confirmed the variables estimated price and the ratio of final to estimated price of the public procurement as the strongest influences. Increasing estimated price raises ...

  10. Structure analysis of interstellar clouds: I. Improving the Delta-variance method

    CERN Document Server

    Ossenkopf, V; Stutzki, J

    2008-01-01

    The Delta-variance analysis has proven to be an efficient and accurate method of characterising the power spectrum of interstellar turbulence. The implementation presently in use, however, has several shortcomings. We propose and test an improved Delta-variance algorithm for two-dimensional data sets, which is applicable to maps with variable error bars and which can be quickly computed in Fourier space. We calibrate the spatial resolution of the Delta-variance spectra. The new Delta-variance algorithm is based on an appropriate filtering of the data in Fourier space. It allows us to distinguish the influence of variable noise from the actual small-scale structure in the maps and it helps in dealing with the boundary problem in non-periodic and/or irregularly bounded maps. We try several wavelets and test their spatial sensitivity using artificial maps with well known structure sizes. It turns out that different wavelets show different strengths with respect to detecting characteristic structures and spectr...

  11. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    Science.gov (United States)

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, determining the optimal proportion of cross-section series in the overall sample so as to minimize the variances of best linear unbiased estimators of linear combinations of the parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design, given a budget, and transform the problem into a constrained nonlinear integer programming. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  12. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    Science.gov (United States)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

    Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
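
    The per-slice variance measure described in this record is straightforward to reproduce on a synthetic volume; the Hounsfield-unit values below are invented for illustration.

```python
# Variance of each horizontal slice in a synthetic XCT volume. A reworked
# layer mixing matrix, burrow fill and a dense mineral shows a larger
# Hounsfield-unit (HU) spread than undisturbed sediment. Invented values.
import numpy as np

rng = np.random.default_rng(2)
volume = rng.normal(1450.0, 25.0, size=(50, 64, 64))  # undisturbed matrix
# Slice 30: add sparse high-density inclusions (e.g. a pyrite-like phase).
volume[30] += rng.choice([0.0, 400.0], size=(64, 64), p=[0.9, 0.1])

slice_var = volume.reshape(50, -1).var(axis=1)
print(slice_var[10], slice_var[30])  # the disturbed slice varies more
```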

  13. Applying the Generalized Waring model for investigating sources of variance in motor vehicle crash analysis.

    Science.gov (United States)

    Peng, Yichuan; Lord, Dominique; Zou, Yajie

    2014-12-01

    As one of the major analysis methods, statistical models play an important role in traffic safety analysis. They can be used for a wide variety of purposes, including establishing relationships between variables and understanding the characteristics of a system. The purpose of this paper is to document a new type of model that can help with the latter. This model is based on the Generalized Waring (GW) distribution. The GW model yields more information about the sources of the variance observed in datasets than other traditional models, such as the negative binomial (NB) model. In this regard, the GW model can separate the observed variability into three parts: (1) the randomness, which explains the model's uncertainty; (2) the proneness, which refers to the internal differences between entities or observations; and (3) the liability, which is defined as the variance caused by other external factors that are difficult to identify and have not been included as explanatory variables in the model. The study analyses were accomplished using two observed datasets to explore potential sources of variation. The results show that the GW model can provide meaningful information about sources of variance in crash data and also performs better than the NB model.

  14. The analysis of variance in anaesthetic research: statistics, biography and history.

    Science.gov (United States)

    Pandit, J J

    2010-12-01

    Multiple t-tests (or their non-parametric equivalents) are often used erroneously to compare the means of three or more groups in anaesthetic research. Methods for correcting the p value regarded as significant can be applied to take account of multiple testing, but these are somewhat arbitrary and do not avoid several unwieldy calculations. The appropriate method for most such comparisons is the 'analysis of variance' that not only economises on the number of statistical procedures, but also indicates if underlying factors or sub-groups have contributed to any significant results. This article outlines the history, rationale and method of this analysis.

  15. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis.

    Science.gov (United States)

    Luthria, Devanand L; Lin, Long-Ze; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-11-12

    Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of two cultivars of broccoli, Majestic and Legacy, the first grown with four different levels of Se and the second grown organically and conventionally with two rates of irrigation. Chemical composition differences in the two cultivars and seven treatments produced patterns that were visually and statistically distinguishable using ANOVA-PCA. PCA loadings allowed identification of the molecular and fragment ions that provided the most significant chemical differences. A standardized profiling method for phenolic compounds showed that important discriminating ions were not phenolic compounds. The elution times of the discriminating ions and previous results suggest that they were common sugars and organic acids. ANOVA calculations of the positive and negative ionization MS fingerprints showed that 33% of the variance came from the cultivar, 59% from the growing treatment, and 8% from analytical uncertainty. Although the positive and negative ionization fingerprints differed significantly, there was no difference in the distribution of variance. High variance of individual masses with cultivars or growing treatment was correlated with high PCA loadings. The ANOVA data suggest that only variables with high variance for analytical uncertainty should be deleted. All other variables represent discriminating masses that allow separation of the samples with respect to cultivar and treatment.

  16. Methods and applications of linear models regression and the analysis of variance

    CERN Document Server

    Hocking, Ronald R

    2013-01-01

    Praise for the Second Edition"An essential desktop reference book . . . it should definitely be on your bookshelf." -Technometrics A thoroughly updated book, Methods and Applications of Linear Models: Regression and the Analysis of Variance, Third Edition features innovative approaches to understanding and working with models and theory of linear regression. The Third Edition provides readers with the necessary theoretical concepts, which are presented using intuitive ideas rather than complicated proofs, to describe the inference that is appropriate for the methods being discussed. The book

  17. Structure analysis of simulated molecular clouds with the Delta-variance

    CERN Document Server

    Bertram, Erik; Glover, Simon C O

    2015-01-01

    We employ the Delta-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n_0 = 30, 100 and 300 cm^{-3} that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Delta-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 -> 0) lines. The spectral slopes of the Delta-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth-size relation ranging from 0.4 to 0....

  18. Analysis of variance: is there a difference in means and what does it mean?

    Science.gov (United States)

    Kao, Lillian S; Green, Charles E

    2008-01-01

    To critically evaluate the literature and to design valid studies, surgeons require an understanding of basic statistics. Despite the increasing complexity of reported statistical analyses in surgical journals and the decreasing use of inappropriate statistical methods, errors such as in the comparison of multiple groups still persist. This review introduces the statistical issues relating to multiple comparisons, describes the theoretical basis behind analysis of variance (ANOVA), discusses the essential differences between ANOVA and multiple t-tests, and provides an example of the computations and computer programming used in performing ANOVA.

  19. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model.

    Science.gov (United States)

    Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M

    2015-11-01

    This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46) which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL which can be downloaded from www.epigear.com.
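
    A minimal sketch of the IVhet idea as summarized in this abstract: keep the fixed-effect point estimate, but inflate each study's variance by a DerSimonian-Laird tau-squared inside the quasi-likelihood variance. The effect sizes and variances below are hypothetical, and details may differ from the authors' MetaXL implementation.

```python
# IVhet-style pooling: fixed-effect weights for the point estimate, but a
# variance that adds a DerSimonian-Laird tau^2 per study. Hypothetical
# effect sizes; a sketch of the idea, not the authors' MetaXL code.
import numpy as np

y = np.array([0.8, -0.3, 0.6, -0.2])    # hypothetical log odds ratios
v = np.array([0.04, 0.02, 0.09, 0.05])  # their within-study variances

w = 1.0 / v
theta_fe = np.sum(w * y) / np.sum(w)    # fixed-effect point estimate
q = np.sum(w * (y - theta_fe) ** 2)     # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) /
           (w.sum() - (w ** 2).sum() / w.sum()))  # DL heterogeneity

w_norm = w / w.sum()
var_ivhet = np.sum(w_norm ** 2 * (v + tau2))  # inflated vs. 1/sum(w)
print(theta_fe, tau2, var_ivhet)
```

The point estimate is the same as the fixed-effect one, while the variance is never smaller, which is how the method avoids the overconfident intervals attributed to the RE model.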

  20. ANALYSIS OF VARIANCE ECONOMICALLY VALUABLE TRAITS AND INTERIOR INDICATORS PIGS AT USE OF DIETARY SUPPLEMENTS "SELENIUM VITA" AND "TOPINAMBUR"

    OpenAIRE

    Fedorov V. H.; Gribtsova T. V.

    2015-01-01

    Studies were conducted on pure-bred pigs of the CT and DM-1 types. An analysis of variance was performed on economically useful traits and interior indicators of pigs given the dietary supplements "Vita selenium" and "Jerusalem artichoke".

  1. Identification of mitochondrial proteins of malaria parasite using analysis of variance.

    Science.gov (United States)

    Ding, Hui; Li, Dongmei

    2015-02-01

    As a parasitic protozoan, Plasmodium falciparum (P. falciparum) can cause malaria. The mitochondrial proteins of malaria parasite play important roles in the discovery of anti-malarial drug targets. Thus, accurate identification of mitochondrial proteins of malaria parasite is a key step for understanding their functions and finding potential drug targets. In this work, we developed a sequence-based method to identify the mitochondrial proteins of malaria parasite. At first, we extended adjoining dipeptide composition to g-gap dipeptide composition for discretely formulating the protein sequences. Subsequently, the analysis of variance (ANOVA) combined with incremental feature selection (IFS) was used to pick out the optimal features. Finally, the jackknife cross-validation was used to evaluate the performance of the proposed model. Evaluation results showed that the maximum accuracy of 97.1% could be achieved by using 101 optimal 5-gap dipeptides. The comparison with previous methods demonstrated that our method was accurate and efficient.
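
    The ANOVA-based ranking step described here (preceding incremental feature selection) amounts to scoring each feature with a one-way F-statistic; the sketch below uses random toy features in place of g-gap dipeptide compositions.

```python
# Ranking features by one-way ANOVA F-score, as a stand-in for the ANOVA
# step that precedes incremental feature selection. The toy features
# below are random, not real g-gap dipeptide compositions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_class, n_feat = 30, 5
mito = rng.normal(0.0, 1.0, (n_per_class, n_feat))      # class 1
non_mito = rng.normal(0.0, 1.0, (n_per_class, n_feat))  # class 2
mito[:, 0] += 2.0   # make feature 0 genuinely discriminative

f_scores = np.array([stats.f_oneway(mito[:, j], non_mito[:, j])[0]
                     for j in range(n_feat)])
ranking = np.argsort(f_scores)[::-1]
print(ranking)  # feature 0 should come out on top
```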

  2. Analysis of variance on thickness and electrical conductivity measurements of carbon nanotube thin films

    Science.gov (United States)

    Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard

    2016-09-01

    One of the major challenges towards controlling the transfer of electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify the variations in bulk properties when nanotubes are used as the reinforcement material. In this study, we conducted one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing the data collected from both experienced and inexperienced operators, we found operation details that users might overlook and that resulted in variations, since conductivity measurements of CNT thin films are very sensitive to thickness measurements. In addition, we demonstrated how issues in measurements damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity measurement results. Based on this study, we proposed a faster, more reliable approach to measure the thickness of CNT thin films that operators can follow to make these measurement processes less dependent on operator skills.

  3. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance

  4. Meta-analysis of binary data: which within study variance estimate to use?

    Science.gov (United States)

    Chang, B H; Waternaux, C; Lipsitz, S

    2001-07-15

    We applied a mixed effects model to investigate between- and within-study variation in improvement rates of 180 schizophrenia outcome studies. The between-study variation was explained by the fixed study characteristics and an additional random study effect. Both rate difference and logit models were used. For a binary proportion outcome p(i) with sample size n(i) in the ith study, [p̂(i)(1 − p̂(i))n(i)]^(-1) is the usual estimate of the within-study variance σ²(i) in the logit model, where p̂(i) is the sample mean of the binary outcome for subjects in study i. This estimate can be highly correlated with logit(p̂(i)). We used [p̄(1 − p̄)n(i)]^(-1) as an alternative estimate of σ²(i), where p̄ is the weighted mean of the p̂(i)'s. We estimated regression coefficients (β) of the fixed effects and the variance (τ²) of the random study effect using a quasi-likelihood estimating equations approach. Using the schizophrenia meta-analysis data, we demonstrated how the choice of the estimate of σ²(i) affects the resulting estimates of β and τ². We also conducted a simulation study to evaluate the performance of the two estimates of σ²(i) in different conditions, where the conditions vary by number of studies and study size. Using the schizophrenia meta-analysis data, the estimates of β and τ² were quite different when different estimates of σ²(i) were used in the logit model. The simulation study showed that the estimates of β and τ² were less biased, and the 95 per cent CI coverage was closer to 95 per cent, when the estimate of σ²(i) was [p̄(1 − p̄)n(i)]^(-1) rather than [p̂(i)(1 − p̂(i))n(i)]^(-1). Finally, we showed that a simple regression analysis is not appropriate unless τ² is much larger than σ²(i), or a robust variance is used.
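
    As a small plain-Python illustration of the two within-study variance estimates compared above (the event counts below are invented, not the schizophrenia data), the formulas differ only in whether the per-study proportion p̂(i) or the pooled mean p̄ enters:

```python
import numpy as np

# Hypothetical event counts and sample sizes for a handful of studies
# (illustrative numbers only, not the data from the paper).
events = np.array([12, 30, 45, 8])
n = np.array([40, 100, 90, 25])

p_hat = events / n                  # per-study proportions p-hat(i)
p_bar = events.sum() / n.sum()      # weighted mean p-bar across studies

# Usual estimate of the within-study variance on the logit scale:
# [p-hat(i) (1 - p-hat(i)) n(i)]^(-1)
var_usual = 1.0 / (p_hat * (1.0 - p_hat) * n)

# Alternative estimate using the pooled mean:
# [p-bar (1 - p-bar) n(i)]^(-1)
var_pooled = 1.0 / (p_bar * (1.0 - p_bar) * n)

print(np.round(var_usual, 4))
print(np.round(var_pooled, 4))
```

    The pooled version varies across studies only through n(i), which is what breaks the unwanted correlation with logit(p̂(i)).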

  5. Multivariate analysis of variance of designed chromatographic data. A case study involving fermentation of rooibos tea.

    Science.gov (United States)

    Marini, Federico; de Beer, Dalene; Walters, Nico A; de Villiers, André; Joubert, Elizabeth; Walczak, Beata

    2017-03-17

    An ultimate goal of investigations of rooibos plant material subjected to different stages of fermentation is to identify the chemical changes taking place in the phenolic composition, using an untargeted approach and chromatographic fingerprints. Realization of this goal requires, among others, identification of the main components of the plant material involved in chemical reactions during the fermentation process. Quantitative chromatographic data for the compounds in extracts of green, semi-fermented and fermented rooibos form the basis of a preliminary study following a targeted approach. The aim is to estimate whether the treatment has a significant effect based on all quantified compounds, and to identify the compounds that contribute significantly to it. Analysis of variance is performed using modern multivariate methods such as ANOVA-Simultaneous Component Analysis, ANOVA-Target Projection and regularized MANOVA. This study is the first in which all three approaches are compared and evaluated. For the data studied, all three methods reveal the same significance of the fermentation effect on the extract compositions, but they lead to different interpretations of it.

  6. Contrasting regional architectures of schizophrenia and other complex diseases using fast variance components analysis

    DEFF Research Database (Denmark)

    Loh, Po-Ru; Bhatia, Gaurav; Gusev, Alexander;

    2015-01-01

    Heritability analyses of genome-wide association study (GWAS) cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here we analyze the genetic architectures of schizophrenia in 49,806 samples from the PGC and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1-Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions. To accomplish these analyses, we developed a fast algorithm for multicomponent, multi-trait variance-components analysis that overcomes prior computational barriers that made such analyses intractable at this scale.

  7. Structural damage detection in an aeronautical panel using analysis of variance

    Science.gov (United States)

    Gonsalez, Camila Gianini; da Silva, Samuel; Brennan, Michael J.; Lopes Junior, Vicente

    2015-02-01

    This paper describes a procedure for structural health assessment based on one-way analysis of variance (ANOVA) together with Tukey's multiple comparison test, to determine whether the results are statistically significant. The feature indices are obtained from electromechanical impedance measurements using piezoceramic sensor/actuator patches bonded to the structure. Compared to the classical approach based on a simple change of the observed signals, using for example root mean square responses, the decision procedure in this paper involves a rigorous statistical test. Experimental tests were carried out on an aeronautical panel in the laboratory to validate the approach. In order to include uncontrolled variability in the dynamic responses, the measurements were taken over several days in different environmental conditions using all eight sensor/actuator patches. The damage was simulated by controlling the tightness and looseness of the bolts and was correctly diagnosed. The paper discusses the strengths and weaknesses of the approach in light of the experimental results.
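
    The decision procedure described above can be sketched with standard tools. The following minimal illustration uses synthetic feature indices (not the panel measurements) for a baseline group and two hypothetical damage conditions, assuming SciPy is available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical feature indices (e.g. deviations of impedance signatures)
# for a baseline group and two simulated loosened-bolt conditions.
baseline = rng.normal(1.00, 0.05, size=12)
loose_1 = rng.normal(1.12, 0.05, size=12)
loose_2 = rng.normal(1.25, 0.05, size=12)

# One-way ANOVA: does at least one group mean differ?
f_stat, p_value = stats.f_oneway(baseline, loose_1, loose_2)
print(f"F = {f_stat:.2f}, p = {p_value:.2e}")

# A significant F only says the group means differ somewhere; a multiple
# comparison test (e.g. Tukey's HSD, as used in the paper) is then needed
# to locate which conditions differ from the baseline.
```
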

  8. Minimum variance imaging based on correlation analysis of Lamb wave signals.

    Science.gov (United States)

    Hua, Jiadong; Lin, Jing; Zeng, Liang; Luo, Zhi

    2016-08-01

    In Lamb wave imaging, MVDR (minimum variance distortionless response) is a promising approach for the detection and monitoring of large areas with a sparse transducer network. Previous studies of MVDR use signal amplitude as the input damage feature, and the imaging performance is closely related to the evaluation accuracy of the scattering characteristic. However, the scattering characteristic is highly dependent on damage parameters (e.g. type, orientation and size), which are unknown beforehand. The evaluation error can degrade imaging performance severely. In this study, a more reliable damage feature, the LSCC (local signal correlation coefficient), is established to replace signal amplitude. In comparison with signal amplitude, one attractive feature of the LSCC is its independence of damage parameters. Therefore, the LSCC model in the transducer network can be accurately evaluated, and the imaging performance is subsequently improved. Both theoretical analysis and experimental investigation are given to validate the effectiveness of the LSCC-based MVDR algorithm in improving imaging performance.

  9. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review

    CERN Document Server

    Malkin, Zinovy

    2016-01-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, appropriate weighting should be applied during data analysis. Besides, some physically connected scalar time series naturally form series of multi-dimensional vectors. For example, the three station coordinate time series $X$, $Y$, and $Z$ can be combined to analyze 3D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multi-dimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multi-dimensional AVAR (MAVAR), and weighted multi-dimensional AVAR (WMAVAR), were introduced to overcome these ...
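
    A minimal sketch of the classical (unweighted, scalar) AVAR described above, computed from non-overlapping block averages; the function name and the white-noise check are illustrative only:

```python
import numpy as np

def allan_variance(y, tau=1):
    """Classical Allan variance of a time series y for averaging factor tau:
    half the mean squared difference of successive non-overlapping block means."""
    n_blocks = len(y) // tau
    means = y[:n_blocks * tau].reshape(n_blocks, tau).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# Sanity check on synthetic white noise, for which AVAR falls off as 1/tau.
rng = np.random.default_rng(42)
white = rng.normal(0.0, 1.0, size=100_000)
for tau in (1, 10, 100):
    print(tau, allan_variance(white, tau))
```

    The weighted and multi-dimensional variants from the abstract (WAVAR, MAVAR, WMAVAR) would extend this with per-point weights and vector norms; they are not implemented here.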

  10. Analysis of variance of communication latencies in anesthesia: comparing means of multiple log-normal distributions.

    Science.gov (United States)

    Ledolter, Johannes; Dexter, Franklin; Epstein, Richard H

    2011-10-01

    Anesthesiologists rely on communication over periods of minutes. The analysis of latencies between when messages are sent and responses obtained is an essential component of practical and regulatory assessment of clinical and managerial decision-support systems. Latency data including times for anesthesia providers to respond to messages have moderate (n > 20) sample sizes, large coefficients of variation (e.g., 0.60 to 2.50), and heterogeneous coefficients of variation among groups. Highly inaccurate results are obtained either by performing analysis of variance (ANOVA) in the time scale or by performing it in the log scale and then taking the exponential of the result. To overcome these difficulties, one can perform calculation of P values and confidence intervals for mean latencies based on log-normal distributions using generalized pivotal methods. In addition, fixed-effects 2-way ANOVAs can be extended to the comparison of means of log-normal distributions. Pivotal inference does not assume that the coefficients of variation of the studied log-normal distributions are the same, and can be used to assess the proportional effects of 2 factors and their interaction. Latency data can also include a human behavioral component (e.g., complete other activity first), resulting in a bimodal distribution in the log-domain (i.e., a mixture of distributions). An ANOVA can be performed on a homogeneous segment of the data, followed by a single group analysis applied to all or portions of the data using a robust method, insensitive to the probability distribution.

  11. Analysis of variance with unbalanced data: an update for ecology & evolution.

    Science.gov (United States)

    Hector, Andy; von Felten, Stefanie; Schmid, Bernhard

    2010-03-01

    1. Factorial analysis of variance (anova) with unbalanced (non-orthogonal) data is a commonplace but controversial and poorly understood topic in applied statistics. 2. We explain that anova calculates the sum of squares for each term in the model formula sequentially (type I sums of squares) and show how anova tables of adjusted sums of squares are composite tables assembled from multiple sequential analyses. A different anova is performed for each explanatory variable or interaction so that each term is placed last in the model formula in turn and adjusted for the others. 3. The sum of squares for each term in the analysis can be calculated after adjusting only for the main effects of other explanatory variables (type II sums of squares) or, controversially, for both main effects and interactions (type III sums of squares). 4. We summarize the main recent developments and emphasize the shift away from the search for the 'right' anova table in favour of presenting one or more models that best suit the objectives of the analysis.
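
    The order dependence of sequential (type I) sums of squares in unbalanced designs can be seen directly by fitting nested regressions; the toy two-factor layout below is hypothetical:

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

# Unbalanced two-way layout (invented responses): factors A and B each with
# two levels, with unequal cell counts so that A and B are non-orthogonal.
A = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1])
B = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1])
y = np.array([4.1, 3.9, 4.3, 5.2, 5.0, 4.8, 6.1, 5.9, 6.3, 6.0, 6.2, 5.8])

ones = np.ones_like(y)
X0 = ones[:, None]                   # intercept only
XA = np.column_stack([ones, A])      # intercept + A
XB = np.column_stack([ones, B])      # intercept + B
XAB = np.column_stack([ones, A, B])  # intercept + A + B

ss_A_first = rss(X0, y) - rss(XA, y)  # type I SS for A, A entered first
ss_A_last = rss(XB, y) - rss(XAB, y)  # SS for A adjusted for B
print(ss_A_first, ss_A_last)          # differ because the design is unbalanced
```

    In a balanced design the two numbers would coincide; here they do not, which is exactly why adjusted (type II/III) tables are assembled from multiple sequential fits.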

  12. Longitudinal variance-components analysis of the Framingham Heart Study data.

    Science.gov (United States)

    Macgregor, Stuart; Knott, Sara A; White, Ian; Visscher, Peter M

    2003-12-31

    The Framingham Heart Study offspring cohort, a complex data set with irregularly spaced longitudinal phenotype data, was made available as part of Genetic Analysis Workshop 13. To allow an analysis of all of the data simultaneously, a mixed-model-based random-regression (RR) approach was used. The RR accounted for the variation in genetic effects (including marker-specific quantitative trait locus (QTL) effects) across time by fitting polynomials of age. The use of a mixed model allowed both fixed (such as sex) and random (such as familial environment) effects to be accounted for appropriately. Using this method we performed a QTL analysis of all of the available adult phenotype data (26,106 phenotypic records). In addition to RR, conventional univariate variance component techniques were applied. The traits of interest were BMI, HDLC, total cholesterol, and height. The longitudinal method allowed the characterization of the change in QTL effects with aging. A QTL affecting BMI was shown to act mainly at early ages.

  13. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

    Energy Technology Data Exchange (ETDEWEB)

    Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

    2012-01-01

    Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.

  14. CAIXA: a catalogue of AGN in the XMM-Newton archive III. Excess Variance Analysis

    CERN Document Server

    Ponti, Gabriele; Bianchi, Stefano; Guainazzi, Matteo; Matt, Giorgio; Uttley, Phil; Bonilla, Fonseca; Nuria,

    2011-01-01

    We report on the results of the first XMM systematic "excess variance" study of all the radio quiet, X-ray un-obscured AGN. The entire sample consist of 161 sources observed by XMM for more than 10 ks in pointed observations which is the largest sample used so far to study AGN X-ray variability on time scales less than a day. We compute the excess variance for all AGN, on different time-scales (10, 20, 40 and 80 ks) and in different energy bands (0.3-0.7, 0.7-2 and 2-10 keV). We observe a highly significant and tight (~0.7 dex) correlation between excess variance and MBH. The subsample of reverberation mapped AGN shows an even smaller scatter (~0.45 dex) comparable to the one induced by the MBH uncertainties. This implies that X-ray variability can be used as an accurate tool to measure MBH and this method is more accurate than the ones based on single epoch optical spectra. The excess variance vs. accretion rate dependence is weaker than expected based on the PSD break frequency scaling, suggesting that both...

  15. FORTRAN IV Program for One-Way Analysis of Variance with A Priori or A Posteriori Mean Comparisons

    Science.gov (United States)

    Fordyce, Michael W.

    1977-01-01

    A flexible Fortran program for computing one way analysis of variance is described. Requiring minimal core space, the program provides a variety of useful group statistics, all summary statistics for the analysis, and all mean comparisons for a priori or a posteriori testing. (Author/JKS)
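
    The core computation of such a one-way ANOVA program is compact; the sketch below (in Python rather than Fortran, with invented scores for three independent groups) mirrors the between/within sums-of-squares bookkeeping:

```python
import numpy as np

# Hypothetical scores for three independent groups, the kind of input a
# one-way ANOVA program would read.
groups = [
    np.array([23.0, 25.0, 21.0, 24.0]),
    np.array([27.0, 29.0, 26.0, 30.0]),
    np.array([22.0, 20.0, 23.0, 21.0]),
]

all_y = np.concatenate(groups)
grand_mean = all_y.mean()

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1
df_within = len(all_y) - len(groups)
f_ratio = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {f_ratio:.2f}")
```
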

  16. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

    Directory of Open Access Journals (Sweden)

    Renard Jean-Paul

    2005-06-01

    Full Text Available Abstract Background Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility to evaluate multiple sources of variation in an experiment. Results AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, (3) to propose two models for a gene's variance estimation, and (4) to perform a correction for multiple comparisons. Conclusion AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.

  17. Variational bayesian method of estimating variance components.

    Science.gov (United States)

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

  18. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

    Energy Technology Data Exchange (ETDEWEB)

    Westner, Guenther; Madlener, Reinhard [E.ON Energy Projects GmbH, Arnulfstrasse 56, 80335 Munich (Germany)

    2010-12-15

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)
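
    The MVP calculation underlying such return-risk profiles is the standard two-asset portfolio variance σp² = w1²σ1² + w2²σ2² + 2·w1·w2·ρ·σ1·σ2; a sketch with invented numbers (not the paper's CHP estimates):

```python
import numpy as np

# Hypothetical annual return/risk figures for the same CHP technology in two
# countries (illustrative numbers only, not the paper's estimates).
r = np.array([0.08, 0.06])       # expected returns
sigma = np.array([0.12, 0.10])   # standard deviations of returns
rho = 0.3                        # correlation between the two markets

weights = np.array([0.5, 0.5])
cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

port_return = weights @ r        # portfolio expected return
port_var = weights @ cov @ weights
print(port_return, np.sqrt(port_var))
```

    Because ρ < 1, the portfolio standard deviation comes out below the weighted average of the individual risks, which is the diversification effect the paper examines for regionally spread CHP investments.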

  19. The benefit of regional diversification of cogeneration investments in Europe: A mean-variance portfolio analysis

    Energy Technology Data Exchange (ETDEWEB)

    Westner, Guenther, E-mail: guenther.westner@eon-energie.co [E.ON Energy Projects GmbH, Arnulfstrasse 56, 80335 Munich (Germany); Madlener, Reinhard, E-mail: rmadlener@eonerc.rwth-aachen.d [Institute for Future Energy Consumer Needs and Behavior (FCN), Faculty of Business and Economics/E.ON Energy Research Center, RWTH Aachen University, Mathieustrasse 6, 52074 Aachen (Germany)

    2010-12-15

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply, by way of example, four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. - Research highlights: Preconditions for CHP investments differ significantly between the EU member states. Regional diversification of CHP investments can reduce the total portfolio risk. Risk reduction depends on the chosen CHP technology.

  20. The Variance of Language in Different Contexts

    Institute of Scientific and Technical Information of China (English)

    申一宁

    2012-01-01

    Language can be quite different in meaning in different contexts. There are three categories of context: the culture, the situation and the co-text. In this article, we analyse the variance of language in each of these three aspects. The article is written to help people better understand the meaning of language in a specific context.

  1. MEASURING DRIVERS’ EFFECT IN A COST MODEL BY MEANS OF ANALYSIS OF VARIANCE

    Directory of Open Access Journals (Sweden)

    Maria Elena Nenni

    2013-01-01

    Full Text Available In this study the author carries out the analysis of a cost model developed for Integrated Logistic Support (ILS) activities. By means of ANOVA, the impact of and interactions among cost drivers are evaluated. The predominant importance of organizational factors compared to technical ones is clearly demonstrated. Moreover, the paper provides researchers and practitioners with useful information to improve the cost model, as well as for budgeting and financial planning of ILS activities.

  2. An introduction to analysis of variance (ANOVA) with special reference to data from clinical experiments in optometry.

    Science.gov (United States)

    Armstrong, R A; Slade, S V; Eperjesi, F

    2000-05-01

    This article is aimed primarily at eye care practitioners who are undertaking advanced clinical research, and who wish to apply analysis of variance (ANOVA) to their data. ANOVA is a data analysis method of great utility and flexibility. This article describes why and how ANOVA was developed, the basic logic which underlies the method and the assumptions that the method makes for it to be validly applied to data from clinical experiments in optometry. The application of the method to the analysis of a simple data set is then described. In addition, the methods available for making planned comparisons between treatment means and for making post hoc tests are evaluated. The problem of determining the number of replicates or patients required in a given experimental situation is also discussed.

  3. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.

  4. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    Science.gov (United States)

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
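
    A minimal MRPP in plain Python/NumPy, following the usual definition (the test statistic delta is a weighted mean of the average within-group pairwise distances, and the p-value comes from random relabelings); details such as the group weights vary between MRPP variants, so treat this as a sketch rather than the SPSS macros' exact implementation:

```python
import numpy as np
from itertools import combinations

def mrpp(groups, n_perm=999, seed=0):
    """Basic multi-response permutation procedure. Each group is an (n_i, p)
    array; univariate data may be passed as 1-D arrays."""
    data = np.vstack([np.asarray(g, float).reshape(len(g), -1) for g in groups])
    sizes = [len(g) for g in groups]
    labels = np.repeat(np.arange(len(groups)), sizes)
    n = len(labels)
    rng = np.random.default_rng(seed)

    def delta(lbl):
        # Weighted mean of average within-group pairwise Euclidean distances.
        total = 0.0
        for k, size in enumerate(sizes):
            pts = data[lbl == k]
            dists = [np.linalg.norm(a - b) for a, b in combinations(pts, 2)]
            total += (size / n) * np.mean(dists)
        return total

    observed = delta(labels)
    # Small delta means tight groups; the p-value is the share of random
    # relabelings that do at least as well as the observed grouping.
    count = sum(delta(rng.permutation(labels)) <= observed for _ in range(n_perm))
    return observed, (count + 1) / (n_perm + 1)

# Hypothetical scores for two well-separated groups.
g1 = np.array([1.0, 1.2, 0.9, 1.1, 1.3, 0.8])
g2 = np.array([2.4, 2.6, 2.5, 2.3, 2.7, 2.2])
obs, p = mrpp([g1, g2])
print(obs, p)
```
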

  5. 使用SPSS软件进行多因素方差分析%Application of SPSS Software in Multivariate Analysis of Variance

    Institute of Scientific and Technical Information of China (English)

    龚江; 石培春; 李春燕

    2012-01-01

    以两因素完全随机有重复的试验为例,阐述用SPSS软件进行方差分析的详细过程,包括数据的输入、变异来源的分析,方差分析结果,以及显著性检验,最后还对方差分析注意事项进行分析,为科技工作者使用SPSS软件进行方差分析提供参考。%Taking a two-factor completely randomized experiment with replication as an example, the detailed process of analysis of variance in SPSS software is elaborated, including data input, analysis of the sources of variance, the results of the analysis of variance, and tests of significance. Finally, precautions for analysis of variance with SPSS are discussed, providing a reference for scientific research workers performing analysis of variance with SPSS software.

  6. Biomarker profiling and reproducibility study of MALDI-MS measurements of Escherichia coli by analysis of variance-principal component analysis.

    Science.gov (United States)

    Chen, Ping; Lu, Yao; Harrington, Peter B

    2008-03-01

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) has proved useful for the characterization of bacteria and the detection of biomarkers. Key challenges for MALDI-MS measurements of bacteria are overcoming the relatively large variability in peak intensities. A soft tool, combining analysis of variance and principal component analysis (ANOVA-PCA) (Harrington, P. D.; Vieira, N. E.; Chen, P.; Espinoza, J.; Nien, J. K.; Romero, R.; Yergey, A. L. Chemom. Intell. Lab. Syst. 2006, 82, 283-293. Harrington, P. D.; Vieira, N. E.; Espinoza, J.; Nien, J. K.; Romero, R.; Yergey, A. L. Anal. Chim. Acta. 2005, 544, 118-127) was applied to investigate the effects of the experimental factors associated with MALDI-MS studies of microorganisms. The variance of the measurements was partitioned with ANOVA and the variance of target factors combined with the residual error was subjected to PCA to provide an easy to understand statistical test. The statistical significance of these factors can be visualized with 95% Hotelling T2 confidence intervals. ANOVA-PCA is useful to facilitate the detection of biomarkers in that it can remove the variance corresponding to other experimental factors from the measurements that might be mistaken for a biomarker. Four strains of Escherichia coli at four different growth ages were used for the study of reproducibility of MALDI-MS measurements. ANOVA-PCA was used to disclose potential biomarker proteins associated with different growth stages.

  7. Statistical Approaches in Analysis of Variance: from Random Arrangements to Latin Square Experimental Design

    OpenAIRE

    2009-01-01

    Background: The choice of experimental design as well as of statistical analysis is of huge importance in field experiments. These need to be made correctly in order to obtain the best possible precision of the results. The random arrangements, randomized blocks and Latin square designs were reviewed and analyzed from the statistical perspective of error analysis. Material and Method: Random arrangements, randomized block and Latin square experimental designs were used as field experiments. ...

  8. Analysis of variance in acute mountain sickness among young men from different regions of China

    Directory of Open Access Journals (Sweden)

    Yu WU

    2014-10-01

    Full Text Available Objective To investigate the incidence of acute mountain sickness (AMS) among young men from different regions on arriving in Tibet, and to explore the medical geographic differences in high altitude adaptability of people from different regions. Methods A cluster sampling survey of AMS incidence was performed among young men from different regions on their arrival in a high altitude area, using the AMS symptom scoring method, with the military standards employed as reference for classifying and scoring. To distinguish the differences in geographic environment, a systematic cluster analysis of the natural geographical factors of their native places was performed and verified by nonparametric tests. One-way ANOVA was used to analyze the differences in AMS symptom scores among young men from different regions. Results The native places of the studied subjects were divided into 5 regions by cluster analysis, and the geographic factors among the 5 regions were found to be significantly different (P<0.01). There were significant differences in AMS incidence among people from different regions (P<0.05). Specifically, AMS incidence was significantly higher (P<0.05) in people from region 2 than in people from regions 3, 4 and 5. In terms of the main symptoms of AMS, the incidence of headache in people from region 2 was 82.8%, significantly different (P<0.05) from that in people from regions 3, 4 and 5; the incidence of nausea and vomiting was 37.9%, significantly different (P<0.05) from that in people from region 3; the incidences of fatigue and drowsiness were 72.4% and 27.6%, significantly different (P<0.05) from those in people from region 5. The incidence of vertigo in people from regions 1 and 3 was significantly different (P<0.05) from that in people from region 5. Conclusions Significant geographic differences in AMS incidence are found to exist among

  9. An Empirical Study Based on the SPSS Variance Analysis of College Teachers' Sports Participation and Satisfaction

    OpenAIRE

    Yunqiu Liang

    2013-01-01

    This is an empirical study of the relationship between university teachers' sports participation and their job satisfaction, based mainly on a survey of groups' participation in sports activities and statistical analysis in SPSS. Results show that job satisfaction is higher in the sports-participation groups than in the non-participation groups, and that job satisfaction differs with different forms of sports participation. Recommendations for college teachers to address...

  10. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares

    Energy Technology Data Exchange (ETDEWEB)

    Boccard, Julien, E-mail: julien.boccard@unige.ch; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences remains often a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework, is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. - Highlights: • A new method is proposed for the analysis of Omics data generated using design of

  11. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    Science.gov (United States)

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences remains often a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework, is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models.

  12. An Empirical Study Based on the SPSS Variance Analysis of College Teachers' Sports Participation and Satisfaction

    Directory of Open Access Journals (Sweden)

    Yunqiu Liang

    2013-04-01

    This is an empirical study of the relationship between university teachers' sports participation and their job satisfaction, based mainly on a survey of groups' participation in sports activities and statistical analysis in SPSS. Results show that job satisfaction is higher in the sports-participation groups than in the non-participation groups, and that job satisfaction differs with different forms of sports participation. College teachers are recommended to choose forms of participation suited to their lives, characteristics and personal circumstances, to improve their psychological and physiological health, to adjust their mood state in good time, and to bring a positive psychological state and high satisfaction to their work. Organizations should, according to their occupational characteristics and available resources, actively guide their members to form more scientific and reasonable habits and ways of life, fully mobilize their enthusiasm for participating in fitness activities, and create a stronger fitness culture, in order to improve internal cohesion and job satisfaction.

  13. VARIANCE ANALYSIS OF WOOL WOVEN FABRICS TENSILE STRENGTH USING ANCOVA MODEL

    Directory of Open Access Journals (Sweden)

    VÎLCU Adrian

    2014-05-01

    The paper presents a study of the variation in tensile strength of four woven fabrics made from wool-type yarns, as a function of fiber composition, warp and weft yarn tensile strength, and technological density, using an ANCOVA regression model. In instances where surveyed groups may have a known history of responding differently, rather than using the traditional sharing method to address those differences, analysis of covariance (ANCOVA) can be employed. ANCOVA shows the correlation between a dependent variable and the covariate independent variables, and removes from the dependent variable the variability that can be accounted for by the covariates. The independent- and dependent-variable structures of multiple regression, factorial ANOVA and ANCOVA tests are similar. ANCOVA is differentiated from the other two in that it is used when the researcher wants to neutralize the effect of a continuous independent variable in the experiment. The researcher may simply not be interested in the effect of a given independent variable when performing a study. Another situation where ANCOVA should be applied is when an independent variable has a strong correlation with the dependent variable but does not interact with other independent variables in predicting the dependent variable's value. ANCOVA is used to neutralize the effect of the more powerful, non-interacting variable. Without this intervention measure, the effects of interacting independent variables can be clouded.

  14. An analysis of the influences of biological variance, measurement error, and uncertainty on retinal photothermal damage threshold studies

    Science.gov (United States)

    Wooddell, David A., Jr.; Schubert-Kabban, Christine M.; Hill, Raymond R.

    2012-03-01

    Safe exposure limits for directed energy sources are derived from a compilation of known injury thresholds taken primarily from animal models and simulation data. The summary statistics for these experiments are given as exposure levels representing a 50% probability of injury, or ED50, and associated variance. We examine biological variance in focal geometries and thermal properties and the influence each has in single-pulse ED50 threshold studies for 514-, 694-, and 1064-nanometer laser exposures in the thermal damage time domain. Damage threshold is defined to be the amount of energy required for a retinal burn on at least one retinal pigment epithelium (RPE) cell measuring approximately 10 microns in diameter. Better understanding of experimental variance will allow for more accurate safety buffers for exposure limits and improve directed energy research methodology.

  15. Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse

    Institute of Scientific and Technical Information of China (English)

    Hao Qin; Xin Sun; Jun Yan; Qi-Ming Hou; Zhong Ren; Kun Zhou

    2016-01-01

    In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.
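The covariance floor described above can be illustrated with a toy estimator in which n reconstruction samples reuse m shared indirect light samples; this is a stand-in Monte Carlo model, not the paper's renderer. The estimator's variance is roughly σ²_light/m + σ²_rec/n, so raising n cannot push it below the shared-sample (covariance) term σ²_light/m:

```python
import numpy as np

rng = np.random.default_rng(2)

def estimator_var(m, n, trials=20000, sig_light=1.0, sig_rec=1.0):
    """Variance of a toy estimator in which n reconstruction samples
    all reuse the same m indirect light samples."""
    light = rng.normal(0.0, sig_light, size=(trials, m)).mean(axis=1)
    rec = rng.normal(0.0, sig_rec, size=(trials, n)).mean(axis=1)
    return float(np.var(light + rec))

v_low = estimator_var(m=8, n=4)     # few reconstruction samples
v_high = estimator_var(m=8, n=256)  # many reconstruction samples

# Increasing n helps at first, but variance bottoms out near the
# covariance floor sig_light^2 / m = 1/8.
print(v_low, v_high)
```

In this toy model, adaptive sampling corresponds to spending effort on m (more light samples) once n is past the point where the covariance term dominates.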

  16. Multiplicative correction of subject effect as preprocessing for analysis of variance.

    Science.gov (United States)

    Nemoto, Iku; Abe, Masaya; Kotani, Makoto

    2008-03-01

    The procedure of repeated-measures ANOVA assumes the linear model in which effects of both subjects and experimental conditions are additive. However, in electroencephalography and magnetoencephalography, there may be situations where subject effects should be considered to be multiplicative in amplitude. We propose a simple method to normalize such data by multiplying each subject's response by a subject-specific constant. This paper derives ANOVA tables for such normalized data. Present simulations show that this method performs ANOVA effectively including multiple comparisons provided that the data follows the multiplicative model.
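A sketch of the proposed normalization, assuming each subject's responses are divided by a subject-specific constant (here the subject's own mean response, then rescaled to the grand mean); the amplitudes below are simulated with a multiplicative subject gain, not real EEG/MEG data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical amplitudes: 8 subjects x 4 conditions, where each
# subject scales a common condition profile by a multiplicative gain.
profile = np.array([10.0, 12.0, 15.0, 11.0])
subj_gain = rng.uniform(0.5, 2.0, size=8)
data = subj_gain[:, None] * profile + rng.normal(0.0, 0.3, size=(8, 4))

# Multiplicative correction: divide each subject's row by its own mean,
# then rescale so the grand mean is preserved for the subsequent ANOVA.
scale = data.mean(axis=1, keepdims=True)
normalized = data / scale * data.mean()

# Between-subject spread per condition shrinks after the correction.
print(data.std(axis=0).mean(), normalized.std(axis=0).mean())
```

After this step, the additive model assumed by repeated-measures ANOVA is a much better description of the data than it was for the raw amplitudes.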

  17. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight.

    Science.gov (United States)

    Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David

    2016-01-01

    Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss.
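One established effect size for this kind of meta-analysis of variability is the log variance ratio (lnVR) with a small-sample bias correction. The sketch below assumes that measure; the arm sizes and standard deviations are hypothetical, not the paper's data:

```python
import math

def ln_vr(sd_t, n_t, sd_c, n_c):
    """Log variance ratio (lnVR) comparing variability in a treatment
    arm vs. a control arm, with small-sample bias correction, and its
    sampling variance (used as the weight in a meta-analytic model)."""
    effect = (math.log(sd_t / sd_c)
              + 1.0 / (2 * (n_t - 1)) - 1.0 / (2 * (n_c - 1)))
    var = 1.0 / (2 * (n_t - 1)) + 1.0 / (2 * (n_c - 1))
    return effect, var

# Hypothetical weight-change SDs: LC ad libitum vs. calorie-restricted arm.
effect, var = ln_vr(sd_t=5.2, n_t=40, sd_c=3.8, n_c=45)
print(effect, var)
```

A positive lnVR indicates a more variable outcome in the first arm; pooling such effects across trials is what allows the authors' conclusion about inter-individual variability.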

  18. Combining analysis of variance and three‐way factor analysis methods for studying additive and multiplicative effects in sensory panel data

    DEFF Research Database (Denmark)

    Romano, Rosaria; Næs, Tormod; Brockhoff, Per Bruun

    2015-01-01

    Data from descriptive sensory analysis are essentially three‐way data with assessors, samples and attributes as the three ways in the data set. Because of this, there are several ways that the data can be analysed. The paper focuses on the analysis of sensory characteristics of products while...... in the use of the scale with reference to the existing structure of relationships between sensory descriptors. The multivariate assessor model will be tested on a data set from milk. Relations between the proposed model and other multiplicative models like parallel factor analysis and analysis of variance...

  19. Analysis of Variance in Vocabulary Learning Strategies Theory and Practice: A Case Study in Libya

    Science.gov (United States)

    Khalifa, Salma H. M.; Shabdin, Ahmad Affendi

    2016-01-01

    The present study is an outcome of a concern for the teaching of English as a foreign language (EFL) in Libyan schools. Learning of a foreign language is invariably linked to learners building a good repertoire of vocabulary of the target language, which takes us to the theory and practice of imparting training in vocabulary learning strategies…

  20. Analysis of Variance in Vocabulary Learning Strategies Theory and Practice: A Case Study in Libya

    OpenAIRE

    Salma H M Khalifa; Ahmad Affendi Shabdin

    2016-01-01

    The present study is an outcome of a concern for the teaching of English as a foreign language (EFL) in Libyan schools. Learning of a foreign language is invariably linked to learners building a good repertoire of vocabulary of the target language, which takes us to the theory and practice of imparting training in vocabulary learning strategies (VLSs) to learners. The researcher observed that there exists a divergence in theoretical knowledge of VLSs and practically training learners in using...

  1. Analysis of Variance in Vocabulary Learning Strategies Theory and Practice: A Case Study in Libya

    Directory of Open Access Journals (Sweden)

    Salma H M Khalifa

    2016-06-01

    The present study is an outcome of a concern for the teaching of English as a foreign language (EFL) in Libyan schools. Learning of a foreign language is invariably linked to learners building a good repertoire of vocabulary of the target language, which takes us to the theory and practice of imparting training in vocabulary learning strategies (VLSs) to learners. The researcher observed that there exists a divergence between theoretical knowledge of VLSs and practical training of learners in using the strategies in EFL classes in Libyan schools. To empirically examine the situation, a survey was conducted with secondary school English teachers. The study discusses the results of the survey. The results show that teachers of English in secondary schools in Libya are either not aware of various vocabulary learning strategies, or if they are, they do not impart training in all VLSs, as they do not realize that to achieve good results in language learning, a judicious use of all VLSs is required. Though the study was conducted on a small scale, the results are highly encouraging. Keywords: vocabulary learning strategies, vocabulary learning theory, teaching of vocabulary learning strategies

  2. A variance analysis of the capacity displaced by wind energy in Europe

    DEFF Research Database (Denmark)

    Giebel, Gregor

    2007-01-01

    Wind energy generation distributed all over Europe is less variable than generation from a single region. To analyse the benefits of distributed generation, the whole electrical generation system of Europe has been modelled including varying penetrations of wind power. The model is chronologically...... simulating the scheduling of the European power plants to cover the demand at every hour of the year. The wind power generation was modelled using wind speed measurements from 60 meteorological stations, for 1 year. The distributed wind power also displaces fossil-fuelled capacity. However, every assessment...

  3. Experimental design, analysis of variance and slide quality assessment in gene expression arrays.

    Science.gov (United States)

    Draghici, S; Kuklin, A; Hoff, B; Shams, S

    2001-05-01

    A microarray experiment is a sequence of complicated molecular biology procedures relying on various laboratory tools, instrumentation and experimenter's skills. This paper discusses statistical models for distinguishing small changes in gene expression from the noise in the system. It describes methods for assigning statistical confidence to gene expression values derived from a single array slide. Some of the theory is discussed in the context of practical applications via software usage.

  4. Two-dimensional finite-element temperature variance analysis

    Science.gov (United States)

    Heuser, J. S.

    1972-01-01

    The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.
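The sensitivity analysis described can be sketched as first-order variance propagation through a thermal model, Var(T) ≈ J Σ Jᵀ with J the Jacobian of inputs. The 1-D slab conduction model and the input uncertainties below are illustrative assumptions, not the paper's finite-element formulation:

```python
import numpy as np

def temp(params):
    """1-D steady conduction through a slab: T = T_b + q * L / k."""
    t_b, q, k = params       # boundary temperature (K), flux (W/m^2),
    length = 0.05            # conductivity (W/m/K); fixed thickness (m)
    return t_b + q * length / k

p0 = np.array([300.0, 2000.0, 15.0])   # nominal inputs
sig = np.array([5.0, 100.0, 0.5])      # input standard deviations

# Finite-difference Jacobian, then first-order variance propagation
# for independent inputs: Var(T) ~= J diag(sig^2) J^T.
eps = 1e-6 * p0
jac = np.array([(temp(p0 + e) - temp(p0 - e)) / (2 * e[i])
                for i, e in enumerate(np.diag(eps))])
var_t = float(jac @ np.diag(sig**2) @ jac)
print(var_t ** 0.5)  # predicted temperature standard deviation
```

Consistent with the abstract's observation, the boundary-temperature term dominates the propagated variance in this toy model, so tightening that input pays off most.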

  5. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...

  6. Application of multivariate analysis of variance (MANOVA) to distance refractive variability and mean distance refractive state

    Directory of Open Access Journals (Sweden)

    S Abelman

    2006-01-01

    Refractive state can be regarded as a dynamic quantity. Multiple measurements of refractive state can be determined easily and rapidly on a number of different occasions using an autorefractor. In an experimental trial undertaken by Gillan, a 30-year-old female was subjected to 30 autorefractor measurements each taken at various intervals before and after the instillation of Mydriacyl 1% (tropicamide) into her right eye. The purpose of this paper is to apply multivariate analysis of variance (MANOVA) to Gillan's sample data in order to assess whether instillation of Mydriacyl into the eye affects the variability of distance refractive state as well as the mean distance refractive state as measured by an autorefractor. In five of the seven cases where pairwise hypothesis tests were performed, it is concluded that at a 99% level of confidence there is no difference in variability of distance refractive state before and after cycloplegia. In two of the three cases where MANOVA was applied, there is a significant difference, at a 95% and at a 99% level of confidence, in both variability of distance refractive state and mean distance refractive state with and without cycloplegia.

  7. Analysis of variance in determinations of equivalence volume and of the ionic product of water in potentiometric titrations.

    Science.gov (United States)

    Braibanti, A; Bruschi, C; Fisicaro, E; Pasquali, M

    1986-06-01

    Homogeneous sets of data from strong acid-strong base potentiometric titrations in aqueous solution at various constant ionic strengths have been analysed by statistical criteria. The aim is to see whether the error distribution matches that for the equilibrium constants determined by competitive potentiometric methods using the glass electrode. The titration curve can be defined when the estimated equivalence volume VEM, with standard deviation (s.d.) sigma(VEM), the standard potential E(0), with s.d. sigma(E(0)), and the operational ionic product of water K(*)(w) (or E(*)(w) in mV), with s.d. sigma(K(*)(w)) [or sigma(E(*)(w))] are known. A special computer program, BEATRIX, has been written which optimizes the values of VEM, E(0) and K(*)(w) by linearization of the titration curve as a Gran plot. Analysis of variance applied to a set of 11 titrations in 1.0M sodium chloride medium at 298 K has demonstrated that the values of VEM belong to a normal population of points corresponding to individual potential/volume data-pairs (E(i); v(i)) of any titration, whereas the values of pK(*)(w) (or of E(*)(w)) belong to a normal population with members corresponding to individual titrations, which is also the case for the equilibrium constants. The intertitration variation is attributable to the electrochemical component of the system and appears as signal noise distributed over the titrations. The correction for junction-potentials, introduced in a further stage of the program by optimization in a Nernst equation, increases the noise, i.e., sigma(pK(*)(w)). This correction should therefore be avoided whenever it causes an increase of sigma(pK(*)(w)). The influence of the ionic medium has been examined by processing data from acid-base titrations in 0.1M potassium chloride and 0.5M potassium nitrate media. The titrations in potassium chloride medium showed the same behaviour as those in sodium chloride medium, but with an s.d. for pK(*)(w) that was smaller and close to the
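The Gran-plot linearization that the BEATRIX program optimizes can be sketched for a simulated strong acid-strong base titration. The volumes and concentrations below are hypothetical, and Kw is ignored, which is a fair approximation well before the equivalence point:

```python
import numpy as np

# Hypothetical titration: v0 mL of ca M strong acid titrated with cb M base.
v0, ca, cb = 50.0, 0.1, 0.1
v = np.linspace(5.0, 45.0, 9)          # titrant volumes before equivalence (mL)
h = (ca * v0 - cb * v) / (v0 + v)      # [H+], ignoring Kw far from equivalence
e = 59.16 * np.log10(h)                # electrode potential relative to E0 (mV)

# Gran function: G = (V0 + v) * 10^(E / 59.16) is linear in v and
# crosses zero at the equivalence volume VEM.
g = (v0 + v) * 10 ** (e / 59.16)
slope, intercept = np.polyfit(v, g, 1)
vem = -intercept / slope
print(vem)  # ≈ 50.0 mL, the true equivalence volume ca*v0/cb
```

In a real analysis, the scatter of the (v, G) points about this line is exactly the kind of signal noise whose variance the abstract's ANOVA decomposes.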

  8. Simultaneous optimal estimates of fixed effects and variance components in the mixed model

    Institute of Scientific and Technical Information of China (English)

    WU Mixia; WANG Songgui

    2004-01-01

    For a general linear mixed model with two variance components, a set of simple conditions is obtained, under which, (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking negative value is obtained.
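The point that ANOVA estimates of variance components can take negative values is easy to demonstrate for a balanced one-way random model (a much simpler setting than the paper's general mixed model, used here only for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

def anova_components(y):
    """ANOVA estimators for a balanced one-way random model
    y_ij = mu + a_i + e_ij, with y of shape (groups, n).
    Returns (sigma2_between, sigma2_within); the between-group
    estimate (MSB - MSW) / n can come out negative."""
    k, n = y.shape
    group_means = y.mean(axis=1)
    msb = n * ((group_means - y.mean()) ** 2).sum() / (k - 1)
    msw = ((y - group_means[:, None]) ** 2).sum() / (k * (n - 1))
    return (msb - msw) / n, msw

# With a true between-group variance of 0, negative estimates
# occur by chance; estimate that probability by simulation.
neg = sum(anova_components(rng.normal(size=(5, 4)))[0] < 0
          for _ in range(2000))
print(neg / 2000)
```

This simulated frequency is the quantity whose exact probability expression the abstract's result (iii) provides for the general model.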

  9. Discrimination of frequency variance for tonal sequences

    OpenAIRE

    Byrne, Andrew J.; Viemeister, Neal F.; Stellmack, Mark A.

    2014-01-01

    Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σSTA...

  10. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A

    2007-01-01

    OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent the twin studies have...... been used in bivariate or multivariate analysis to elucidate common genetic factors to two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism is decomposed in a relatively large Danish twin population, including bivariate analysis to detect...

  11. Discrimination of frequency variance for tonal sequences.

    Science.gov (United States)

    Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A

    2014-12-01

    Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG - σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data.
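The ideal observer that perfectly compares interval variances can be simulated directly. The sketch below assumes the IO computes the unbiased sample variance of the five log-frequency pulses in each interval and picks the larger; it reproduces the Weber-like invariance to the standard variance:

```python
import numpy as np

rng = np.random.default_rng(6)

def pct_correct(var_stan, var_sig, n_pulses=5, trials=20000):
    """Ideal observer: choose the interval whose sample variance
    over the five tone pulses is larger."""
    stan = rng.normal(0.0, var_stan ** 0.5, size=(trials, n_pulses))
    sig = rng.normal(0.0, var_sig ** 0.5, size=(trials, n_pulses))
    return float(np.mean(sig.var(axis=1, ddof=1) > stan.var(axis=1, ddof=1)))

# Equal ratios (var_sig - var_stan) / var_stan give similar performance
# regardless of the absolute standard variance.
a = pct_correct(0.01, 0.04)
b = pct_correct(0.10, 0.40)
print(a, b)
```

Because sample variances scale with the true variance, IO performance depends only on the variance ratio, which is exactly the Weber's Law behavior the abstract reports for human listeners.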

  12. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    Science.gov (United States)

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
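The heteroscedasticity implied by a multiplicative error model, and one reweighting step of a weighted LS fit that accounts for it, can be sketched with a toy straight-line model (this is an illustration of the error structure, not the paper's three adjustment methods):

```python
import numpy as np

rng = np.random.default_rng(7)

# Multiplicative error model: observed = true * (1 + eps), so the
# observation variance is proportional to the square of the true value.
x = np.linspace(1.0, 10.0, 50)
true = 3.0 * x
obs = true * (1.0 + rng.normal(0.0, 0.05, x.size))

# Ordinary LS ignores the heteroscedasticity; a weighted LS step with
# weights 1 / fitted^2 down-weights the large (noisier) observations.
b_ols = float((x @ obs) / (x @ x))
w = 1.0 / (b_ols * x) ** 2
b_wls = float((w * x) @ obs / ((w * x) @ x))
print(b_ols, b_wls)  # both near the true slope 3.0
```

In the paper's terms, the residuals from such a fit, scaled by the fitted values, are what feed the estimators of the variance of unit weight.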

  13. Evaluation of the oscillatory interference model of grid cell firing through analysis and measured period variance of some biological oscillators.

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2009-11-01

    Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators are expected to maintain a stable grid for approximately t = 5μ³/(4πσ²) seconds, where μ is the mean period of an oscillator in seconds and σ² its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex, and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit-level effects which might reduce oscillator variability. Further implications for grid cell models are discussed.

  14. Evaluation of the oscillatory interference model of grid cell firing through analysis and measured period variance of some biological oscillators.

    Science.gov (United States)

    Zilli, Eric A; Yoshida, Motoharu; Tahvildari, Babak; Giocomo, Lisa M; Hasselmo, Michael E

    2009-11-01

    Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators are expected to maintain a stable grid for approximately t = 5μ³/(4πσ²) seconds, where μ is the mean period of an oscillator in seconds and σ² its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit level effects which might reduce oscillator variability. Further implications for grid cell models are discussed.
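Taking the stability criterion at face value (as reconstructed here from the garbled abstract, t = 5μ³/(4πσ²)), a quick calculation shows how strongly the expected stability time depends on the period variance; the oscillator values below are hypothetical, not measured figures from the paper:

```python
import math

def stability_time(mu, sigma2):
    """Expected time (s) a pair of noisy oscillators maintains a stable
    grid, t = 5*mu^3 / (4*pi*sigma^2), as reconstructed from the abstract.
    mu: mean oscillator period (s); sigma2: period variance (s^2)."""
    return 5.0 * mu ** 3 / (4.0 * math.pi * sigma2)

# Illustrative: a theta-band oscillator of 8 Hz (mu = 0.125 s) with a
# hypothetical period variance of 1e-5 s^2.
t = stability_time(0.125, 1e-5)
print(t)
```

Because t scales as 1/σ², even modest increases in period variance collapse the stability time, which is why the measured biological oscillators fall so far short of observed grid stability.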

  15. Nuclear entropy, angular second moment, variance and texture correlation of thymus cortical and medullar lymphocytes: Grey level co-occurrence matrix analysis

    Directory of Open Access Journals (Sweden)

    IGOR PANTIC

    2013-09-01

    Full Text Available Grey level co-occurrence matrix analysis (GLCM) is a well-known mathematical method for quantification of cell and tissue textural properties, such as homogeneity, complexity and level of disorder. Recently, it was demonstrated that this method is capable of evaluating fine structural changes in nuclear structure that otherwise are undetectable during standard microscopy analysis. In this article, we present the results indicating that entropy, angular second moment, variance, and texture correlation of lymphocyte nuclear structure determined by GLCM method are different in thymus cortex when compared to medulla. A total of 300 thymus lymphocyte nuclei from 10 one-month-old mice were analyzed: 150 nuclei from cortex and 150 nuclei from medullar regions of thymus. Nuclear GLCM analysis was carried out using National Institutes of Health ImageJ software. For each nucleus, entropy, angular second moment, variance and texture correlation were determined. Cortical lymphocytes had significantly higher chromatin angular second moment (p < 0.001) and texture correlation (p < 0.05) compared to medullar lymphocytes. Nuclear GLCM entropy and variance of cortical lymphocytes were on the other hand significantly lower than in medullar lymphocytes (p < 0.001). These results suggest that GLCM as a method might have a certain potential in detecting discrete changes in nuclear structure associated with lymphocyte migration and maturation in thymus.
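    The four GLCM texture features named above can be sketched in plain NumPy for readers without ImageJ. This is a minimal illustration on a synthetic 8-level image, not the ImageJ implementation the study used:

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Build a grey level co-occurrence matrix (horizontal neighbour by
    default) and compute entropy, angular second moment (ASM), variance
    and correlation from its joint probabilities."""
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[img[r, c], img[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()                      # normalise to joint probabilities
    i, j = np.indices((levels, levels))
    asm = (p ** 2).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    mu = (i * p).sum()                         # mean grey level (row marginal)
    var = ((i - mu) ** 2 * p).sum()
    corr = (((i - mu) * (j - mu) * p).sum() / var) if var > 0 else 1.0
    return dict(entropy=entropy, asm=asm, variance=var, correlation=corr)

rng = np.random.default_rng(0)
noisy = rng.integers(0, 8, size=(64, 64))      # disordered texture
flat = np.zeros((64, 64), dtype=int)           # perfectly homogeneous texture
f_noisy, f_flat = glcm_features(noisy), glcm_features(flat)
```

    As expected, the homogeneous image has maximal ASM and zero entropy, while the disordered image shows the reverse pattern, the same qualitative contrast the study reports between cortical and medullar nuclei.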

  16. Nuclear entropy, angular second moment, variance and texture correlation of thymus cortical and medullar lymphocytes: grey level co-occurrence matrix analysis.

    Science.gov (United States)

    Pantic, Igor; Pantic, Senka; Paunovic, Jovana; Perovic, Milan

    2013-09-01

    Grey level co-occurrence matrix analysis (GLCM) is a well-known mathematical method for quantification of cell and tissue textural properties, such as homogeneity, complexity and level of disorder. Recently, it was demonstrated that this method is capable of evaluating fine structural changes in nuclear structure that otherwise are undetectable during standard microscopy analysis. In this article, we present the results indicating that entropy, angular second moment, variance, and texture correlation of lymphocyte nuclear structure determined by GLCM method are different in thymus cortex when compared to medulla. A total of 300 thymus lymphocyte nuclei from 10 one-month-old mice were analyzed: 150 nuclei from cortex and 150 nuclei from medullar regions of thymus. Nuclear GLCM analysis was carried out using National Institutes of Health ImageJ software. For each nucleus, entropy, angular second moment, variance and texture correlation were determined. Cortical lymphocytes had significantly higher chromatin angular second moment (p < 0.001) and texture correlation (p < 0.05) compared to medullar lymphocytes. Nuclear GLCM entropy and variance of cortical lymphocytes were on the other hand significantly lower than in medullar lymphocytes (p < 0.001). These results suggest that GLCM as a method might have a certain potential in detecting discrete changes in nuclear structure associated with lymphocyte migration and maturation in thymus.

  17. 40 CFR 142.43 - Disposition of a variance request.

    Science.gov (United States)

    2010-07-01

    ... during the period of variance shall specify interim treatment techniques, methods and equipment, and... the specified treatment technique for which the variance was granted is necessary to protect...

  18. The Theory of Variances in Equilibrium Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-14

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  19. Heritabilities of ego strength (factor C), super ego strength (factor G), and self-sentiment (factor Q3) by multiple abstract variance analysis.

    Science.gov (United States)

    Cattell, R B; Schuerger, J M; Klein, T W

    1982-10-01

    Tested over 3,000 boys (identical and fraternal twins, ordinary sibs, general population) aged 12-18 on Ego Strength, Super Ego Strength, and Self Sentiment. The Multiple Abstract Variance Analysis (MAVA) method was used to obtain estimates of abstract (hereditary, environmental) variances and covariances that contribute to total variation in the three traits. Within-family heritabilities for these traits were about .30, .05, and .65. Between-family heritabilities were .60, .08, and .45. Within-family correlations of genetic and environmental deviations were trivial, unusually so among personality variables, but between-family values showed the usual high negative values, consistent with the law of coercion to the biosocial mean.

  20. Fractional constant elasticity of variance model

    OpenAIRE

    Ngai Hang Chan; Chi Tim Ng

    2007-01-01

    This paper develops a European option pricing formula for fractional market models. Although there exist option pricing results for a fractional Black-Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. A European option pricing formula similar to that of the classical CEV model is obtained and a volatility skew pattern is revealed.

  1. CAIXA. II. AGNs from excess variance analysis (Ponti+, 2012) [Dataset

    NARCIS (Netherlands)

    Ponti, G.; Papadakis, I.E.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.

    2012-01-01

    We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10ks in pointed observations, which is the largest sample used so far to study AGN X-ray variability

  2. Measurement and modeling of acid dissociation constants of tri-peptides containing Glu, Gly, and His using potentiometry and generalized multiplicative analysis of variance.

    Science.gov (United States)

    Khoury, Rima Raffoul; Sutton, Gordon J; Hibbert, D Brynn; Ebrahimi, Diako

    2013-02-28

    We report pK(a) values with measurement uncertainties for all labile protons of the 27 tri-peptides prepared from the amino acids glutamic acid (E), glycine (G) and histidine (H). Each tri-peptide (GGG, GGE, GGH, …, HHH) was subjected to alkali titration and pK(a) values were calculated from triplicate potentiometric titration data using the HyperQuad 2008 software. A generalized multiplicative analysis of variance (GEMANOVA) of pK(a) values for the most acidic proton gave the optimum model having two terms, an interaction between the end amino acids plus an isolated main effect of the central amino acid.

  3. Meta-analysis with missing study-level sample variance data.

    Science.gov (United States)

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
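    The mean-imputation baseline that the abstract argues against is easy to state concretely. A sketch with invented study-level numbers (the paper's proposed gamma meta-regression is not reproduced here):

```python
import numpy as np

# Hypothetical study-level data: mean difference, sample size, and sample
# variance (np.nan where the study did not report a variance).
effects = np.array([0.30, 0.55, 0.20, 0.45, 0.10])
n       = np.array([40,   25,   60,   30,   50])
s2      = np.array([1.2, np.nan, 0.9, np.nan, 1.1])

# Mean imputation: replace missing variances by the sample-size-weighted
# mean of the observed variances.
obs = ~np.isnan(s2)
s2_imp = np.where(obs, s2, np.average(s2[obs], weights=n[obs]))

# Inverse-variance weighting with var(effect_i) proportional to s2_i / n_i.
w = n / s2_imp
pooled = np.sum(w * effects) / np.sum(w)

# Complete case analysis uses only the studies with a reported variance.
w_cc = n[obs] / s2[obs]
pooled_cc = np.sum(w_cc * effects[obs]) / np.sum(w_cc)
```

    Both shortcuts are only valid when data are missing completely at random, which is exactly the limitation the proposed multiple-imputation method addresses.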

  4. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    Science.gov (United States)

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.

  5. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  6. Using a variance-based sensitivity analysis for analyzing the relation between measurements and unknown parameters of a physical model

    Science.gov (United States)

    Zhao, J.; Tiede, C.

    2011-05-01

    An implementation of uncertainty analysis (UA) and quantitative global sensitivity analysis (SA) is applied to the non-linear inversion of gravity changes and three-dimensional displacement data which were measured in an active volcanic area. A didactic example is included to illustrate the computational procedure. The main emphasis is placed on the extended Fourier amplitude sensitivity test (E-FAST). This method produces the total sensitivity indices (TSIs), so that all interactions between the unknown input parameters are taken into account. The possible correlations between the output and the input parameters can be evaluated by uncertainty analysis. Uncertainty analysis results indicate the general fit between the physical model and the measurements. Results of the sensitivity analysis show quite different sensitivities for the measured changes as they relate to the unknown parameters of a physical model for an elastic-gravitational source. Assuming a fixed number of executions, thirty different seeds are observed to determine the stability of this method.
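    The variance-decomposition idea behind E-FAST can be illustrated with the simpler brute-force Monte Carlo (Sobol "pick-freeze") estimator of first-order indices. This is a different estimator than the paper's E-FAST, applied to a toy function standing in for the forward model:

```python
import numpy as np

def model(x):
    # Toy nonlinear stand-in for the elastic-gravitational forward model
    # (purely illustrative): the second input dominates the output variance.
    return np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

rng = np.random.default_rng(1)
N, k = 200_000, 3
A = rng.uniform(-np.pi, np.pi, size=(N, k))
B = rng.uniform(-np.pi, np.pi, size=(N, k))
yA, yB = model(A), model(B)
var_y = np.concatenate([yA, yB]).var()

# First-order index S_i: fraction of output variance explained by x_i alone.
# Pick-freeze: AB_i equals B except that column i is copied from A, so yA
# and model(AB_i) share only input i.
S = np.empty(k)
for i in range(k):
    AB_i = B.copy()
    AB_i[:, i] = A[:, i]
    S[i] = np.mean(yA * (model(AB_i) - yB)) / var_y
```

    For this additive toy model the first-order indices should sum to roughly one, with nearly all the variance attributed to the quadratic term.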

  7. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative sampling schemes

  8. Cultural variances in composition of biological and supernatural concepts of death: a content analysis of children's literature.

    Science.gov (United States)

    Lee, Ji Seong; Kim, Eun Young; Choi, Younyoung; Koo, Ja Hyouk

    2014-01-01

    Children's reasoning about the afterlife emerges naturally as a developmental regularity. Although a biological understanding of death increases in accordance with cognitive development, biological and supernatural explanations of death may coexist in a complementary manner, being deeply imbedded in cultural contexts. This study conducted a content analysis of 40 children's death-themed picture books in Western Europe and East Asia. It can be inferred that causality and non-functionality are highly integrated with the naturalistic and supernatural understanding of death in Western Europe, whereas the literature in East Asia seems to rely on naturalistic aspects of death and focuses on causal explanations.

  9. A univariate analysis of variance design for multiple-choice feeding-preference experiments: A hypothetical example with fruit-eating birds

    Science.gov (United States)

    Larrinaga, Asier R.

    2010-01-01

    I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design, for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays each containing a known weight of artificial fruits (red, blue, black, or green) are introduced into the cage, while four equivalent trays are left outside the cage, to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.

  10. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean-variance approach

    DEFF Research Database (Denmark)

    Kitzing, Lena

    2014-01-01

    Using cash flow analysis, Monte Carlo simulations and mean-variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same...
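    The mechanism behind the tariff-versus-premium comparison can be mimicked in a toy Monte Carlo setting. All numbers below are hypothetical, and the actual study models an offshore wind park in far more detail:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims = 10_000
energy = 3.5 * 8760            # MWh sold per year (constant output for simplicity)

# Hypothetical average market price per simulated year, EUR/MWh.
price = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n_sims)

tariff, premium = 65.0, 15.0   # fixed feed-in tariff vs. fixed feed-in premium

rev_tariff = np.full(n_sims, energy * tariff)   # tariff: no market exposure
rev_premium = energy * (price + premium)        # premium: market price + top-up

mean_t, sd_t = rev_tariff.mean(), rev_tariff.std()
mean_p, sd_p = rev_premium.mean(), rev_premium.std()
```

    The tariff removes price risk entirely (zero revenue standard deviation here), so in a mean-variance framework a risk-averse investor accepts a lower expected revenue, which is the intuition behind tariffs requiring lower direct support levels.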

  11. Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions

    Science.gov (United States)

    Luhar, Ashok K.

    2010-05-01

    Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
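    A minimal NumPy sketch of the scalar/vector wind-speed distinction and of splitting fluctuations into longitudinal and lateral components (synthetic data, not the field measurements used in the paper; the cross-wind noise is deliberately made larger to show that the two variances need not be equal):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic low-wind record: weak mean flow with large meander.
u = 0.5 + 0.4 * rng.standard_normal(5000)   # east-west component, m/s
v = 0.1 + 0.6 * rng.standard_normal(5000)   # north-south component, m/s

speed = np.hypot(u, v)
scalar_mean = speed.mean()                  # cup-anemometer style average
vector_mean = np.hypot(u.mean(), v.mean())  # magnitude of the mean vector

# Rotate into the mean-wind frame to split longitudinal/lateral fluctuations.
theta = np.arctan2(v.mean(), u.mean())
u_long =  u * np.cos(theta) + v * np.sin(theta)   # along the mean wind
v_lat  = -u * np.sin(theta) + v * np.cos(theta)   # across the mean wind
sigma_u2, sigma_v2 = u_long.var(), v_lat.var()
```

    Under weak winds the scalar mean exceeds the vector mean (the norm is convex), and the lateral variance here clearly differs from the longitudinal one, consistent with the paper's point that assuming the two are equal is not necessarily justified.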

  12. Advanced methods of analysis variance on scenarios of nuclear prospective; Metodos avanzados de analisis de varianza en escenarios de prospectiva nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-07-01

    Traditional techniques for the propagation of variance are not very reliable when relative uncertainties reach 100%; for this reason, less conventional methods are used instead, such as the Beta distribution, Fuzzy Logic and the Monte Carlo Method.

  13. A Multiband Generalization of the Analysis of Variance Period Estimation Algorithm and the Effect of Inter-band Observing Cadence on Period Recovery Rate

    CERN Document Server

    Mondrik, Nicholas; Marshall, Jennifer L

    2015-01-01

    We present a new method of extending the single band Analysis of Variance period estimation algorithm to multiple bands. We use SDSS Stripe 82 RR Lyrae to show that in the case of a low number of observations per band and non-simultaneous observations, improvements in period recovery rates of up to ≈60% are observed. We also investigate the effect of inter-band observing cadence on period recovery rates. We find that using non-simultaneous observation times between bands is ideal for the multiband method, and using simultaneous multiband data is only marginally better than using single band data. These results will be particularly useful in planning observing cadences for wide-field astronomical imaging surveys such as LSST. They also have the potential to improve the extraction of transient data from surveys with few (≲30) observations per band across several bands, such as the Dark Energy Survey.
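    The single-band Analysis of Variance period statistic that the paper extends can be sketched in a few lines: fold the light curve at a trial period, bin by phase, and take the ratio of between-bin to within-bin variance. A minimal illustration on a synthetic sinusoidal light curve (not the paper's multiband generalization):

```python
import numpy as np

def aov_statistic(t, mag, period, n_bins=8):
    """Analysis-of-variance period statistic: fold at a trial period and
    compare between-phase-bin variance to within-bin variance (a one-way
    ANOVA F ratio). Large values flag a coherent periodic signal."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    grand = mag.mean()
    ss_between, ss_within, used = 0.0, 0.0, 0
    for b in range(n_bins):
        grp = mag[bins == b]
        if grp.size == 0:
            continue
        used += 1
        ss_between += grp.size * (grp.mean() - grand) ** 2
        ss_within += ((grp - grp.mean()) ** 2).sum()
    return (ss_between / (used - 1)) / (ss_within / (mag.size - used))

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 100, 400))
true_period = 0.7
mag = 15.0 + 0.5 * np.sin(2 * np.pi * t / true_period) \
      + 0.05 * rng.standard_normal(400)

periods = np.linspace(0.5, 1.0, 1001)
stats = np.array([aov_statistic(t, mag, p) for p in periods])
best = periods[np.argmax(stats)]
```

    Scanning the statistic over a period grid recovers the injected period; the multiband method in the abstract combines such statistics across bands.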

  14. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability...... that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending...... on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time....

  15. A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model

    Institute of Scientific and Technical Information of China (English)

    Hou Ying-li; Liu Guo-xin; Jiang Chun-lan

    2015-01-01

    In this paper, we focus on a constant elasticity of variance (CEV) model and want to find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.

  16. Identification of Analytical Factors Affecting Complex Proteomics Profiles Acquired in a Factorial Design Study with Analysis of Variance: Simultaneous Component Analysis.

    Science.gov (United States)

    Mitra, Vikram; Govorukhina, Natalia; Zwanenburg, Gooitzen; Hoefsloot, Huub; Westra, Inge; Smilde, Age; Reijmers, Theo; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer; Horvatovich, Péter

    2016-04-19

    Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances which in turn influence the outcome of the associated statistical analysis and validation. It is therefore important to determine which factors influence the abundance of peptides in a complex proteomics experiment and to identify those peptides that are most influenced by these factors. In the current study we analyzed depleted human serum samples to evaluate experimental factors that may influence the resulting peptide profile such as the residence time in the autosampler at 4 °C, stopping or not stopping the trypsin digestion with acid, the type of blood collection tube, different hemolysis levels, differences in clotting times, the number of freeze-thaw cycles, and different trypsin/protein ratios. To this end we used a two-level fractional factorial design of resolution IV (a 2^(7-3) design). The design required analysis of 16 samples in which the main effects were not confounded by two-factor interactions. Data preprocessing using the Threshold Avoiding Proteomics Pipeline (Suits, F.; Hoekman, B.; Rosenling, T.; Bischoff, R.; Horvatovich, P. Anal. Chem. 2011, 83, 7786-7794, ref 1) produced a data-matrix containing quantitative information on 2,559 peaks. The intensity of the peaks was log-transformed, and peaks having a low t-test significance (p-value > 0.05) and a low absolute fold ratio between factor levels were removed. The remaining peaks were subjected to analysis of variance (ANOVA)-simultaneous component analysis (ASCA). Permutation tests were used to identify which of the preanalytical factors influenced the abundance of the measured peptides most significantly. The most important preanalytical factors affecting peptide intensity were (1) the hemolysis level, (2) stopping trypsin digestion with

  17. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

    Energy Technology Data Exchange (ETDEWEB)

    Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); School of Advanced International Studies on Nuclear, Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: fisio2@fisiol.uniba.it; Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)

    2009-08-15

    It is known that R-R time series calculated from a recorded ECG are strongly correlated to sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis still dominates the analysis of such data, ignoring that the FFT is valid only under crucial restrictions, such as linearity and stationarity, that are largely violated in R-R time series. To go beyond this approach, we introduce a new method, called CZF. It is based on variogram analysis and stems from a profound link with Recurrence Quantification Analysis, which is a basic tool for the investigation of nonlinear and nonstationary time series. A relevant feature of the method is therefore that it may also be applied to nonlinear and nonstationary time series. In addition, the method enables analysis of the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to direct experimental cases of normal subjects, patients with hypertension before and after therapy, and children under several different experimental conditions.

  18. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCM as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRM, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in-situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze RRM temporal sensitivity to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then allows identifying the parameters to which modeled water level and discharge are most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while

  19. Analysis of variance, normal quantile-quantile correlation and effective expression support of pooled expression ratio of reference genes for defining expression stability.

    Science.gov (United States)

    Priyadarshi, Himanshu; Das, Rekha; Kumar, Shivendra; Kishore, Pankaj; Kumar, Sujit

    2017-01-01

    Identification of a reference gene unaffected by the experimental conditions is obligatory for accurate measurement of gene expression through relative quantification. Most existing methods directly analyze variability in crossing point (Cp) values of reference genes and fail to account for template-independent factors that affect Cp values in their estimates. We describe the use of three simple statistical methods namely analysis of variance (ANOVA), normal quantile-quantile correlation (NQQC) and effective expression support (EES), on pooled expression ratios of reference genes in a panel to overcome this issue. The pooling of expression ratios across the genes in the panel nullify the sample specific effects uniformly affecting all genes that are falsely reflected as instability. Our methods also offer the flexibility to include sample specific PCR efficiencies in estimations, when available, for improved accuracy. Additionally, we describe a correction factor from the ANOVA method to correct the relative fold change of a target gene if no truly stable reference gene could be found in the analyzed panel. The analysis is described on a synthetic data set to simplify the explanation of the statistical treatment of data.
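    The core observation, that sample-specific (template-independent) shifts cancel when expression ratios are pooled, can be demonstrated with simulated Cp values. This is an illustration of the pooling idea only; the paper's ANOVA/NQQC/EES machinery is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200                                        # hypothetical samples

# A per-sample effect (template amount, pipetting, ...) shifts the Cp of
# every gene in that sample by the same amount.
sample_effect = rng.normal(0.0, 1.0, n)
cp_ref1 = 20.0 + sample_effect + rng.normal(0.0, 0.1, n)   # stable gene 1
cp_ref2 = 24.0 + sample_effect + rng.normal(0.0, 0.1, n)   # stable gene 2

# On the Cp (log) scale an expression ratio is a difference, so pooling
# ratios cancels the shared sample effect.
ratio = cp_ref1 - cp_ref2

raw_sd = cp_ref1.std()     # looks unstable: inflated by the sample effect
ratio_sd = ratio.std()     # reflects only gene-level variability
```

    Analyzing raw Cp values would falsely flag both genes as unstable, while the pooled ratio exposes their true stability, the distortion the abstract's methods are designed to avoid.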

  20. Teaching Inquiry on the Application of Excel in Variance Analysis

    Institute of Scientific and Technical Information of China (English)

    万海清

    2014-01-01

    As practical teaching in domestic undergraduate probability and statistics courses increasingly adopts Excel as an operating platform for beginners, this paper explores the workflow for carrying out variance analysis in Excel on unbalanced data and on data with a nested structure. The key idea is to organize the data around the within-group correction number and then simplify the sum-of-squares decomposition formula, so that when performing variance analysis in Excel the data-handling workflow is fully standardized, the sum-of-squares decomposition becomes more concise, and the degrees-of-freedom decomposition and F test are no longer cumbersome. This provides a broadly applicable model for using Excel spreadsheets in the practical teaching of probability and statistics courses for non-statistics undergraduates.
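    The correction-number workflow described in the abstract maps directly onto a few spreadsheet formulas. The same unbalanced one-way decomposition, sketched in Python with hypothetical data:

```python
import numpy as np

# Unbalanced one-way data: three treatment groups of unequal size,
# as one would lay them out in adjacent Excel columns.
groups = [np.array([12.1, 13.4, 12.8]),
          np.array([14.2, 15.1, 14.8, 15.5]),
          np.array([11.0, 11.6])]

N = sum(g.size for g in groups)
T = sum(g.sum() for g in groups)
CF = T**2 / N                                   # the correction number C = T^2/N

SS_total = sum((g**2).sum() for g in groups) - CF
SS_between = sum(g.sum()**2 / g.size for g in groups) - CF
SS_within = SS_total - SS_between               # decomposition: SST = SSB + SSW

df_between, df_within = len(groups) - 1, N - len(groups)
F = (SS_between / df_between) / (SS_within / df_within)
```

    In Excel the same quantities come from SUM, SUMSQ and COUNT per column plus the single correction cell, which is what makes the workflow formatable for unbalanced group sizes.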

  1. Regarding to the Variance Analysis of Regression Equation of the Surface Roughness obtained by End Milling process of 7136 Aluminium Alloy

    Science.gov (United States)

    POP, A. B.; ȚÎȚU, M. A.

    2016-11-01

    In the metal cutting process, surface quality is intrinsically related to the cutting parameters and to the cutting tool geometry. At the same time, metal cutting processes are closely related to the machining costs. The purpose of this paper is to reduce manufacturing costs and processing time. A study was made, based on the mathematical modelling of the arithmetic mean roughness (Ra) resulting from the end milling of 7136 aluminium alloy, as a function of the cutting process parameters. The novel element brought by this paper is the 7136 aluminium alloy chosen for the experiments, a material developed and patented by Universal Alloy Corporation. This aluminium alloy is used in the aircraft industry to make parts from extruded profiles, and it has not been studied in the proposed research direction. Based on this research, a mathematical model of the surface roughness Ra was established as a function of the cutting parameters studied over a set experimental field. A regression analysis was performed, which identified the quantitative relationships between the cutting parameters and the surface roughness. Using the analysis of variance (ANOVA), the degree of confidence in the results achieved by the regression equation was determined, along with the suitability of this equation at every point of the experimental field.

  2. Understanding the influence of watershed storage caused by human interferences on ET variance

    Science.gov (United States)

    Zeng, R.; Cai, X.

    2014-12-01

    Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its controlling factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually filters ET variance through the use of surface water and groundwater; however, excessive irrigation may deplete watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko hypothesis. It decomposes the ET variance into the variances of precipitation, potential ET, and catchment storage change, and their covariances. The contributions of the various components to ET variance are scaled by weighting functions expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively short time scales. By incorporating storage change caused by human interferences, this framework corrects the overestimation of ET variance in hot-dry climates and the underestimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.

  3. Dimension reduction in heterogeneous neural networks: Generalized Polynomial Chaos (gPC) and ANalysis-Of-VAriance (ANOVA)

    Science.gov (United States)

    Choi, M.; Bertalan, T.; Laing, C. R.; Kevrekidis, I. G.

    2016-09-01

    We propose, and illustrate via a neural network example, two different approaches to coarse-graining large heterogeneous networks. Both approaches are inspired by, and use tools developed in, methods for uncertainty quantification (UQ) in systems with multiple uncertain parameters - in our case, the parameters are heterogeneously distributed on the network nodes. The approach shows promise in accelerating large-scale network simulations as well as coarse-grained fixed point, periodic solution computation and stability analysis. We also demonstrate that the approach can successfully deal with structural as well as intrinsic heterogeneities.

  4. Optimization of radio astronomical observations using Allan variance measurements

    CERN Document Server

    Schieder, R

    2001-01-01

    Stability tests based on the Allan variance method have become a standard procedure for evaluating the quality of radio-astronomical instrumentation. They are very simple and simulate the situation of detecting weak signals buried in large noise fluctuations. For the special conditions during observations, an outline of the basic properties of the Allan variance is given, and some guidelines on how to interpret the results of the measurements are presented. Based on a rather simple mathematical treatment, clear rules for observations in ``Position-Switch'', ``Beam-'' or ``Frequency-Switch'', ``On-The-Fly-'' and ``Raster-Mapping'' modes are derived. Also, a simple ``rule of thumb'' for estimating the optimum timing of the observations is found. The analysis leads to a conclusive strategy for planning radio-astronomical observations. Particularly for air- and space-borne observatories, it is very important to determine how the extremely precious observing time can be used with maximum efficiency. The...
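As a loose illustration of the quantity the test is built on (not of Schieder's observing-strategy analysis itself), the non-overlapping Allan variance of a fractional-frequency series can be computed directly from its definition; the function name and the white-noise check are our own:

```python
import numpy as np

def allan_variance(y, averaging_factors):
    """Non-overlapping Allan variance of a fractional-frequency series y:
    sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>, where ybar_k are
    averages over consecutive blocks of m samples (tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    out = {}
    for m in averaging_factors:
        n_blocks = len(y) // m
        if n_blocks < 2:
            break
        block_means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        out[m] = 0.5 * np.mean(np.diff(block_means) ** 2)
    return out

# For white frequency noise, the Allan variance falls off as 1/tau.
rng = np.random.default_rng(0)
avar = allan_variance(rng.normal(size=100_000), [1, 10, 100])
```

The expected 1/tau slope for white noise is what the stability test exploits: a real instrument deviates from it once drifts and gain fluctuations dominate.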

  5. Variance-component analysis of obesity in Type 2 Diabetes confirms loci on chromosomes 1q and 11q

    NARCIS (Netherlands)

    Haeften, T.W. van; Pearson, P.L.; Tilburg, J.H.O. van; Strengman, E.; Sandkuijl, L.A.; Wijmenga, C.

    2003-01-01

    To study genetic loci influencing obesity in nuclear families with type 2 diabetes, we performed a genome-wide screen with 325 microsatellite markers that had an average spacing of 11 cM and a mean heterozygosity of ~75% covering all 22 autosomes. Genotype data were obtained from 562 individuals fro

  6. Multivariate Analysis of Variance: Finding significant growth in mice with craniofacial dysmorphology caused by the Crouzon mutation

    DEFF Research Database (Denmark)

    Thorup, Signe Strann; Ólafsdóttir, Hildur; Darvann, Tron Andre;

    2010-01-01

    Crouzon syndrome is characterized by growth disturbances caused by premature fusion of the cranial growth zones. A mouse model with mutation Fgfr2C342Y, equivalent to the most common Crouzon syndrome mutation (henceforth called the Crouzon mouse model), has a phenotype showing many parallels... Micro-CT scans of 4-week-old mice (N=5) and 6-week-old mice (N=10) with Crouzon syndrome (Fgfr2C342Y/+) were compared to control groups of 4-week-old wild-type mice (N=5) and 6-week-old wild-type mice (N=10), respectively.

  7. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique based on the variances of RR intervals, we find good performance in atrial fibrillation detection.
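The irregularity criterion can be sketched as a sliding-window variance threshold. The window length and threshold below are arbitrary illustrative values, not the authors' tuned parameters; a real detector would be calibrated on annotated ECG data:

```python
import numpy as np

def af_flags(rr_intervals, window=10, threshold=0.01):
    """Flag each window of RR intervals (in seconds) whose sample variance
    exceeds a threshold -- a crude irregularity criterion for illustration."""
    rr = np.asarray(rr_intervals, dtype=float)
    return [np.var(rr[i : i + window]) > threshold
            for i in range(len(rr) - window + 1)]

# Regular rhythm (low RR variance) vs. an irregular, AF-like rhythm.
rng = np.random.default_rng(1)
regular = 0.8 + 0.01 * rng.standard_normal(50)
irregular = 0.8 + 0.2 * rng.standard_normal(50)
```

On these synthetic series, windows of the regular rhythm fall well below the threshold while almost all windows of the irregular rhythm exceed it.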

  8. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  9. Genetic variance of tolerance and the toxicant threshold model.

    Science.gov (United States)

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
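The balanced one-way ANOVA decomposition used for an isofemale-line design can be sketched as follows. The simulated variances (genetic 1, environmental 4, so true H² = 0.2, matching the magnitude reported above) are our own illustration, not the Daphnia data:

```python
import numpy as np

def broad_sense_heritability(lines):
    """Estimate H^2 from a balanced one-way design: `lines` is a list of
    equal-length arrays of phenotypes, one per isofemale line.
    Between-line variance component: (MS_between - MS_within) / n."""
    lines = [np.asarray(l, dtype=float) for l in lines]
    k, n = len(lines), len(lines[0])
    grand = np.mean([l.mean() for l in lines])
    ms_between = n * sum((l.mean() - grand) ** 2 for l in lines) / (k - 1)
    ms_within = sum(((l - l.mean()) ** 2).sum() for l in lines) / (k * (n - 1))
    v_g = max(0.0, (ms_between - ms_within) / n)   # between-line component
    return v_g / (v_g + ms_within)

# 40 simulated lines of 50 individuals: line effects with variance 1,
# environmental noise with variance 4, so the true H^2 is 0.2.
rng = np.random.default_rng(2)
lines = [rng.normal(0.0, 2.0, 50) + g for g in rng.normal(0.0, 1.0, 40)]
h2 = broad_sense_heritability(lines)
```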

  10. Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.

    Science.gov (United States)

    Ashby, Neil; Patla, Bijunath

    2016-04-01

    Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
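The drift insensitivity follows directly from the second-difference definition of the (non-overlapping) Hadamard variance: a linear frequency drift is annihilated by the second difference of block averages. This sketch is our own illustration, not the matrix-trace formulation of the paper:

```python
import numpy as np

def hadamard_variance(y, m):
    """Non-overlapping Hadamard variance at averaging factor m:
    (1/6) * mean of squared second differences of block averages."""
    y = np.asarray(y, dtype=float)
    n_blocks = len(y) // m
    means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    d2 = means[2:] - 2.0 * means[1:-1] + means[:-2]   # kills linear drift
    return np.mean(d2 ** 2) / 6.0

rng = np.random.default_rng(3)
white = rng.normal(size=60_000)               # white frequency noise
drift = 1e-4 * np.arange(60_000)              # linear frequency drift
h_white = hadamard_variance(white, 10)
h_drift = hadamard_variance(white + drift, 10)
```

Adding the drift leaves the Hadamard variance essentially unchanged, whereas the Allan variance of the drifting series would grow with the sampling time.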

  11. Research on variance of subnets in network sampling

    Institute of Scientific and Technical Information of China (English)

    Qi Gao; Xiaoting Li; Feng Pan

    2014-01-01

    In the recent research of network sampling, some sampling concepts are misunderstood, and the variance of subnets is not taken into account. We propose the correct definition of the sample and sampling rate in network sampling, as well as the formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, random network and small-world network to explore the variance in network sampling. As proved by the results, snowball sampling obtains the most variance of subnets, but does well in capturing the network structure. The variances of networks sampled by the hub and random strategies are much smaller. The hub strategy performs well in reflecting the property of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.

  12. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with the problem, one after another. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variation in statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
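The statistical concept the method borrows can be illustrated with a generic coefficient-of-variation feature score: between-class spread of a feature relative to its overall mean level. This is our own toy sketch of the CV idea only, not the DCV algorithm described in the paper:

```python
import numpy as np

def cv_feature_scores(X, y, eps=1e-12):
    """Score each feature column by the coefficient of variation of its
    class means: between-class spread relative to the overall mean level.
    Generic illustration of the CV concept, not the DCV method itself."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    class_means = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
    return class_means.std(axis=0) / (np.abs(X.mean(axis=0)) + eps)

# Feature 0 separates the two classes; feature 1 does not.
rng = np.random.default_rng(8)
X = rng.normal(5.0, 0.5, size=(200, 2))
y = np.repeat([0, 1], 100)
X[y == 1, 0] += 3.0
scores = cv_feature_scores(X, y)
```

A discriminative feature gets a markedly higher score than a non-discriminative one, which is the intuition behind using CV-style ratios for feature extraction.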

  13. Productive Failure in Learning the Concept of Variance

    Science.gov (United States)

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  14. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance

    NARCIS (Netherlands)

    Hickey, J.M.; Veerkamp, R.F.; Calus, M.P.L.; Mulder, H.A.; Thompson, R.

    2009-01-01

    Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sam

  15. Confidence Intervals of Variance Functions in Generalized Linear Model

    Institute of Scientific and Technical Information of China (English)

    Yong Zhou; Dao-ji Li

    2006-01-01

    In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs), when the designs are fixed points and random variables, respectively. Bias-corrected confidence bands are proposed for the (conditional) variance based on local linear smoothers. Nonparametric techniques are developed for deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and we show that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation is performed in which the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.

  16. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  17. Heritable Environmental Variance Causes Nonlinear Relationships Between Traits: Application to Birth Weight and Stillbirth of Pigs

    NARCIS (Netherlands)

    Mulder, H.A.; Hill, W.G.; Knol, E.F.

    2015-01-01

    There is recent evidence from laboratory experiments and analysis of livestock populations that not only the phenotype itself, but also its environmental variance, is under genetic control. Little is known about the relationships between the environmental variance of one trait and mean levels of oth

  18. MULTILEVEL MODELING OF THE PERFORMANCE VARIANCE

    Directory of Open Access Journals (Sweden)

    Alexandre Teixeira Dias

    2012-12-01

    Full Text Available. Focusing on the identification of the role played by industry in the relations between corporate strategic factors and performance, the hierarchical multilevel modeling method was adopted to measure and analyze the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective to the study of the proposed relations was confirmed. The relative importance analysis points to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance when the latter is measured by return on assets, and indicates that industry does not moderate the relations between corporate strategic factors and Tobin's Q. The main conclusions of the research are that an organization's choices in terms of corporate strategy have a considerable influence on, and play a key role in determining, the performance level, but that industry should be considered when analyzing performance variation, whether or not it moderates the relations between corporate strategic factors and performance.

  19. Variance of indoor radon concentration: Major influencing factors.

    Science.gov (United States)

    Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M

    2016-01-15

    Variance of the radon concentration in dwelling atmospheres is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling patterns, sample sizes and durations of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: area of the territory, sample size, characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of these control factors. Application of the developed approach to characterization of the radon exposure of the world population is discussed.

  20. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  1. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    Science.gov (United States)

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  2. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance.

    Science.gov (United States)

    Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin

    2009-02-09

    Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples, which are computationally feasible, is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive and these made use of information on either the variance of the estimated breeding value and on the variance of the true breeding value minus the estimated breeding value or on the covariance between the true and estimated breeding values.

  3. Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model

    OpenAIRE

    Leunglung Chan; Eckhard Platen

    2015-01-01

    This paper studies volatility derivatives such as variance swaps, volatility swaps and options on variance in the modified constant elasticity of variance (CEV) model using the benchmark approach. Analytical pricing formulas for variance swaps are presented. In addition, numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.

  4. Variance of partial sums of stationary sequences

    CERN Document Server

    Deligiannidis, George

    2012-01-01

    Let $X_1, X_2, \ldots$ be a centred sequence of weakly stationary random variables with spectral measure $F$ and partial sums $S_n = X_1 + \cdots + X_n$, and let $G(x) = \int_{-x}^{x} F(\mathrm{d}x)$. We show that $\operatorname{var}(S_n)$ is regularly varying of index $\gamma$ at infinity, if and only if $G(x)$ is regularly varying of index $2-\gamma$ at the origin ($0<\gamma<2$).

  5. Image embedded coding with edge preservation based on local variance analysis for mobile applications

    Science.gov (United States)

    Luo, Gaoyong; Osypiw, David

    2006-02-01

    Transmitting digital images via mobile devices is often subject to bandwidth constraints that are incompatible with high data rates. Embedded coding for progressive image transmission has recently gained popularity in the image compression community. However, current progressive wavelet-based image coders tend to send information on the lowest-frequency wavelet coefficients first. At very low bit rates, compressed images are therefore dominated by low-frequency information, and the high-frequency components belonging to edges are lost, blurring the signal features. This paper presents a new image coder employing edge preservation based on local variance analysis to improve the visual appearance and recognizability of compressed images. The analysis and compression are performed by dividing an image into blocks. A fast lifting wavelet transform is developed, with the advantages of being computationally efficient and of minimizing boundary effects by changing the wavelet shape when filtering near the boundaries. A modified SPIHT algorithm, with more bits used to encode the wavelet coefficients and fewer bits transmitted in the sorting pass, is implemented to reduce the correlation of the coefficients at scalable bit rates. Local variance estimation and edge strength measurement can effectively determine the best bit allocation for each block to preserve the local features, by assigning more bits to blocks containing more edges with higher variance and edge strength. Experimental results demonstrate that the method performs well both visually and in terms of MSE and PSNR. The proposed image coder provides a potential solution, with parallel computation and low memory requirements, for mobile applications.

  6. Mean and variance of coincidence counting with deadtime

    CERN Document Server

    Yu, D F

    2002-01-01

    We analyze the first and second moments of the coincidence-counting process for a system affected by paralyzable (extendable) deadtime with (possibly unequal) deadtimes in each singles channel. We consider both 'accidental' and 'genuine' coincidences, and derive exact analytical expressions for the first and second moments of the number of recorded coincidence events under various scenarios. The results include an exact form for the coincidence rate under the combined effects of decay, background, and deadtime. The analysis confirms that coincidence counts are not exactly Poisson, but suggests that the Poisson statistical model that is used for positron emission tomography image reconstruction is a reasonable approximation since the mean and variance are nearly equal.

  7. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.

  8. Data Warehouse Designs Achieving ROI with Market Basket Analysis and Time Variance

    CERN Document Server

    Silvers, Fon

    2011-01-01

    Market Basket Analysis (MBA) provides the ability to continually monitor the affinities of a business and can help an organization achieve a key competitive advantage. Time-variant data enables data warehouses to directly associate events in the past with the participants in each individual event. In the past, however, the use of these powerful tools in tandem led to performance degradation and resulted in unactionable and even damaging information. Data Warehouse Designs: Achieving ROI with Market Basket Analysis and Time Variance presents an innovative, soup-to-nuts approach that successfully

  9. Time Variance of the Suspension Nonlinearity

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.; Pedersen, Bo Rohde

    2008-01-01

    This paper investigates the changes in compliance that the driving signal can cause; this includes low-level, short-duration measurements of the resonance frequency as well as high-power, long-duration measurements of the nonlinearity of the suspension. It is found that at low levels the suspension softens...

  10. Partitioning of genomic variance using biological pathways

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per;

    ...... diseases. However, the variants identified as being statistically significant have generally explained only a small fraction of the heritable component of the trait. Insufficient modelling of the underlying genetic architecture may in part explain this missing heritability. Evidence collected across GWAS for complex diseases reveals patterns that provide insight into the genetic architecture of complex traits. Although many genetic variants with small or moderate effects contribute to the overall genetic variation, it appears that multiple independently associated variants are located in the same genes......

  11. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

    Science.gov (United States)

    Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing

    2016-06-01

    The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering.
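The 8×8 block variance the analysis is built on can be computed with a simple reshape. The intensity-quantization demo below is a crude stand-in for JPEG's DCT-coefficient quantization, for illustration only; it is not the paper's derivation:

```python
import numpy as np

def block_variances(img, block=8):
    """Variance of each non-overlapping block x block tile of a 2-D image."""
    h, w = img.shape
    h, w = h - h % block, w - w % block           # crop to whole tiles
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.transpose(0, 2, 1, 3).reshape(-1, block * block).var(axis=1)

# Coarsely quantizing the intensities of a smooth image (a crude proxy for
# JPEG's coefficient quantization) collapses the local variance.
rng = np.random.default_rng(4)
img = rng.normal(128.0, 2.0, size=(64, 64))
quantized = np.round(img / 16.0) * 16.0
v_orig = block_variances(img).mean()
v_quant = block_variances(quantized).mean()
```

For smooth regions the quantized image becomes nearly piecewise constant within blocks, so the mean local variance drops sharply, consistent with the qualitative behavior the paper analyzes.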

  12. Is the ANOVA F-Test Robust to Variance Heterogeneity When Sample Sizes are Equal?: An Investigation via a Coefficient of Variation

    Science.gov (United States)

    Rogan, Joanne C.; Keselman, H. J.

    1977-01-01

    The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
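A simulation in the spirit of the study (our own sketch, not Rogan and Keselman's design) makes the point with three equal-sized groups whose variances are in the ratio 1:4:16; the critical value 3.354 is the upper 5% point of F(2, 27):

```python
import numpy as np

def type1_rate(sds, n=10, reps=5000, f_crit=3.354, seed=5):
    """Empirical Type I error of the one-way ANOVA F-test when all group
    means are equal (H0 is true) but group standard deviations differ.
    f_crit is the upper 5% point of F(2, 27) for 3 groups of n = 10."""
    rng = np.random.default_rng(seed)
    k = len(sds)
    hits = 0
    for _ in range(reps):
        groups = [rng.normal(0.0, sd, n) for sd in sds]
        means = np.array([g.mean() for g in groups])
        msb = n * ((means - means.mean()) ** 2).sum() / (k - 1)
        msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
        hits += (msb / msw) > f_crit
    return hits / reps

rate_equal = type1_rate([1.0, 1.0, 1.0])   # homogeneous variances
rate_heter = type1_rate([1.0, 2.0, 4.0])   # variance ratio 1:4:16
```

With homogeneous variances the empirical rate stays near the nominal 0.05, while the heterogeneous case is inflated even though the sample sizes are equal, which is the abstract's point.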

  13. Saturation of number variance in embedded random-matrix ensembles

    Science.gov (United States)

    Prakash, Ravi; Pandey, Akhilesh

    2016-05-01

We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For ensembles of systems of two noninteracting particles, we find that, unlike the spectra of classical random matrices, the correlation functions are nonstationary. In the locally stationary region of the spectra, we study the number variance and the spacing distributions. The spacing distributions follow Poisson statistics, a key signature of uncorrelated spectra. The number variance varies linearly, as in the Poisson case, for short correlation lengths, but a kind of regularization occurs for large correlation lengths and the number variance approaches saturation values. These results are known in the study of integrable systems but are demonstrated here for the first time in random matrix theory. We conjecture that the interacting-particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.


  15. The positioning algorithm based on feature variance of billet character

    Science.gov (United States)

    Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang

    2015-12-01

In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet within complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of the billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. Since there are three rows of characters on each steel billet, we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and thereby accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.
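The variance-based segmentation step resembles Otsu's classical criterion, which maximizes between-class variance (equivalently, minimizes within-class, i.e. intra-cluster, variance). A minimal one-dimensional sketch of that criterion, not the paper's actual recursive multilevel-filtering algorithm:

```python
def otsu_threshold(values):
    """Threshold maximizing between-class variance (Otsu's criterion).

    A standard stand-in for the variance-based segmentation step the
    abstract mentions; minimizing within-class variance is equivalent.
    """
    vs = sorted(values)
    n = len(vs)
    best_t, best_sep = vs[0], -1.0
    for i in range(1, n):
        lo, hi = vs[:i], vs[i:]
        w0, w1 = len(lo) / n, len(hi) / n
        m0 = sum(lo) / len(lo)
        m1 = sum(hi) / len(hi)
        sep = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if sep > best_sep:
            best_sep, best_t = sep, (lo[-1] + hi[0]) / 2
    return best_t

# Bimodal intensities split between the two clusters.
t = otsu_threshold([0, 0, 1, 1, 8, 9, 9])
```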

  16. Variance squeezing and entanglement of the XX central spin model

    Energy Technology Data Exchange (ETDEWEB)

    El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2011-01-21

In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider the central atom and its surroundings to be initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are observed in the behavior of all components of the system. The atomic variance can exhibit a revival-collapse phenomenon depending on the value of the detuning parameter.

  17. Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues.

    Science.gov (United States)

    Yang, M; Virshup, G; Clayton, J; Zhu, X R; Mohan, R; Dong, L

    2010-03-07

    We discovered an empirical relationship between the logarithm of mean excitation energy (ln Im) and the effective atomic number (EAN) of human tissues, which allows for computing patient-specific proton stopping power ratios (SPRs) using dual-energy CT (DECT) imaging. The accuracy of the DECT method was evaluated for 'standard' human tissues as well as their variance. The DECT method was compared to the existing standard clinical practice-a procedure introduced by Schneider et al at the Paul Scherrer Institute (the stoichiometric calibration method). In this simulation study, SPRs were derived from calculated CT numbers of known material compositions, rather than from measurement. For standard human tissues, both methods achieved good accuracy with the root-mean-square (RMS) error well below 1%. For human tissues with small perturbations from standard human tissue compositions, the DECT method was shown to be less sensitive than the stoichiometric calibration method. The RMS error remained below 1% for most cases using the DECT method, which implies that the DECT method might be more suitable for measuring patient-specific tissue compositions to improve the accuracy of treatment planning for charged particle therapy. In this study, the effects of CT imaging artifacts due to the beam hardening effect, scatter, noise, patient movement, etc were not analyzed. The true potential of the DECT method achieved in theoretical conditions may not be fully achievable in clinical settings. Further research and development may be needed to take advantage of the DECT method to characterize individual human tissues.

  18. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  19. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, yi = xiT β + g(ti) + εi, 1 ≤ i ≤ n, where {εi, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ2. Following the ideas of Allan Gut and Aurel Spataru [7, 8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  20. Kalman filtering techniques for reducing variance of digital speckle displacement measurement noise

    Institute of Scientific and Technical Information of China (English)

    Donghui Li; Li Guo

    2006-01-01

Target dynamics are assumed to be known in measuring digital speckle displacement. A simple measurement equation is used, in which measurement noise represents the effect of disturbances introduced in the measurement process. Under these assumptions, a Kalman filter can be designed to reduce the variance of the measurement noise. An optical measurement and analysis system was set up, with which object motion with constant displacement and constant velocity was measured to verify the validity of Kalman filtering techniques for reducing measurement noise variance.
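For the constant-displacement case described above, the Kalman filter reduces to a simple scalar recursion whose posterior variance shrinks with each measurement. A hedged sketch; the model, helper name, and numbers are illustrative, not the authors' setup:

```python
def kalman_constant(measurements, r, p0=1e6, x0=0.0):
    """Scalar Kalman filter for a constant state observed in noise.

    Assumed model: x_k = x_{k-1} (constant displacement),
    z_k = x_k + v_k with Var(v_k) = r.
    Returns the final estimate and its posterior variance.
    """
    x, p = x0, p0
    for z in measurements:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the innovation
        p = (1 - k) * p          # posterior variance shrinks each step
    return x, p

# Noisy readings of a displacement near 10.0 (made-up data).
x_hat, p_hat = kalman_constant([10.2, 9.8, 10.1, 9.9, 10.0], r=0.04)
```

With a diffuse prior (`p0` large), the recursion converges to the running mean and the posterior variance falls roughly as r/n, which is the variance-reduction effect the abstract verifies experimentally.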

  1. The dynamic Allan Variance IV: characterization of atomic clock anomalies.

    Science.gov (United States)

    Galleani, Lorenzo; Tavella, Patrizia

    2015-05-01

    The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
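The Allan variance and its sliding-window dynamic version (DAVAR) can be sketched directly from fractional-frequency data. In this toy example a frequency jump midway through the record raises the DAVAR, analogous to the anomaly signatures the abstract characterizes; the helper names are illustrative:

```python
def allan_variance(y, m):
    """Non-overlapped Allan variance of fractional-frequency data y
    at averaging factor m (tau = m * tau0)."""
    bins = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(b2 - b1) ** 2 for b1, b2 in zip(bins, bins[1:])]
    return sum(diffs) / (2 * len(diffs))

def davar(y, m, window):
    """Dynamic Allan variance: Allan variance over a sliding window,
    so a clock anomaly shows up as a change in the curve over time."""
    return [allan_variance(y[t:t + window], m)
            for t in range(len(y) - window + 1)]

# A frequency jump halfway through the record raises the DAVAR there.
y = [0.0] * 50 + [1.0] * 50
curve = davar(y, m=1, window=20)
```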

  2. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-04-01

20 CFR 901.40 (Employees' Benefits): Proof; variance; amendment of pleadings. Joint Board for the Enrollment of Actuaries, Regulations Governing the Performance of Actuarial Services under the Employee Retirement Income Security Act of 1974.

  3. 75 FR 22424 - Avalotis Corp.; Grant of a Permanent Variance

    Science.gov (United States)

    2010-04-28

... the drum. This variance adopts the definition of, and specifications for, fleet angle from... definition of "static drop test" specified by section 3 ("Definitions") and the static drop test... FOR FURTHER INFORMATION CONTACT: General information and press inquiries. For general information and...

4. Analysis of dynamic characteristics of a fiber-optic gyroscope based on dynamic Allan variance

    Institute of Scientific and Technical Information of China (English)

    李绪友; 张娜

    2011-01-01

In order to study the dynamic characteristics of a fiber-optic gyroscope, it was proposed that the dynamic error obtained by testing a fiber-optic gyroscope be analyzed by the dynamic Allan variance. According to the principle of the window function in the dynamic Allan variance method, the analysis results for the dynamic error were discussed under different window-length conditions. Furthermore, one kind of single sway movement and two kinds of composite sway movement were analyzed by the dynamic Allan variance method and their results are provided. The fluctuating variation of the variance in the analysis figures accurately reflects the non-stationary factors in the dynamic error, such as abrupt changes and periodic variation, and clearly identifies the different sway states hidden in the dynamic errors. Both theoretical analysis and experimental results indicate that the dynamic Allan variance method is well suited to analyzing the dynamic characteristics of a fiber-optic gyroscope.

  5. Heterogeneity of variances for carcass traits by percentage Brahman inheritance.

    Science.gov (United States)

    Crews, D H; Franke, D E

    1998-07-01

Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance: one model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative-free REML procedures, and likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P < 0.05) for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance considered as a source of heterogeneity of variance.

  6. Statistical test of reproducibility and operator variance in thin-section modal analysis of textures and phenocrysts in the Topopah Spring member, drill hole USW VH-2, Crater Flat, Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Moore, L.M.; Byers, F.M. Jr.; Broxton, D.E.

    1989-06-01

A thin-section operator-variance test was given to the two junior authors, petrographers, by the senior author, a statistician, using 16 thin sections cut from core plugs drilled by the US Geological Survey from drill hole USW VH-2 standard (HCQ) drill core. The thin sections are samples of Topopah Spring devitrified rhyolite tuff from four textural zones, in ascending order: (1) lower nonlithophysal, (2) lower lithophysal, (3) middle nonlithophysal, and (4) upper lithophysal. Drill hole USW VH-2 is near the center of Crater Flat, about 6 miles WSW of the Yucca Mountain exploration block. The original thin-section labels were opaqued out with removable enamel and renumbered with alpha-numeric labels. The slides were then given to the petrographer operators for quantitative thin-section modal (point-count) analysis of cryptocrystalline, spherulitic, granophyric, and void textures, as well as phenocryst minerals. Between-operator variance was tested by giving the two petrographers the same slide, and within-operator variance was tested by giving the same operator the same slide to count in a second test set, administered at least three months after the first set. Both operators were unaware that they were receiving the same slide to recount. 14 figs., 6 tabs.

  7. Predicting Risk Sensitivity in Humans and Lower Animals: Risk as Variance or Coefficient of Variation

    Science.gov (United States)

    Weber, Elke U.; Shafir, Sharoni; Blais, Ann-Renee

    2004-01-01

    This article examines the statistical determinants of risk preference. In a meta-analysis of animal risk preference (foraging birds and insects), the coefficient of variation (CV), a measure of risk per unit of return, predicts choices far better than outcome variance, the risk measure of normative models. In a meta-analysis of human risk…
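The contrast between outcome variance and the coefficient of variation (risk per unit of return) is easy to see numerically. A small illustration with made-up gambles, using only the standard library:

```python
from statistics import mean, pstdev

def risk_stats(outcomes):
    """Return (variance, coefficient of variation) of a set of outcomes.

    The CV normalizes risk by the mean return, which is the measure the
    meta-analysis found to predict choices better than raw variance.
    """
    mu = mean(outcomes)
    sd = pstdev(outcomes)
    return sd ** 2, sd / mu

# Two gambles with identical variance but very different mean payoffs:
var_a, cv_a = risk_stats([90, 110])    # mean 100
var_b, cv_b = risk_stats([990, 1010])  # mean 1000
```

Normative models treat the two gambles as equally risky (same variance), while the CV ranks the low-mean gamble as riskier, matching the behavioral findings.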

  8. Recombining binomial tree for constant elasticity of variance process

    OpenAIRE

    Hi Jun Choe; Jeong Ho Chu; So Jeong Shin

    2014-01-01

The theme of this paper is a recombining binomial tree for pricing American put options when the underlying stock follows a constant elasticity of variance (CEV) process. The recombining nodes of the binomial tree are determined from a finite difference scheme to emulate the CEV process, and the tree has linear complexity. The asymptotic envelope of the boundary of the tree is also derived from the differential equation. Conducting numerical experiments, we confirm the convergence and accuracy of the pricing by our method.

  9. Explaining the Prevalence, Scaling and Variance of Urban Phenomena

    CERN Document Server

    Gomez-Lievano, Andres; Hausmann, Ricardo

    2016-01-01

    The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.

  10. The return of the variance: intraspecific variability in community ecology.

    Science.gov (United States)

    Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie

    2012-04-01

    Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.
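The within- versus between-species variance comparison underlying such T-statistics can be sketched as a simple variance partition. The function and the trait values below are illustrative, not the paper's actual statistics:

```python
from statistics import mean

def variance_partition(trait_by_species):
    """Partition total (population) trait variance into within- and
    between-species components: total = within + between."""
    all_vals = [v for vals in trait_by_species.values() for v in vals]
    grand = mean(all_vals)
    n = len(all_vals)
    within = sum(sum((v - mean(vals)) ** 2 for v in vals)
                 for vals in trait_by_species.values()) / n
    between = sum(len(vals) * (mean(vals) - grand) ** 2
                  for vals in trait_by_species.values()) / n
    return within, between

# Two hypothetical species with distinct trait means.
w, b = variance_partition({"sp1": [1.0, 2.0, 3.0], "sp2": [7.0, 8.0, 9.0]})
```

A small within/between ratio indicates that species identity explains most trait variation; intraspecific variability raises the within component.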

  11. Convergence of Recursive Identification for ARMAX Process with Increasing Variances

    Institute of Scientific and Technical Information of China (English)

    JIN Ya; LUO Guiming

    2007-01-01

The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model is a finite mixture of the ARMA component and external inputs. In this paper we focus on parameter estimation for the ARMAX model. Classical modeling methods are usually based on the assumption that the driving noise in the moving average (MA) part has bounded variances, while in the model considered here the variances of the noise may increase as a power of log n. The plant parameters are identified by the recursive stochastic gradient algorithm. The diminishing excitation technique and some results of martingale difference theory are adopted in order to prove the convergence of the identification. Finally, some simulations are given to show the theoretical results.

  12. VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM

    OpenAIRE

    RANJU KANWAR; SAMEKSHA BHASKAR

    2013-01-01

In a communication system, the noise process must be known in order to compute the system performance. Nonlinear effects act as a strong perturbation in long-haul systems. This perturbation affects the signal when it interacts with amplitude noise, and results in random motion of the phase of the signal. Based on perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation is derived analytically for a phase-shift-keying system. Through th...

  13. Variance optimal sampling based estimation of subset sums

    CERN Document Server

    Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\log k)$ time, which is optimal even on the word RAM.
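For context, the classical uniform reservoir sampler (Algorithm R) that such schemes generalize can be sketched as follows. This is only the unweighted baseline; the paper's variance-optimal, weight-aware scheme is more involved and is not reproduced here:

```python
import random

def reservoir_sample(stream, k, rng):
    """Classic uniform reservoir sampling (Algorithm R).

    After processing n >= k items, each item is in the reservoir with
    probability exactly k/n. Baseline only: it ignores item weights and
    does not optimize subset-sum variance.
    """
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)  # keep the new item with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(1000), 10, random.Random(42))
```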

  14. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure quantitatively the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even while the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique, because it integrates the kriging model explicitly to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a similar trend to the root mean squared error, so that it can be used as a stopping criterion for sequential sampling.

  15. Avoiding Aliasing in Allan Variance: an Application to Fiber Link Data Analysis

    CERN Document Server

    Calosso, Claudio E; Micalizio, Salvatore

    2015-01-01

Optical fiber links are known as the best-performing tools to transfer ultrastable frequency reference signals. However, these signals are affected by phase noise up to bandwidths of several kilohertz, and a careful data processing strategy is required to properly estimate the uncertainty. This aspect is often overlooked, and a number of approaches have been proposed to deal with it implicitly. Here, we address this issue in terms of aliasing and show how typical tools of signal analysis can be adapted to evaluating the performance of optical fiber links. In this way, it is possible to use the Allan variance as the estimator of stability, and there is no need to introduce other estimators. The general rules we derive can be extended to all optical links. As an example, we apply this method to the experimental data we obtained on a 1284 km coherent optical link for frequency dissemination, which we realized in Italy.

  16. Identification and quantification of peptides and proteins secreted from prostate epithelial cells by unbiased liquid chromatography tandem mass spectrometry using goodness of fit and analysis of variance.

    Science.gov (United States)

    Florentinus, Angelica K; Bowden, Peter; Sardana, Girish; Diamandis, Eleftherios P; Marshall, John G

    2012-02-01

    The proteins secreted by prostate cancer cells (PC3(AR)6) were separated by strong anion exchange chromatography, digested with trypsin and analyzed by unbiased liquid chromatography tandem mass spectrometry with an ion trap. The spectra were matched to peptides within proteins using a goodness of fit algorithm that showed a low false positive rate. The parent ions for MS/MS were randomly and independently sampled from a log-normal population and therefore could be analyzed by ANOVA. Normal distribution analysis confirmed that the parent and fragment ion intensity distributions were sampled over 99.9% of their range that was above the background noise. Arranging the ion intensity data with the identified peptide and protein sequences in structured query language (SQL) permitted the quantification of ion intensity across treatments, proteins and peptides. The intensity of 101,905 fragment ions from 1421 peptide precursors of 583 peptides from 233 proteins separated over 11 sample treatments were computed together in one ANOVA model using the statistical analysis system (SAS) prior to Tukey-Kramer honestly significant difference (HSD) testing. Thus complex mixtures of proteins were identified and quantified with a high degree of confidence using an ion trap without isotopic labels, multivariate analysis or comparing chromatographic retention times.

17. Analysis of Cross-distribution for Estimating Variance Components in Generalizability Theory

    Institute of Scientific and Technical Information of China (English)

    黎光明; 张敏强

    2012-01-01

Data distribution has an effect on the methods of estimating variance components in generalizability theory. Methods that can be applied to normally distributed data may not be applicable to other data distributions, such as dichotomous and polytomous distributions. Data distribution imposes restrictions on estimating variance components for these four methods, so the methods need to be distinguished in order to do a proper cross-distribution analysis of variance component estimation in generalizability theory.

  18. Regional flood frequency analysis based on a Weibull model: Part 1. Estimation and asymptotic variances

    Science.gov (United States)

    Heo, Jun-Haeng; Boes, D. C.; Salas, J. D.

    2001-02-01

    Parameter estimation in a regional flood frequency setting, based on a Weibull model, is revisited. A two parameter Weibull distribution at each site, with common shape parameter over sites that is rationalized by a flood index assumption, and with independence in space and time, is assumed. The estimation techniques of method of moments and method of probability weighted moments are studied by proposing a family of estimators for each technique and deriving the asymptotic variance of each estimator. Then a single estimator and its asymptotic variance for each technique, suggested by trying to minimize the asymptotic variance over the family of estimators, is obtained. These asymptotic variances are compared to the Cramer-Rao Lower Bound, which is known to be the asymptotic variance of the maximum likelihood estimator. A companion paper considers the application of this model and these estimation techniques to a real data set. It includes a simulation study designed to indicate the sample size required for compatibility of the asymptotic results to fixed sample sizes.
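Method-of-moments estimation for a two-parameter Weibull can be sketched by solving for the shape through the squared coefficient of variation, which depends on the shape alone. The bisection helper below is an illustrative implementation, not the paper's family of estimators:

```python
from math import gamma

def weibull_mom(mean_, var_):
    """Method-of-moments fit of a two-parameter Weibull.

    Solves CV^2 = Gamma(1+2/k)/Gamma(1+1/k)^2 - 1 for the shape k by
    bisection (the left side is decreasing in k), then recovers the scale.
    """
    cv2 = var_ / mean_ ** 2

    def cv2_of(k):
        g1, g2 = gamma(1 + 1 / k), gamma(1 + 2 / k)
        return g2 / g1 ** 2 - 1

    lo, hi = 0.1, 50.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if cv2_of(mid) > cv2:
            lo = mid
        else:
            hi = mid
    k = (lo + hi) / 2
    scale = mean_ / gamma(1 + 1 / k)
    return k, scale

# Round-trip check: theoretical moments of Weibull(k=2, scale=1).
m = gamma(1.5)                    # mean
v = gamma(2.0) - gamma(1.5) ** 2  # variance
k_hat, scale_hat = weibull_mom(m, v)
```

The same shape-from-CV idea underlies the regional (index-flood) setting, where a common shape is fitted across sites.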

  19. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas;

    2011-01-01

The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these results are qualitative in the sense that the parameters of the limit distribution are expressed in terms of some limiting point process. In this paper we are able to determine the parameters of the limiting stable distribution in terms of some tail characteristics of the underlying stationary process.

  20. Reduction of variance in measurements of average metabolite concentration in anatomically-defined brain regions

    Science.gov (United States)

    Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki

    2016-11-01

    Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.

  1. Critical points of multidimensional random Fourier series: Variance estimates

    Science.gov (United States)

    Nicolaescu, Liviu I.

    2016-08-01

We investigate the number of critical points of a Gaussian random smooth function u_ε on the m-torus T^m := R^m/Z^m approximating the Gaussian white noise as ε → 0. Let N(u_ε) denote the number of critical points of u_ε. We prove the existence of constants C, C' such that as ε goes to zero, the expectation of the random variable ε^m N(u_ε) converges to C, while its variance is extremely small and behaves like C' ε^m.

  2. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

We study how to compute the expected value and variance of geometric measures over randomly selected subsets of a point set P. This problem is a crucial part of modern ecological analyses; each point in P represents a species in d-dimensional trait space, and the goal is to compute the statistics of a geometric measure on this trait space when subsets of species are selected under random processes. We present efficient exact algorithms for computing the mean and variance of several geometric measures when point sets are selected under one of the described random distributions. More specifically, we provide algorithms for the following measures: the bounding box volume, the convex hull volume, and the mean pairwise distance.

  3. Genetically controlled environmental variance for sternopleural bristles in Drosophila melanogaster - an experimental test of a heterogeneous variance model

    DEFF Research Database (Denmark)

    Sørensen, Anders Christian; Kristensen, Torsten Nygård; Loeschcke, Volker

    2007-01-01

    quantitative genetics model based on the infinitesimal model, and an extension of this model. In the extended model it is assumed that each individual has its own environmental variance and that this heterogeneity of variance has a genetic component. The heterogeneous variance model was favoured by the data......, indicating that the environmental variance is partly under genetic control. If this heterogeneous variance model also applies to livestock, it would be possible to select for animals with a higher uniformity of products across environmental regimes. Also for evolutionary biology the results are of interest...

  4. Local orbitals by minimizing powers of the orbital variance

    DEFF Research Database (Denmark)

    Jansik, Branislav; Høst, Stinne; Kristensen, Kasper;

    2011-01-01

    It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function...... is obtained. For increasing power exponents, the penalty for delocalized orbitals is increased and smaller maximum orbital spreads are encountered. Calculations on superbenzene, C60, and a fragment of the titin protein show that for a power exponent equal to one, delocalized outlier orbitals may...

  5. Discrete Time Mean-variance Analysis with Singular Second Moment Matrixes and an Exogenous Liability

    Institute of Scientific and Technical Information of China (English)

    Wen Cai CHEN; Zhong Xing YE

    2008-01-01

    We apply dynamic programming methods to compute the analytical solution of the dynamic mean-variance optimization problem affected by an exogenous liability in a multi-period market model with singular second moment matrixes of the return vector of assets. We use orthogonal transformations to overcome the difficulty produced by those singular matrixes, and the analytical form of the efficient frontier is obtained. As an application, the explicit form of the optimal mean-variance hedging strategy is also obtained for our model.

  6. Cosmic variance of the galaxy cluster weak lensing signal

    CERN Document Server

    Gruen, D; Becker, M R; Friedrich, O; Mana, A

    2015-01-01

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m=10^14...10^15 h^-1 M_sol, z=0.25...0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ~20 per cent uncertainty from cosmic variance alone at M_200m=10^15 h^-1 M_sol and z=0.25), but significant also...

  7. Employing components-of-variance to evaluate forensic breath test instruments.

    Science.gov (United States)

    Gullberg, Rod G

    2008-03-01

    The evaluation of breath alcohol instruments for forensic suitability generally includes the assessment of accuracy, precision, linearity, blood/breath comparisons, etc. Although relevant and important, these methods fail to evaluate other important analytical and biological components related to measurement variability. An experimental design comparing different instruments measuring replicate breath samples from several subjects is presented here. Three volunteers provided n = 10 breath samples into each of six different instruments within an 18-minute period. Two-way analysis of variance was employed, which quantified the between-instrument effect and the subject/instrument interaction. Variance contributions were also determined for the analytical and biological components. Significant between-instrument effects and subject/instrument interactions were observed. The biological component of total variance ranged from 56% to 98% among all subject/instrument combinations. Such a design can help quantify the influence of breath sampling parameters and optimize them to reduce total measurement variability and enhance overall forensic confidence.
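
The two-way ANOVA decomposition described above can be sketched with invented numbers; the layout (3 subjects x 6 instruments x 10 replicate samples) mirrors the study, but the values and variance magnitudes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented BrAC data mirroring the study layout:
# 3 subjects x 6 instruments x 10 replicate breath samples.
S, I, N = 3, 6, 10
subj = rng.normal(0.08, 0.02, size=(S, 1, 1))    # biological differences
inst = rng.normal(0.0, 0.002, size=(1, I, 1))    # instrument offsets
noise = rng.normal(0.0, 0.003, size=(S, I, N))   # analytical noise
brac = subj + inst + noise

grand = brac.mean()
m_si = brac.mean(axis=2)             # cell means
m_s = brac.mean(axis=(1, 2))         # subject means
m_i = brac.mean(axis=(0, 2))         # instrument means

# Balanced two-way ANOVA sums of squares.
ss_subj = I * N * ((m_s - grand) ** 2).sum()
ss_inst = S * N * ((m_i - grand) ** 2).sum()
ss_int = N * ((m_si - m_s[:, None] - m_i[None, :] + grand) ** 2).sum()
ss_res = ((brac - m_si[:, :, None]) ** 2).sum()
ss_tot = ((brac - grand) ** 2).sum()

print(f"biological (subject) share of total SS: {ss_subj / ss_tot:.1%}")
```

In the balanced design the four sums of squares partition the total exactly, which is what lets the study attribute shares of total variance to biological and analytical components.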

  8. Estimation of measurement variance in the context of environment statistics

    Science.gov (United States)

    Maiti, Pulakesh

    2015-02-01

    The objective of environment statistics is to provide information on the environment and its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics are required to produce high-quality statistical information. For this, timely, reliable and comparable data are needed. The lack of proper and uniform definitions and of unambiguous classifications poses serious problems for procuring qualitative data; these cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered here is two-stage sampling.

  9. Worldwide variance in the potential utilization of Gamma Knife radiosurgery.

    Science.gov (United States)

    Hamilton, Travis; Dade Lunsford, L

    2016-12-01

    OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.

  10. VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM

    Directory of Open Access Journals (Sweden)

    RANJU KANWAR

    2013-04-01

    Full Text Available In a communication system, the noise process must be known in order to compute the system performance. Nonlinear effects act as a strong perturbation in long-haul systems. This perturbation affects the signal when it interacts with amplitude noise, and results in random motion of the phase of the signal. Based on perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation is derived analytically for a phase-shift-keying system. Through this work, it is shown that for longer transmission distances, 40-Gb/s systems are more sensitive to nonlinear phase noise than 50-Gb/s systems. Also, when transmitting data through the fiber-optic link, bit errors are produced due to various effects such as noise from optical amplifiers and nonlinearity occurring in the fiber. On the basis of the simulation results, we have compared the bit error rate based on 8-PSK with theoretical results; the results show that, in a real-time approach, the bit error rate is high for the same signal-to-noise ratio. MATLAB software is used to validate the analytical expressions for the variance of nonlinear phase noise.

  11. Effect of window shape on the detection of hyperuniformity via the local number variance

    Science.gov (United States)

    Kim, Jaeuk; Torquato, Salvatore

    2017-01-01

    Hyperuniform many-particle systems in d-dimensional space ℝᵈ, which include crystals, quasicrystals, and some exotic disordered systems, are characterized by an anomalous suppression of density fluctuations at large length scales such that the local number variance within a ‘spherical’ observation window grows slower than the window volume. In usual circumstances, this direct-space condition is equivalent to the Fourier-space hyperuniformity condition that the structure factor vanishes as the wavenumber goes to zero. In this paper, we comprehensively study the effect of aspherical window shapes with characteristic size L on the direct-space condition for hyperuniform systems. For lattices, we demonstrate that the variance growth rate can depend on the shape as well as the orientation of the windows, and in some cases, the growth rate can be faster than the window volume (i.e. Lᵈ), which may lead one to falsely conclude that the system is non-hyperuniform solely according to the direct-space condition. We begin by numerically investigating the variance of two-dimensional lattices using ‘superdisk’ windows, whose convex shapes continuously interpolate between circles (p = 1) and squares (p → ∞), as prescribed by a deformation parameter p, when the superdisk symmetry axis is aligned with the lattice. Subsequently, we analyze the variance for lattices as a function of the window orientation, especially for two-dimensional lattices using square windows (superdisks when p → ∞). Based on this analysis, we explain why the variance for d = 2 can grow faster than the window area or even slower than the window perimeter (e.g. like ln(L)). We then extend the condition on the window orientation, under which the variance can grow as fast as or faster than Lᵈ (the window volume), to the case of Bravais lattices and parallelepiped windows in ℝᵈ. In the case of isotropic disordered hyperuniform systems, we
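
A minimal direct-space check of hyperuniformity can be sketched for the simplest case the paper builds on, a 2D square lattice with circular ("spherical") windows; the lattice size, radii, and trial counts below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unit-density square lattice in a periodic box (illustrative sketch; the paper
# treats general Bravais lattices and 'superdisk' windows).
n = 64
box = float(n)
xs, ys = np.meshgrid(np.arange(n), np.arange(n))
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

def number_variance(R, trials=2000):
    """Variance of the point count inside a circular window of radius R,
    averaged over random window centers (periodic boundary conditions)."""
    counts = np.empty(trials)
    for t in range(trials):
        c = rng.uniform(0, box, size=2)
        d = np.abs(pts - c)
        d = np.minimum(d, box - d)          # minimum-image distances
        counts[t] = np.count_nonzero((d * d).sum(axis=1) < R * R)
    return counts.var()

# Hyperuniformity: for spherical windows the variance grows much slower than
# the window area pi R^2 (roughly like the perimeter for a lattice).
for R in (4.0, 8.0, 12.0):
    print(f"R = {R}: variance = {number_variance(R):.2f}, area = {np.pi * R * R:.1f}")
```

Swapping the circular indicator for a superdisk or rotated-square indicator is what the paper does to expose the shape and orientation dependence.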

  12. Analysis of enterococci using portable testing equipment for developing countries--variance of Azide NutriDisk medium under variable time and temperature.

    Science.gov (United States)

    Godfrey, S; Watkins, J; Toop, K; Francis, C

    2006-01-01

    This report compares the enterococci count on samples obtained with Azide NutriDisk (AND) (sterile, dehydrated culture medium) and Slanetz and Bartley (SB) medium when exposed to variations in incubation time and temperature. Three experiments were performed to examine the recovery of enterococci on AND and SB media using membrane filtration with respect to: (a) incubation time; (b) incubation temperature; and (c) a combination of the two. Presumptive counts were observed at 37, 41, 46 and 47 degrees C and at 20, 24, 28 and 48 h. These were compared to AWWA standard method 9230 C (44 degrees C, 44 h). Samples were confirmed using Kanamycin Aesculin Azide (KAA) agar. Friedman's ANOVA and Student's t-test analysis indicated higher enumeration of enterococci when grown on AND (p = 0.45) than SB (p < 0.001) at all temperatures, with a survival threshold at 47 degrees C. Significant results for AND medium were noted at 20 h (p = 0.021), 24 h (p = 0.278) and 28 h (p = 0.543). The study concluded that the accuracy of the AND medium over a greater time and temperature range provided flexibility in incubator technology, making it an appropriate alternative to SB medium for monitoring drinking water using field testing kits in developing countries.
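
Friedman's ANOVA, used above to compare recovery across incubation conditions, can be sketched with scipy; the colony counts below are invented, and the assumed drop in recovery near 47 degrees C is for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Invented enterococci counts (CFU/100 mL): 8 water samples (blocks), each
# incubated at three temperatures (treatments). Recovery is assumed to drop
# sharply near the 47 degrees C survival threshold.
base = rng.poisson(120, size=8).astype(float)
counts_37 = base
counts_44 = 0.95 * base + rng.normal(0, 3, 8)
counts_47 = 0.60 * base + rng.normal(0, 3, 8)

# Friedman's test ranks treatments within each block, so it needs no
# normality assumption on the raw counts.
stat, p = stats.friedmanchisquare(counts_37, counts_44, counts_47)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```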

  13. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
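
The recursive versus direct strategies compared in the paper can be illustrated on a toy AR(1) series; the coefficient, series length, and horizon below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy AR(1) series y_t = 0.8 * y_{t-1} + noise (all choices arbitrary).
T, phi = 500, 0.8
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

train = y[:400]
h = 5  # forecast horizon

def fit_slope(x, lag):
    """Least-squares slope of x[t+lag] on x[t] (regression through the origin)."""
    a, b = x[:-lag], x[lag:]
    return float((a * b).sum() / (a * a).sum())

phi1 = fit_slope(train, 1)   # one-step model, iterated h times: recursive strategy
phih = fit_slope(train, h)   # separate model fitted per horizon: direct strategy

test_idx = range(400, 490)
mse_rec = np.mean([(y[t + h] - phi1**h * y[t]) ** 2 for t in test_idx])
mse_dir = np.mean([(y[t + h] - phih * y[t]) ** 2 for t in test_idx])
print("recursive MSE:", mse_rec, "| direct MSE:", mse_dir)
```

When the one-step model is well specified, as here, iterating it tends to have low variance; the direct strategy avoids accumulating iteration bias when it is not, which is the trade-off the paper quantifies.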

  14. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
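
A static minimum-variance portfolio, the building block of the time-varying version used above, solves w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the covariance numbers below are invented stand-ins for the three NAFTA markets.

```python
import numpy as np

# Illustrative covariance matrix of returns for the three NAFTA markets
# (Canada, Mexico, USA); the numbers are invented for this sketch.
cov = np.array([[0.040, 0.018, 0.016],
                [0.018, 0.090, 0.021],
                [0.016, 0.021, 0.030]])

ones = np.ones(3)
inv = np.linalg.inv(cov)
w = inv @ ones / (ones @ inv @ ones)   # global minimum-variance weights

print("weights:", w.round(3))
print("portfolio variance:", w @ cov @ w)
```

A time-varying version, as in the paper, would re-estimate `cov` on a rolling window and recompute `w` at each date.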

  15. Enhancing melting curve analysis for the discrimination of loop-mediated isothermal amplification products from four pathogenic molds: Use of inorganic pyrophosphatase and its effect in reducing the variance in melting temperature values.

    Science.gov (United States)

    Tone, Kazuya; Fujisaki, Ryuichi; Yamazaki, Takashi; Makimura, Koichi

    2017-01-01

    Loop-mediated isothermal amplification (LAMP) is widely used for differentiating causative agents in infectious diseases. Melting curve analysis (MCA) in conjunction with the LAMP method reduces both the labor required to conduct an assay and contamination of the products. However, two factors influence the melting temperature (Tm) of LAMP products: an inconsistent concentration of Mg²⁺ ion due to the precipitation of Mg₂P₂O₇, and the guanine-cytosine (GC) content of the starting dumbbell-like structure. In this study, we investigated the influence of inorganic pyrophosphatase (PPase), an enzyme that inhibits the production of Mg₂P₂O₇, on the Tm of LAMP products, and examined the correlation between the above factors and the Tm value using MCA. A set of LAMP primers that amplify the ribosomal DNA of the large subunit of Aspergillus fumigatus, Penicillium expansum, Penicillium marneffei, and Histoplasma capsulatum was designed, and the LAMP reaction was performed using serial concentrations of these fungal genomic DNAs as templates in the presence and absence of PPase. We compared the Tm values obtained from the PPase-free group and the PPase-containing group, and the relationship between the GC content of the theoretical starting dumbbell-like structure and the Tm values of the LAMP product from each fungus was analyzed. The range of Tm values obtained for several fungi overlapped in the PPase-free group. In contrast, in the PPase-containing group, the variance in Tm values was smaller and there was no overlap in the Tm values obtained for all fungi tested: the LAMP product of each fungus had a specific Tm value, and the average Tm value increased as the GC% of the starting dumbbell-like structure increased. The use of PPase therefore reduced the variance in the Tm value and allowed the differentiation of these pathogenic fungi using the MCA method.

  16. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk

  17. Estimation of genetic variation in residual variance in female and male broiler chickens

    NARCIS (Netherlands)

    Mulder, H.A.; Hill, W.G.; Vereijken, A.; Veerkamp, R.F.

    2009-01-01

    In breeding programs, robustness of animals and uniformity of end product can be improved by exploiting genetic variation in residual variance. Residual variance can be defined as environmental variance after accounting for all identifiable effects. The aims of this study were to estimate genetic va

  18. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    Monika eFleischhauer

    2013-09-01

    Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign, biases that might result, for example, from the IAT’s stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis. However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis, a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  19. Concept design theory and model for multi-use space facilities: Analysis of key system design parameters through variance of mission requirements

    Science.gov (United States)

    Reynerson, Charles Martin

    This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.

  20. How to assess intra- and inter-observer agreement with quantitative PET using variance component analysis

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen;

    2016-01-01

    relation to Bland-Altman plots. Here, we present this approach for assessment of intra- and inter-observer variation with PET/CT exemplified with data from two clinical studies. METHODS: In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed....... The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components....

  1. The ALHAMBRA survey : Estimation of the clustering signal encoded in the cosmic variance

    CERN Document Server

    López-Sanjuan, C; Hernández-Monteagudo, C; Arnalte-Mur, P; Varela, J; Viironen, K; Fernández-Soto, A; Martínez, V J; Alfaro, E; Ascaso, B; del Olmo, A; Díaz-García, L A; Hurtado-Gil, Ll; Moles, M; Molino, A; Perea, J; Pović, M; Aguerri, J A L; Aparicio-Villegas, T; Benítez, N; Broadhurst, T; Cabrera-Caño, J; Castander, F J; Cepa, J; Cerviño, M; Cristóbal-Hornillos, D; Delgado, R M González; Husillos, C; Infante, L; Márquez, I; Masegosa, J; Prada, F; Quintana, J M

    2015-01-01

    The relative cosmic variance ($\sigma_v$) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the $\sigma_v$ measured in the ALHAMBRA survey. We measure the cosmic variance of several galaxy populations selected with $B$-band luminosity at $0.35 \leq z < 1.05$ as the intrinsic dispersion in the number density distribution derived from the 48 ALHAMBRA subfields. We compare the observational $\sigma_v$ with the cosmic variance of the dark matter expected from the theory, $\sigma_{v,\rm dm}$. This provides an estimation of the galaxy bias $b$. The galaxy bias from the cosmic variance is in excellent agreement with the bias estimated by two-point correlation function analysis in ALHAMBRA. This holds for different redshift bins, for red and blue subsamples, and for several ...
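
The count-in-cells estimate of relative cosmic variance described above can be sketched with invented subfield counts: subtract Poisson shot noise from the count variance and compare with an assumed dark-matter cosmic variance to get a bias.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented galaxy counts in 48 subfields: lognormal cosmic density fluctuations
# with Poisson shot noise on top (the survey's real counts differ).
n_fields = 48
density = rng.lognormal(mean=0.0, sigma=0.15, size=n_fields)
counts = rng.poisson(200 * density).astype(float)

mean_n = counts.mean()
var_n = counts.var(ddof=1)

# Count-in-cells estimator: subtract shot noise to isolate the relative
# cosmic variance, sigma_v^2 = (<N^2> - <N>^2 - <N>) / <N>^2.
sigma_v2 = (var_n - mean_n) / mean_n**2
sigma_v = np.sqrt(max(sigma_v2, 0.0))

sigma_v_dm = 0.05   # assumed dark-matter cosmic variance (invented value)
print(f"sigma_v = {sigma_v:.3f}, implied bias b = {sigma_v / sigma_v_dm:.2f}")
```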

  2. Estimation models of variance components for farrowing interval in swine

    Directory of Open Access Journals (Sweden)

    Aderbal Cavalcante Neto

    2009-02-01

    Full Text Available The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested which contained the fixed effects (contemporary group and covariables) and the direct additive genetic and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain as response to selection. The common litter environmental and the maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in the genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.

  3. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
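
The paper's conclusion, that for uncorrelated sampled flow fields the variance of a discharge estimate scales inversely with sampling time, can be sketched as follows; the discharge level and fluctuation variance are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented flow statistics: instantaneous discharge fluctuates about its mean
# with variance sigma2, and successive 1 Hz samples are assumed uncorrelated
# (the regime identified in the paper for ADCP sampling intervals).
q_mean = 300.0   # true mean discharge, m^3/s
sigma2 = 25.0    # variance of the sampled flow field, (m^3/s)^2

def discharge_variance(T_s, dt=1.0):
    """Variance of the T_s-second mean discharge for uncorrelated samples."""
    return sigma2 / (T_s / dt)

# Monte Carlo check of the sigma^2 / N scaling.
T_s = 120
means = [rng.normal(q_mean, np.sqrt(sigma2), int(T_s)).mean() for _ in range(5000)]
print("formula:", discharge_variance(T_s), "| empirical:", np.var(means))
```

Doubling the exposure time halves the variance of the estimate under this assumption, which is the basis for choosing an exposure time that meets a target discharge uncertainty.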

  4. Chromatic visualization of reflectivity variance within hybridized directional OCT images

    Science.gov (United States)

    Makhijani, Vikram S.; Roorda, Austin; Bayabo, Jan Kristine; Tong, Kevin K.; Rivera-Carpio, Carlos A.; Lujan, Brandon J.

    2013-03-01

    This study presents a new method of visualizing hybridized images of retinal spectral domain optical coherence tomography (SDOCT) data comprised of varied directional reflectivity. Due to the varying reflectivity of certain retinal structures relative to angle of incident light, SDOCT images obtained with differing entry positions result in nonequivalent images of corresponding cellular and extracellular structures, especially within layers containing photoreceptor components. Harnessing this property, cross-sectional pathologic and non-pathologic macular images were obtained from multiple pupil entry positions using commercially-available OCT systems, and custom segmentation, alignment, and hybridization algorithms were developed to chromatically visualize the composite variance of reflectivity effects. In these images, strong relative reflectivity from any given direction visualizes as relative intensity of its corresponding color channel. Evident in non-pathologic images was marked enhancement of Henle's fiber layer (HFL) visualization and varying reflectivity patterns of the inner limiting membrane (ILM) and photoreceptor inner/outer segment junctions (IS/OS). Pathologic images displayed similar and additional patterns. Such visualization may allow a more intuitive understanding of structural and physiologic processes in retinal pathologies.

  5. Cosmic variance and the measurement of the local Hubble parameter.

    Science.gov (United States)

    Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel

    2013-06-14

    There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer this tension--strengthened by the recent Planck results--is partially relieved and the concordance of the Standard Model of cosmology increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than that taken into account already. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary.

  6. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing when realized variance is monitored discretely. We also analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that potentially serves as a hedge for the variance swaps traded in BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in just a few exercise prices, as is the case for FX options traded in BM&F. Thus, portfolios were assembled containing variance swaps and their replicating portfolios using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
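
The static replication of (DEMETERFI et al., 1999) prices a variance swap from a strip of out-of-the-money vanillas, K_var = (2/T) Σ (ΔK/K²) e^{rT} Q(K); under a flat Black-Scholes volatility the replicated strike should recover σ², which the sketch below checks (all market parameters are invented).

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(S, K, T, r, sigma, call):
    """Black-Scholes price of a European vanilla option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if call:
        return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# Invented market: flat 20% volatility, zero rate, 6-month swap.
S0, r, T, sigma = 100.0, 0.0, 0.5, 0.20
dK = 1.0
kvar = 0.0
for K in [40.0 + i * dK for i in range(121)]:        # strike strip 40..160
    q = bs_price(S0, K, T, r, sigma, call=(K > S0))  # OTM option at each strike
    kvar += (2.0 / T) * (dK / K**2) * math.exp(r * T) * q

print(f"replicated variance strike: {kvar:.5f} (sigma^2 = {sigma**2:.5f})")
```

The hedging problem studied in the article is precisely what happens when the dense strike strip above is replaced by the few liquid exercise prices actually available.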

  7. Identifiability of Gaussian Structural Equation Models with Same Error Variances

    CERN Document Server

    Peters, Jonas

    2012-01-01

    We consider structural equation models (SEMs) in which variables can be written as a function of their parents and noise terms (the latter are assumed to be jointly independent). Corresponding to each SEM, there is a directed acyclic graph (DAG) G_0 describing the relationships between the variables. In Gaussian SEMs with linear functions, the graph can be identified from the joint distribution only up to Markov equivalence classes (assuming faithfulness). It has been shown, however, that this constitutes an exceptional case. In the case of linear functions and non-Gaussian noise, the DAG becomes identifiable. Apart from a few exceptions, the same is true for non-linear functions and arbitrarily distributed additive noise. In this work, we prove identifiability for a third modification: if we require all noise variables to have the same variances, again, the DAG can be recovered from the joint Gaussian distribution. Our result can be applied to the problem of causal inference. If the data follow a Gaussian SEM w...

  8. Analysis of health trait data from on-farm computer systems in the U.S. I: Pedigree and genomic variance components estimation

    Science.gov (United States)

    With an emphasis on increasing profit through increased dairy cow production, a negative relationship with fitness traits such as fertility and health traits has become apparent. Decreased cow health can impact herd profitability through increased rates of involuntary culling and decreased or lost m...

  9. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics

    Science.gov (United States)

    Shinzato, Takashi

    2016-12-01

    The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically replica analysis. We define two characteristic quantities of an optimal portfolio, namely minimal investment risk and investment concentration, in order to solve the portfolio optimization problem, and analytically determine their asymptotic behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validates our proposed method.
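
For intuition, the two quantities the abstract defines can be computed in closed form for independent assets under a simple budget constraint. This is a simplified, non-replica sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def min_risk_portfolio(variances):
    """Minimal investment risk and investment concentration for
    independent assets with nonidentical return variances, under the
    budget constraint sum(w) = 1. Optimal weights are proportional to
    inverse variances; concentration q = N * sum(w_i^2) equals 1 only
    for the equal-weight portfolio."""
    inv = 1.0 / np.asarray(variances, dtype=float)
    w = inv / inv.sum()                    # w_i ~ 1 / sigma_i^2
    risk = 1.0 / inv.sum()                 # minimal risk w' C w
    concentration = len(w) * np.sum(w**2)  # >= 1, grows with imbalance
    return w, risk, concentration
```

When variances are identical the weights are uniform and the concentration equals 1; heterogeneous variances tilt the portfolio and raise the concentration, which is the qualitative effect the paper quantifies asymptotically.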

  10. Variance Entropy: A Method for Characterizing Perceptual Awareness of Visual Stimulus

    Directory of Open Access Journals (Sweden)

    Meng Hu

    2012-01-01

    Full Text Available Entropy, as a complexity measure, is a fundamental concept for time series analysis. Among many methods, sample entropy (SampEn) has emerged as a robust, powerful measure for quantifying the complexity of time series due to its insensitivity to data length and its immunity to noise. Despite its popular use, SampEn is based on standardized data from which the variance is routinely discarded, although the variance may provide additional information for discriminant analysis. Here we designed a simple, yet efficient, complexity measure, namely variance entropy (VarEn), to integrate SampEn with variance to achieve effective discriminant analysis. We applied VarEn to analyze local field potential (LFP) data collected from the visual cortex of a macaque monkey performing a generalized flash suppression task, in which a visual stimulus was dissociated from perceptual experience, to study the neural complexity of perceptual awareness. We evaluated the performance of VarEn in comparison with SampEn on LFP, at both single and multiple scales, in discriminating different perceptual conditions. Our results showed that perceptual visibility could be differentiated by VarEn, with significantly better discriminative performance than SampEn. Our findings demonstrate that VarEn is a sensitive measure of perceptual visibility, and thus can be used to probe perceptual awareness of a stimulus.
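
The SampEn building block that VarEn extends can be sketched with a standard template-matching implementation (the exact way VarEn combines SampEn with variance is not specified in the abstract, so only SampEn is shown here):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance, excluding
    self-matches) and A counts the same for length m+1. The tolerance r
    is expressed as a fraction of the series standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(mm):
        n = len(x) - mm + 1
        templ = np.array([x[i:i + mm] for i in range(n)])
        # pairwise Chebyshev distances between all templates
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return np.sum(d <= tol) - n   # subtract diagonal self-matches
    B, A = count(m), count(m + 1)
    return -np.log(A / B)
```

A regular signal (e.g. a sine wave) yields a lower SampEn than white noise, which is the discriminative behavior the abstract builds on.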

  11. THE VARIANCE AND TREND OF INTEREST RATE – CASE OF COMMERCIAL BANKS IN KOSOVO

    Directory of Open Access Journals (Sweden)

    Fidane Spahija

    2015-09-01

    Full Text Available Today’s debate on the interest rate is characterized by three key issues: the interest rate as a phenomenon, the interest rate as a product of factors (dependent variable), and the interest rate as a policy instrument (independent variable). In this article, the variation in interest rates, as the dependent variable, is examined through two statistics: the variance and the trend. The interest rates include the price of loans and deposits. The analysis of interest rates on deposits and loans is conducted for non-financial corporations and family economies. This study carries out a statistical analysis to highlight the variance and trends of interest rates for the period 2004-2013, for deposits and loans in commercial banks in Kosovo. The interest rate is observed at various levels: is it high, medium, or low? Does it trend upward, remain constant, or decline? The trend shows whether commercial banks maintain, reduce, or increase the interest rate in response to the policy followed by the Central Bank of Kosovo. The data obtained will help to determine the impact of the interest rate on the service sector, investment, consumption, and unemployment.

  12. Estimation of Variance Components for Litter Size in the First and Later Parities in Improved Jezersko-Solcava Sheep

    Directory of Open Access Journals (Sweden)

    Dubravko Škorput

    2011-12-01

    Full Text Available The aim of this study was to estimate variance components for litter size in Improved Jezersko-Solcava sheep. The analysis involved 66,082 records from 12,969 animals for the number of lambs born in all parities (BA), the first parity (B1), and later parities (B2+). The fixed part of the model contained the effects of season and age at lambing within parity. The random part of the model contained the effects of herd, a permanent environmental effect (for repeatability models), and the additive genetic effect. Variance components were estimated using the restricted maximum likelihood method. The average number of lambs born was 1.36 in the first parity, while the average in later parities was 1.59, leading also to an about 20% higher variance. Several models were tested in order to accommodate the markedly different variability in litter size between the first and later parities: single-trait models (for BA, B1, and B2+), a two-trait model (for B1 and B2+), and a single-trait model with heterogeneous residual variance (for BA). Comparison of variance components between models showed the largest differences for the residual variance, resulting in a parsimonious fit for a single-trait model for BA with heterogeneous residual variance. Correlations among breeding values from different models were high and showed the remarkable performance of the standard single-trait repeatability model for BA.

  13. MAGNETIC VARIANCES AND PITCH-ANGLE SCATTERING TIMES UPSTREAM OF INTERPLANETARY SHOCKS

    Energy Technology Data Exchange (ETDEWEB)

    Perri, Silvia; Zimbardo, Gaetano, E-mail: silvia.perri@fis.unical.it, E-mail: gaetano.zimbardo@fis.unical.it [Dipartimento di Fisica, Universita della Calabria, Ponte P. Bucci, Cubo 31C, I-87036 Arcavacata di Rende (Italy)

    2012-07-20

    Recent observations of power-law time profiles of energetic particles accelerated at interplanetary shocks have shown the possibility of anomalous, superdiffusive transport of energetic particles throughout the heliosphere. Those findings call for an accurate investigation of the magnetic field fluctuation properties at the resonance frequencies upstream of the shock fronts. Normalized magnetic field variances, indeed, play a crucial role in determining the pitch-angle scattering times and hence the transport regime. The present analysis investigates the time behavior of the normalized variances of the magnetic field fluctuations, measured by the Ulysses spacecraft upstream of corotating interaction region (CIR) shocks, for those events which exhibit superdiffusion of energetic electrons. We find a quasi-constant value for the normalized magnetic field variances from about 10 hr to 100 hr from the shock front. This rules out the presence of a varying diffusion coefficient and confirms the possibility of superdiffusion for energetic electrons. A statistical analysis of the scattering times obtained from the magnetic fluctuations upstream of the CIR events has also been performed; the resulting power-law distributions of scattering times imply long-range correlations and weak pitch-angle scattering, and the power-law slopes are in qualitative agreement with superdiffusive processes described by a Levy random walk.

  14. Measurement of Allan variance and phase noise at fractions of a millihertz

    Science.gov (United States)

    Conroy, Bruce L.; Le, Duc

    1990-01-01

    Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.
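
As a point of reference for the quantity being measured, the Allan variance of fractional-frequency samples can be estimated in a few lines. This is a minimal non-overlapped estimator, not the instrumentation system described in the article:

```python
import numpy as np

def allan_variance(y, m, tau0=1.0):
    """Non-overlapped Allan variance of fractional-frequency samples y at
    averaging time tau = m * tau0: one half the mean squared difference
    of adjacent m-sample averages."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // m) * m
    means = y[:n].reshape(-1, m).mean(axis=1)   # adjacent tau-averages
    return 0.5 * np.mean(np.diff(means) ** 2)
```

For white frequency noise of variance σ², the estimator returns approximately σ²/m at averaging factor m, the familiar 1/τ behaviour of white FM noise on an Allan variance plot.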

  15. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pedersen, David Sloth; Pisani, Camilla

    2016-01-01

    of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...... of the implied volatility of variance smile -- some clearly at odds with the upward-sloping volatility skew observed in variance markets....

  16. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...... of the implied volatility of variance smile -- some clearly at odds with the upward-sloping volatility skew observed in variance markets....

  17. Comprehensive Study on the Estimation of the Variance Components of Traverse Nets

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper advances a new simplified formula for estimating variance components, sums up the basic rule for calculating the weights of observed values, describes an iterative method using the increments of weights when estimating the variance components of traverse nets, advances the characteristic-roots method for estimating the variance components of traverse nets, and presents a practical method for reducing two real symmetric matrices to diagonal form.

  18. Pipeline to assess the greatest source of technical variance in quantitative proteomics using metabolic labelling.

    Science.gov (United States)

    Russell, Matthew R; Lilley, Kathryn S

    2012-12-21

    The biological variance in protein expression of interest to biologists can only be accessed if the technical variance of the protein quantification method is low compared with the biological variance. Technical variance depends on the protocol employed within a quantitative proteomics experiment and accumulates with every additional step. The magnitude of the additional variance incurred by each step of a protocol should be determined to enable the design of experiments maximally sensitive to differential protein expression. Metabolic labelling techniques for MS-based quantitative proteomics enable labelled and unlabelled samples to be combined at the tissue level. It has been widely assumed, although not yet empirically verified, that early combination of samples minimises technical variance in relative quantification. This study presents a pipeline to determine the variance incurred at each stage of a common quantitative proteomics protocol involving metabolic labelling. We apply this pipeline to determine whether early combination of samples in a protocol leads to a significant reduction in experimental variance. We also identify which stage within the protocol is associated with maximum variance. This provides a blueprint by which the variance associated with each stage of any protocol can be dissected and utilised to influence optimal experimental design.

  19. Components of variance and heritability of resistance to important fungal diseases agents in grapevine

    Directory of Open Access Journals (Sweden)

    Nikolić Dragan

    2006-01-01

    Full Text Available In four interspecies crossing combinations of grapevine (Seedling 108 x Muscat Hamburg, Muscat Hamburg x Seedling 108, S.V.I8315 x Muscat Hamburg, and Muscat Hamburg x S.V.I2375), resistance to the agents of important fungal diseases (Plasmopara viticola and Botrytis cinerea) was examined over a three-year period. Based on the results of the analysis of variance for the investigated characteristics, the components of variance, the coefficients of genetic and phenotypic variation, and the coefficient of heritability in the broad sense were calculated. It was established that for both characteristics and in all crossing combinations, genetic variance accounted for the largest part of the total variability. The lowest coefficients of genetic and phenotypic variation were established for both properties in the crossing combination Seedling 108 x Muscat Hamburg. The highest coefficients of genetic and phenotypic variation were determined for leaf resistance to Plasmopara viticola in the crossing combination Muscat Hamburg x S.V.I2375, and for bunch resistance to Botrytis cinerea in the crossing combination Muscat Hamburg x Seedling 108. Across all investigated crossing combinations, the coefficient of heritability ranged from 87.23% to 94.88% for leaf resistance to Plasmopara viticola, and from 88.04% to 93.32% for bunch resistance to Botrytis cinerea.
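
The genetic parameters named in the abstract follow directly from the variance components. A hedged sketch with illustrative names (the paper's actual estimation procedure is not reproduced here):

```python
def genetic_parameters(var_genetic, var_env, trait_mean):
    """Broad-sense heritability and coefficients of genetic/phenotypic
    variation computed from variance components: phenotypic variance is
    the sum of genetic and environmental components, heritability is the
    genetic share, and each CV scales the component's SD by the mean."""
    var_phen = var_genetic + var_env
    h2 = 100.0 * var_genetic / var_phen            # broad-sense H^2, %
    gcv = 100.0 * var_genetic ** 0.5 / trait_mean  # genetic CV, %
    pcv = 100.0 * var_phen ** 0.5 / trait_mean     # phenotypic CV, %
    return h2, gcv, pcv
```

A large genetic share of total variability, as reported above, translates into a high H² and a GCV close to the PCV.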

  20. Analysis of Xinjiang Tourism Investment Environment Based on the Mean-Squared Deviation Decision Method

    Institute of Scientific and Technical Information of China (English)

    邓寓心; 李晓东

    2015-01-01

    Based on a theoretical analysis of the Xinjiang tourism investment environment, 26 indicators were selected to establish an evaluation index system covering five aspects: tourism market environment, natural environment, economic environment, social-cultural and service environment, and infrastructure environment. The mean-squared deviation decision method was used to standardize the collected data and perform the calculations, yielding an analysis and evaluation of the Xinjiang tourism investment environment. By comparing the comprehensive evaluation values and the subsystem evaluation values of the fifteen prefectures and cities under the jurisdiction of Xinjiang, differences between their tourism investment environments are analyzed, offering a decision basis for investors.

  1. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Directory of Open Access Journals (Sweden)

    Daniel Bartz

    Full Text Available Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock markets we show that our proposed method leads to improved portfolio allocation.

  2. Patient population management: taking the leap from variance analysis to outcomes measurement.

    Science.gov (United States)

    Allen, K M

    1998-01-01

    Case managers today at BCHS have a somewhat different role than at the onset of the Collaborative Practice Model. They are seen throughout the organization as: Leaders/participants on cross-functional teams. Systems change agents. Integrating/merging with quality services and utilization management. Outcomes managers. One of the major cross-functional teams is in the process of designing a Care Coordinator role. These individuals will, as one of their functions, assume responsibility for daily patient care management activities. A variance tracking program has come into the Utilization Management (UM) department as part of a software package purchased to automate UM work activities. This variance program could potentially be used by the new care coordinators as the role develops. The case managers are beginning to use a Decision Support software, (Transition Systems Inc.) in the collection of data that is based on a cost accounting system and linked to clinical events. Other clinical outcomes data bases are now being used by the case manager to help with the collection and measurement of outcomes information. Hoshin planning will continue to be a framework for defining and setting the targets for clinical and financial improvements throughout the organization. Case managers will continue to be involved in many of these system-wide initiatives. In the words of Galileo, 1579, "You need to count what's countable, measure what's measurable, and what's not measurable, make measurable."

  3. Inferring changes in ENSO amplitude from the variance of proxy records

    OpenAIRE

    Russon, Tom; Tudhope, Alexander; Collins, Mat; Hegerl, Gabi

    2015-01-01

    One common approach to investigating past changes in ENSO amplitude is through quantifying the variance of ENSO-influenced proxy records. However, a component of the variance of all such proxies will reflect influences that are unrelated to the instrumental climatic indices from which modern ENSO amplitudes are defined. The unrelated component of proxy variance introduces a fundamental source of uncertainty to all such constraints on past ENSO amplitudes. Based on a simple parametric approach...

  4. Performance of medical students admitted via regular and admission-variance routes.

    Science.gov (United States)

    Simon, H J; Covell, J W

    1975-03-01

    Twenty-three medical students from socioeconomically disadvantaged backgrounds and drawn chiefly from Chicano and black racial minority groups were granted admission variances to the University of California, San Diego, School of Medicine in 1970 and 1971. This group was compared with 21 regularly admitted junior and senior medical students with respect to: specific admissions criteria (Medical College Admission Test scores, grade-point average, and college rating score); scores on Part I of the examinations of the National Board of Medical Examiners (NBME); and performance in at least two of the medicine, surgery, and pediatrics clerkships. The two populations differed markedly on admission. The usual screen would have precluded admission of all but one of the students granted variances. At the end of the second year, average NBME Part I scores again identified two distinct populations, but the average scores of both groups were clearly above the minimum passing level. The groups still differ on analysis of their aggregate performances on the clinical services, but the difference following completion of two of three major clinical clerkships has become the distinction between a "slightly above average" level of performance for the regularly admitted students and an "average" level for students admitted on variances.

  5. How the Weak Variance of Momentum Can Turn Out to be Negative

    OpenAIRE

    2015-01-01

    Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wign...

  6. Effects of noise variance model on optimal feedback design and actuator placement

    Science.gov (United States)

    Ruan, Mifang; Choudhury, Ajit K.

    1994-01-01

    In optimal placement of actuators for stochastic systems, it is commonly assumed that the actuator noise variances are not related to the feedback matrix and the actuator locations. In this paper, we will discuss the limitation of that assumption and develop a more practical noise variance model. Various properties associated with optimal actuator placement under the assumption of this noise variance model are discovered through the analytical study of a second order system.

  7. The Multi-allelic Genetic Architecture of a Variance-Heterogeneity Locus for Molybdenum Concentration in Leaves Acts as a Source of Unexplained Additive Genetic Variance.

    Directory of Open Access Journals (Sweden)

    Simon K G Forsberg

    2015-11-01

    Full Text Available Genome-wide association (GWA) analyses have generally been used to detect individual loci contributing to the phenotypic diversity in a population by the effects of these loci on the trait mean. More rarely, loci have also been detected based on variance differences between genotypes. Several hypotheses have been proposed to explain the possible genetic mechanisms leading to such variance signals. However, little is known about what causes these signals, or whether this genetic variance-heterogeneity reflects mechanisms of importance in natural populations. Previously, we identified a variance-heterogeneity GWA (vGWA) signal for leaf molybdenum concentrations in Arabidopsis thaliana. Here, fine-mapping of this association reveals that the vGWA emerges from the effects of three independent genetic polymorphisms, all of which are in strong LD with the markers displaying the genetic variance-heterogeneity. By revealing the genetic architecture underlying this vGWA signal, we uncovered the molecular source of a significant amount of hidden additive genetic variation or "missing heritability". Two of the three polymorphisms underlying the genetic variance-heterogeneity are promoter variants for Molybdate transporter 1 (MOT1), and the third a variant located ~25 kb downstream of this gene. A fourth independent association was also detected ~600 kb upstream of MOT1. Use of a T-DNA knockout allele highlights Copper Transporter 6 (COPT6; AT2G26975) as a strong candidate gene for this association. Our results show that an extended LD across a complex locus including multiple functional alleles can lead to a variance-heterogeneity between genotypes in natural populations. Further, they provide novel insights into the genetic regulation of ion homeostasis in A. thaliana, and empirically confirm that variance-heterogeneity based GWA methods are a valuable tool to detect novel associations of biological importance in natural populations.

  8. The pricing of long and short run variance and correlation risk in stock returns

    OpenAIRE

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk is priced in the cross-section because shocks to average stock volatility and correlation are priced. Both long and short run volatility and correlation factors have explanatory power for returns....

  9. Models of Postural Control: Shared Variance in Joint and COM Motions.

    Science.gov (United States)

    Kilby, Melissa C; Molenaar, Peter C M; Newell, Karl M

    2015-01-01

    This paper investigated the organization of the postural control system in human upright stance. To this end, the shared variance between joint and 3D total body center of mass (COM) motions was analyzed using multivariate canonical correlation analysis (CCA). The CCA was performed as a function of established models of postural control that varied in their joint degrees of freedom (DOF), namely, an inverted pendulum ankle model (2DOF), an ankle-hip model (4DOF), an ankle-knee-hip model (5DOF), and an ankle-knee-hip-neck model (7DOF). Healthy young adults performed various postural tasks (two-leg and one-leg quiet stances, voluntary AP and ML sway) on foam and rigid surfaces of support. Based on CCA model selection procedures, the amount of shared variance between joint and 3D COM motions, and the cross-loading patterns, we provide direct evidence of the contribution of multi-DOF postural control mechanisms to human balance. The direct model fitting of CCA showed that incrementing the DOFs in the model through to 7DOF was associated with progressively enhanced shared variance with COM motion. In the 7DOF model, the first canonical function revealed more active involvement of all joints during the more challenging one-leg stances and dynamic posture tasks. Furthermore, the shared variance was enhanced during the dynamic posture conditions, consistent with a reduction of dimension. This set of outcomes shows directly the degeneracy of multivariate joint regulation in postural control that is influenced by stance and surface of support conditions.
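
The canonical correlations at the heart of the CCA described above are the singular values of the product of orthonormal bases of the two centered data sets. A minimal sketch (variable names are illustrative, not from the paper):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two column sets (e.g. joint-angle
    time series vs. 3D COM coordinates): center each block, take
    orthonormal bases via QR, and return the singular values of Qx'Qy,
    which lie in [0, 1]."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)
```

When one block contains an exact linear combination of the other (as when a low-DOF model fully explains COM motion), the leading canonical correlation approaches 1; weaker shared variance yields smaller values.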

  10. Variance in population firing rate as a measure of slow time-scale correlation

    Directory of Open Access Journals (Sweden)

    Adam C. Snyder

    2013-12-01

    Full Text Available Correlated variability in the spiking responses of pairs of neurons, also known as spike count correlation, is a key indicator of functional connectivity and a critical factor in population coding. Underscoring the importance of correlation as a measure for cognitive neuroscience research is the observation that spike count correlations are not fixed, but are rather modulated by perceptual and cognitive context. Yet while this context fluctuates from moment to moment, correlation must be calculated over multiple trials. This property undermines its utility as a dependent measure for investigations of cognitive processes which fluctuate on a trial-to-trial basis, such as selective attention. A measure of functional connectivity that can be assayed on a moment-to-moment basis is needed to investigate the single-trial dynamics of populations of spiking neurons. Here, we introduce the measure of population variance in normalized firing rate for this goal. We show using mathematical analysis, computer simulations and in vivo data how population variance in normalized firing rate is inversely related to the latent correlation in the population, and how this measure can be used to reliably classify trials from different typical correlation conditions, even when firing rate is held constant. We discuss the potential advantages for using population variance in normalized firing rate as a dependent measure for both basic and applied neuroscience research.
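
The inverse relation between latent correlation and across-neuron variance can be illustrated with a shared-plus-private Gaussian model (a toy simulation, not the authors' analysis): when more of each neuron's variability is shared, less remains to differ across neurons on a single trial.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_population_variance(rho, n_neurons=100, n_trials=500):
    """Simulate normalized rates with pairwise correlation rho as a
    shared + private Gaussian mixture with unit total variance, and
    return the across-neuron variance averaged over trials. Its
    expectation is 1 - rho: higher correlation, lower population
    variance."""
    shared = rng.standard_normal((n_trials, 1))
    private = rng.standard_normal((n_trials, n_neurons))
    rates = np.sqrt(rho) * shared + np.sqrt(1 - rho) * private
    return rates.var(axis=1, ddof=1).mean()
```

Because the across-neuron variance is computable on every single trial, it offers the moment-to-moment proxy for correlation that the abstract advocates.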

  11. Alternatives to F-Test in One Way ANOVA in case of heterogeneity of variances (a simulation study

    Directory of Open Access Journals (Sweden)

    Karl Moder

    2010-12-01

    Full Text Available Several articles deal with the effects of inhomogeneous variances in one-way analysis of variance (ANOVA). A very early investigation of this topic was done by Box (1954). He supposed that in balanced designs with moderate heterogeneity of variances, deviations of the empirical type I error rate (the realized α based on experiments) from the nominal one (the predefined α for H0) are small. Similar conclusions are drawn by Wellek (2003). For less moderate heterogeneity (e.g. σ1:σ2:... = 3:1:...), Moder (2007) showed that the empirical type I error rate is far beyond the nominal one, even with balanced designs. In unbalanced designs the difficulties become bigger. Several attempts have been made to get around this problem. One proposal is to use a more stringent α level (e.g. 2.5% instead of 5%) (Keppel & Wickens, 2004). Another recommended remedy is to transform the original scores by square root, log, and other variance-reducing functions (Keppel & Wickens, 2004; Heiberger & Holland, 2004). Some authors suggest the use of rank-based alternatives to the F-test in analysis of variance (Vargha & Delaney, 1998). Only a few articles deal with two- or multi-factorial designs. There is some evidence that in a two- or multi-factorial design the type I error rate is approximately met if the number of factor levels tends to infinity for a certain factor while the number of levels is fixed for the other factors (Akritas & S., 2000; Bathke, 2004). The goal of this article is to find an appropriate location test in a one-way analysis of variance situation with inhomogeneous variances, for balanced and unbalanced designs, based on a simulation study.
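
One widely used location test for exactly this situation, though not named in the abstract, is Welch's heteroscedasticity-robust one-way test; a minimal sketch of its statistic:

```python
import numpy as np

def welch_anova_statistic(groups):
    """Welch's one-way test for unequal group variances. Returns
    (W, df1, df2); under H0, W is approximately F-distributed with
    df1 = k - 1 and the Welch-Satterthwaite df2."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                  # precision weights
    mw = np.sum(w * m) / np.sum(w)             # weighted grand mean
    num = np.sum(w * (m - mw) ** 2) / (k - 1)
    h = np.sum((1 - w / w.sum()) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) / (k**2 - 1) * h
    return num / den, k - 1, (k**2 - 1) / (3 * h)
```

Unlike the classical F-test, the weighting by n_i / s_i² keeps the type I error rate close to the nominal α when variances differ, in both balanced and unbalanced designs.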

  12. Algorithm of Text Vector Feature Mining Based on Multi-Factor Analysis of Variance

    Institute of Scientific and Technical Information of China (English)

    谭海中; 何波

    2015-01-01

    Text feature vector mining is applied in the fields of information resource organization and management, and has great application value in data mining. Traditional text feature vector mining based on the K-means algorithm achieves poor accuracy. A new feature mining algorithm for text vectors based on multi-factor analysis of variance is proposed. Multi-factor analysis of variance is used to obtain mining rules from multiple corpora; combined with an ant colony algorithm and an ant-colony fitness-probability regular training transfer rule, the maximum probability of the effective features of the data sets obtained in the most recent step of the population evolution is determined. The initial K-means cluster centers are selected on the basis of an optimal partition: the sample data are first partitioned, and the initial cluster centers are then determined according to the distribution characteristics of the samples, improving text feature mining performance. Simulation results show that this algorithm improves the clustering of text feature vectors and thereby the performance of feature mining; the data features achieve a higher recall rate and detection rate with less time consumed, giving the method great application value in areas such as data mining.
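
The partition-based seeding step can be read as follows (a hedged interpretation of the abstract's description, not the authors' exact algorithm): partition the samples along their main direction of spread, then take each partition's mean as an initial cluster center.

```python
import numpy as np

def partition_initial_centers(X, k):
    """Partition-based K-means seeding: order points by their projection
    onto the first principal direction of the centered data, split the
    ordered points into k equal partitions, and use each partition's
    mean as an initial center."""
    X = np.asarray(X, dtype=float)
    d = X - X.mean(axis=0)
    pc = np.linalg.svd(d, full_matrices=False)[2][0]  # first principal axis
    order = np.argsort(d @ pc)
    parts = np.array_split(X[order], k)
    return np.array([p.mean(axis=0) for p in parts])
```

Seeding from the data's own distribution in this way tends to place initial centers near distinct modes, which is the effect on clustering quality the abstract claims.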

  13. Additivity of the Variance Gamma Distribution and Its Application in Financial Analysis

    Institute of Scientific and Technical Information of China (English)

    刘懿祺; 唐子健; 唐亚勇

    2013-01-01

    An important property in modern financial data analysis is that financial return data follow the same class of distribution over a longer term; together with the other stylized characteristics of returns (asymmetry, excess kurtosis, and volatility clustering), this property plays an important role. This paper investigates the additivity of the variance Gamma distribution and its application in financial data analysis, and uses it to verify the property above. A Bayesian method is employed to estimate the parameters of the distribution, and the result of the Bayesian analysis is compared with that of the ghyp package in the R software. Extensive convergence diagnostics for the chain obtained from BUGS are performed using the CODA software.

  14. Prediction of breeding values and selection responses with genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2007-01-01

    There is empirical evidence that genotypes differ not only in mean, but also in environmental variance of the traits they affect. Genetic heterogeneity of environmental variance may indicate genetic differences in environmental sensitivity. The aim of this study was to develop a general framework fo

  15. On the multiplicity of option prices under CEV with positive elasticity of variance

    NARCIS (Netherlands)

    Veestraeten, D.

    2017-01-01

    The discounted stock price under the Constant Elasticity of Variance model is not a martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the price for which put-call parity holds and the price that represents the lowest cost of replic

  16. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    Science.gov (United States)

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it has attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, recast as the Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the use of the MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence; in this field, it demonstrated accuracy and sensitivity superior to the most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized, and its adaptation as TVAR for the specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview of their actual performance in terms of MAVAR. Moreover, applications of the MAVAR to network traffic analysis are surveyed, and its superior accuracy in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.
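
For reference, the MAVAR at averaging time tau = n*tau0 can be computed from phase samples with its standard textbook definition; this is a generic sketch of the definition, not code from the paper.

```python
def mod_allan_var(x, n, tau0=1.0):
    """Modified Allan variance at tau = n*tau0 from phase samples x,
    using the standard definition based on averaged second differences."""
    N = len(x)
    if N < 3 * n:
        raise ValueError("need at least 3*n phase samples")
    total = 0.0
    for j in range(N - 3 * n + 1):
        # second difference of the phase, averaged over n adjacent samples
        inner = sum(x[i + 2 * n] - 2 * x[i + n] + x[i] for i in range(j, j + n))
        total += inner * inner
    return total / (2.0 * n ** 4 * tau0 ** 2 * (N - 3 * n + 1))

# A linear phase ramp (pure frequency offset) has zero second difference,
# hence zero MAVAR at any tau
print(mod_allan_var([2.0 * i for i in range(30)], n=2))  # → 0.0
```

The extra averaging over n samples (the `inner` sum) is what distinguishes the MAVAR from the classical Allan variance and gives it its power in separating white from flicker phase noise.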

  17. On the multiplicity of option prices under CEV with positive elasticity of variance

    NARCIS (Netherlands)

    Veestraeten, D.

    2014-01-01

    The discounted stock price under the Constant Elasticity of Variance (CEV) model is a strict local martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the risk-neutral call price and an alternative price that is linked to the unique

  18. Deflation as a Method of Variance Reduction for Estimating the Trace of a Matrix Inverse

    CERN Document Server

    Gambhir, Arjun Singh; Orginos, Kostas

    2016-01-01

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can b...

  19. Yield response of winter wheat cultivars to environments modeled by different variance-covariance structures in linear mixed models

    Energy Technology Data Exchange (ETDEWEB)

    Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.

    2016-11-01

    The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means, resulting in incorrect conclusions. In this work we focused on the adaptive response of cultivars to the environments modeled by LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations, during three growing seasons (2008/2009-2010/2011) from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivar adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars' response to environments, and it can be successfully used for METs data after determining the optimal number of components for each dataset. (Author)

  20. Exploring Hydrological Flow Paths in Conceptual Catchment Models using Variance-based Sensitivity Analysis

    Science.gov (United States)

    Mockler, E. M.; O'Loughlin, F.; Bruen, M. P.

    2013-12-01

    Conceptual rainfall runoff (CRR) models aim to capture the dominant hydrological processes in a catchment in order to predict the flows in a river. Most flood forecasting models focus on predicting total outflows from a catchment and often perform well without the correct distribution between individual pathways. However, modelling of water flow paths within a catchment, rather than its overall response, is specifically needed to investigate the physical and chemical transport of matter through the various elements of the hydrological cycle. Focus is increasingly turning to accurately quantifying the internal movement of water within these models to investigate whether the simulated processes contributing to the total flows are realistic, in the expectation of generating more robust models. Parameter regionalisation is required if such models are to be widely used, particularly in ungauged catchments. However, most regionalisation studies to date have typically consisted of calibrations and correlations of parameters with catchment characteristics, or some variations of this. In order for a priori parameter estimation in this manner to be possible, a model must be parametrically parsimonious while still capturing the dominant processes of the catchment. The presence of parameter interactions within most CRR model structures can make parameter prediction in ungauged basins very difficult, as the functional role of the parameter within the model may not be uniquely identifiable. We use a variance-based sensitivity analysis method to investigate parameter sensitivities and interactions in the global parameter space of three CRR models, simulating a set of 30 Irish catchments within a variety of hydrological settings over a 16-year period. The exploration of sensitivities of internal flow path partitioning was a specific focus, and correlations between catchment characteristics and parameter sensitivities were also investigated to assist in evaluating model performances.
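
A minimal variance-based sensitivity analysis of the kind used here can be sketched with a pick-freeze Monte-Carlo estimator of the first-order Sobol' indices. The estimator choice (Jansen), sample size, and additive test model below are assumptions for illustration, not the study's actual setup.

```python
import random

def sobol_first_order(f, d, n=20000, seed=1):
    """Pick-freeze Monte-Carlo estimate of the first-order Sobol indices
    S_i = V[E(Y|X_i)] / V(Y), via the Jansen estimator, for a model f
    whose d inputs are independent and uniform on [0, 1]."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    ally = yA + yB
    mean = sum(ally) / (2 * n)
    var = sum((y - mean) ** 2 for y in ally) / (2 * n)
    S = []
    for i in range(d):
        # A_B^(i): rows of A with the i-th coordinate taken from B
        yABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(1.0 - sum((p - q) ** 2 for p, q in zip(yB, yABi)) / (2 * n * var))
    return S

# Additive test model Y = X1 + 2*X2: analytically S1 = 0.2 and S2 = 0.8
S = sobol_first_order(lambda x: x[0] + 2 * x[1], d=2)
print(S)
```

Indices near 1 mark parameters that control the output variance on their own; the gap between the sum of first-order indices and 1 flags the parameter interactions the abstract warns about.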

  1. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik;

    2014-01-01

    Stochastic linear systems arise in a large number of control applications. This paper presents a mean-variance criterion for economic model predictive control (EMPC) of such systems. The system operating cost and its variance is approximated based on a Monte-Carlo approach. Using convex relaxation...
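
The Monte-Carlo approximation of the operating cost and its variance can be illustrated on a toy scalar system. The dynamics, economics, and weighting below are hypothetical stand-ins, not the paper's EMPC formulation (which additionally uses a convex relaxation).

```python
import random

def mc_mean_variance_cost(u, n_scenarios=2000, lam=0.5, seed=3):
    """Monte-Carlo approximation of a mean-variance objective
    E[cost] + lam * Var[cost] over an input sequence u, for the toy
    scalar system x+ = a*x + b*u + w with Gaussian process noise w."""
    rng = random.Random(seed)
    a, b, price = 0.9, 1.0, 2.0   # illustrative system and economics
    costs = []
    for _ in range(n_scenarios):
        x, cost = 1.0, 0.0
        for uk in u:
            cost += price * uk + x * x   # economic input cost plus state penalty
            x = a * x + b * uk + rng.gauss(0.0, 0.1)
        costs.append(cost)
    mean = sum(costs) / n_scenarios
    var = sum((c - mean) ** 2 for c in costs) / n_scenarios
    return mean + lam * var

# Compare two candidate input sequences; the lower objective is preferred
print(mc_mean_variance_cost([0.0, 0.0, 0.0]),
      mc_mean_variance_cost([-0.5, -0.3, -0.1]))
```

The `lam` weight trades expected cost against cost variability, which is the essence of the mean-variance criterion; an EMPC controller would minimize this objective over `u` at every sampling instant.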

  2. Impact of time-inhomogeneous jumps and leverage type effects on returns and realised variances

    DEFF Research Database (Denmark)

    Veraart, Almut

    This paper studies the effect of time-inhomogeneous jumps and leverage type effects on realised variance calculations when the logarithmic asset price is given by a Lévy-driven stochastic volatility model. In such a model, the realised variance is an inconsistent estimator of the integrated...

  3. A FORTRAN program for computing the exact variance of weighted kappa.

    Science.gov (United States)

    Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E

    2005-10-01

    An algorithm and associated FORTRAN program are provided for the exact variance of weighted kappa. Program VARKAP provides the weighted kappa test statistic, the exact variance of weighted kappa, a Z score, one-sided lower- and upper-tail N(0,1) probability values, and the two-tail N(0,1) probability value.
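
The weighted kappa statistic itself is straightforward to compute from a k x k contingency table; a sketch follows, with standard linear or quadratic agreement weights. The exact variance computed by program VARKAP is not reproduced here.

```python
def weighted_kappa(table, weights="linear"):
    """Weighted kappa from a k x k contingency table of counts,
    with agreement weights w_ij = 1 - |i-j|/(k-1) (linear) or
    w_ij = 1 - (|i-j|/(k-1))^2 (quadratic)."""
    k = len(table)
    n = sum(sum(row) for row in table)
    row_m = [sum(row) / n for row in table]
    col_m = [sum(table[r][c] for r in range(k)) / n for c in range(k)]
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return 1 - (d * d if weights == "quadratic" else d)
    po = sum(w(i, j) * table[i][j] / n for i in range(k) for j in range(k))
    pe = sum(w(i, j) * row_m[i] * col_m[j] for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)

# Perfect agreement gives kappa = 1; chance-level agreement gives kappa = 0
print(weighted_kappa([[10, 0], [0, 10]]))   # → 1.0
print(weighted_kappa([[25, 25], [25, 25]])) # → 0.0
```

The Z score reported by VARKAP is then the observed kappa divided by the square root of its exact variance under the null hypothesis.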

  4. How the Weak Variance of Momentum Can Turn Out to be Negative

    CERN Document Server

    Feyereisen, M R

    2015-01-01

    Weak values are average quantities; therefore, investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has for a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from...

  5. Robustness of Kriging when interpolating in random simulation with heterogeneous variances: some experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Beers, van W.C.M.

    2005-01-01

    This paper investigates the use of Kriging in random simulation when the simulation output variances are not constant. Kriging gives a response surface or metamodel that can be used for interpolation. Because Ordinary Kriging assumes constant variances, this paper also applies Detrended Kriging to e

  6. Diffusion tensor imaging-derived measures of fractional anisotropy across the pyramidal tract are influenced by the cerebral hemisphere but not by gender in young healthy volunteers: a split-plot factorial analysis of variance

    Institute of Scientific and Technical Information of China (English)

    Ernesto Roldan-Valadez; Edgar Rios-Piedra; Rafael Favila; Sarael Alcauter; Camilo Rios

    2012-01-01

    Background: Diffusion tensor imaging (DTI) permits quantitative examination within the pyramidal tract (PT) by measuring fractional anisotropy (FA). To the best of our knowledge, the inter-variability of FA measures along the PT remains unexplained. A clear understanding of these reference values would help radiologists and neuroscientists to understand normality as well as to detect early pathophysiologic changes of brain diseases. The aim of our study was to calculate the variability of FA at eleven anatomical landmarks along the PT and the influence of gender and cerebral hemisphere on these measurements in a sample of young, healthy volunteers. Methods: A retrospective, cross-sectional study was performed in twenty-three right-handed healthy volunteers who underwent magnetic resonance evaluation of the brain. Mean FA values from eleven anatomical landmarks across the PT (at the centrum semiovale, corona radiata, posterior limb of the internal capsule (PLIC), mesencephalon, pons, and medulla oblongata) were evaluated using split-plot factorial analysis of variance (ANOVA). Results: We found a significant interaction effect between anatomical landmark and cerebral hemisphere (F(10,32) = 4.516, P = 0.001; Wilks' lambda 0.415, with a large effect size (partial η2 = 0.585)). The influence of gender and age was non-significant. On average, the midbrain and PLIC FA values were higher than pons and medulla oblongata values; centrum semiovale measurements were higher than those of the corona radiata but lower than those of the PLIC. Conclusions: There is a normal variability of FA measurements along the PT in healthy individuals, which is influenced by region-of-interest location (anatomical landmark) and cerebral hemisphere. FA measurements should be reported by comparing same-side and same-landmark PT, to help avoid comparisons with the contralateral PT; ideally, normative values should exist for a clinically significant age group. A standardized package of selected DTI processing tools would allow DTI processing to be

  7. Previous estimates of mitochondrial DNA mutation level variance did not account for sampling error: comparing the mtDNA genetic bottleneck in mice and humans.

    Science.gov (United States)

    Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C

    2010-04-09

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference.
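
As an illustration of such error bars, the large-sample standard error of a sample variance can be estimated from the data's fourth central moment. The formula below is the standard asymptotic one and may differ in detail from the authors' derivation.

```python
import math
import random

def variance_standard_error(sample):
    """Large-sample standard error of the sample variance s^2, using
    Var(s^2) ~= (mu4 - (n-3)/(n-1) * s^4) / n, where mu4 is the fourth
    central moment (the authors' exact expression may differ)."""
    n = len(sample)
    mean = sum(sample) / n
    s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)
    mu4 = sum((x - mean) ** 4 for x in sample) / n
    return math.sqrt(max(mu4 - (n - 3) / (n - 1) * s2 * s2, 0.0) / n)

random.seed(2)
small = [random.gauss(0.0, 1.0) for _ in range(10)]
large = [random.gauss(0.0, 1.0) for _ in range(1000)]
# Error bars shrink roughly as 1/sqrt(n): a variance from ~10 measurements
# carries far more uncertainty than one from ~1000
print(variance_standard_error(small), variance_standard_error(large))
```

This is exactly why the abstract cautions that variance comparisons based on fewer than 20 measurements are unreliable: the error bar on the variance itself is then a sizeable fraction of the variance.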

  8. Effect of toxicity of Ag nanoparticles on SERS spectral variance of bacteria.

    Science.gov (United States)

    Cui, Li; Chen, Shaode; Zhang, Kaisong

    2015-02-25

    Ag nanoparticles (NPs) have been extensively utilized in surface-enhanced Raman scattering (SERS) spectroscopy for bacterial identification. However, Ag NPs are toxic to bacteria. Whether this toxicity can affect the SERS features of bacteria and interfere with bacterial identification is still unknown and needs to be explored. Here, by carrying out a comparative study of non-toxic Au NPs and toxic Ag NPs, we investigated the influence of nanoparticle concentration and incubation time on bacterial SERS spectral variance, both of which were demonstrated to be closely related to the toxicity of Ag NPs. Sensitive spectral alterations were observed on Ag NPs with increasing NP concentration or incubation time, accompanied by an obvious decrease in the number of viable bacteria. In contrast, the SERS spectra and the viable bacterial number on Au NPs were rather constant under the same conditions. A further analysis of the spectral changes demonstrated that it was the cellular response (i.e., metabolic activity or death) to the toxicity of Ag NPs that caused the spectral variance. However, the biochemical responses to the toxicity of Ag were very different in different bacteria, indicating the complex toxic mechanism of Ag NPs. Ag NPs are toxic to a great variety of organisms, including bacteria, fungi, algae, protozoa, etc.; therefore, this work will be helpful in guiding the future application of the SERS technique in various complex biological systems.

  9. Quantitative measurement of speech sound distortions with the aid of minimum variance spectral estimation method for dentistry use.

    Science.gov (United States)

    Bereteu, L; Drăgănescu, G E; Stănescu, D; Sinescu, C

    2011-12-01

    In this paper, we seek an adequate quantitative method based on minimum variance spectral analysis to reflect the dependence of speech quality on the correct positioning of dental prostheses. We also seek quantitative parameters that reflect the correct position of dental prostheses in a sensitive manner.

  10. Numerical errors in the computation of subfilter scalar variance in large eddy simulations

    Science.gov (United States)

    Kaul, C. M.; Raman, V.; Balarac, G.; Pitsch, H.

    2009-05-01

    Subfilter scalar variance is a key quantity for scalar mixing at the small scales of a turbulent flow and thus plays a crucial role in large eddy simulation of combustion. While prior studies have mainly focused on the physical aspects of modeling subfilter variance, the current work discusses variance models in conjunction with the numerical errors due to their implementation using finite-difference methods. A priori tests on data from direct numerical simulation of homogeneous turbulence are performed to evaluate the numerical implications of specific model forms. Like other subfilter quantities, such as kinetic energy, subfilter variance can be modeled according to one of two general methodologies. In the first of these, an algebraic equation relating the variance to gradients of the filtered scalar field is coupled with a dynamic procedure for coefficient estimation. Although finite-difference methods substantially underpredict the gradient of the filtered scalar field, the dynamic method is shown to mitigate this error through overestimation of the model coefficient. The second group of models utilizes a transport equation for the subfilter variance itself or for the second moment of the scalar. Here, it is shown that the model formulation based on the variance transport equation is consistently biased toward underprediction of the subfilter variance. The numerical issues in the variance transport equation stem from discrete approximations to chain-rule manipulations used to derive convection, diffusion, and production terms associated with the square of the filtered scalar. These approximations can be avoided by solving the equation for the second moment of the scalar, suggesting that formulation's numerical superiority.
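
The algebraic (gradient-type) model and the finite-difference underprediction it must cope with can be illustrated in one dimension. The model coefficient, grid, and test field below are illustrative assumptions; in practice the coefficient is estimated dynamically, as the abstract describes.

```python
import math

def fd_gradient(phi, dx):
    """Second-order central finite-difference gradient on a periodic 1-D grid."""
    n = len(phi)
    return [(phi[(i + 1) % n] - phi[(i - 1) % n]) / (2 * dx) for i in range(n)]

def algebraic_subfilter_variance(phi, dx, delta, C=0.1):
    """Gradient-type algebraic model: sigma^2 = C * delta^2 * |grad(phi)|^2.
    C = 0.1 is an illustrative constant; a dynamic procedure would set it."""
    return [C * delta ** 2 * g * g for g in fd_gradient(phi, dx)]

# A priori illustration on a coarse grid: the FD gradient of sin(x)
# underpredicts the exact peak slope (which is 1.0 for cos(x))
n, L = 16, 2 * math.pi
dx = L / n
phi = [math.sin(i * dx) for i in range(n)]
fd_peak = max(abs(g) for g in fd_gradient(phi, dx))
print(fd_peak)  # below 1.0: the discrete gradient underestimates the gradient
```

In the dynamic procedure, the same underprediction affects both the resolved and test-filtered gradients, which is why the fitted coefficient compensates by coming out larger than its "exact" value.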

  11. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    Science.gov (United States)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We prove a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By the weighted method, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

  12. The impact of news and the SMP on realized (co)variances in the Eurozone sovereign debt market

    NARCIS (Netherlands)

    R. Beetsma; F. de Jong; M. Giuliodori; D. Widijanto

    2014-01-01

    We use realized variances and covariances based on intraday data from the Eurozone sovereign bond market to measure the dependence structure of Eurozone sovereign yields. Our analysis focuses on the impact of news, obtained from the Eurointelligence newsflash, on the dependence structure. More news raises

  13. Performance Analysis of Minimum Variance Control on Stochastic Systems with Unknown Model

    Institute of Scientific and Technical Information of China (English)

    高韵; 杨恒占; 钱富才

    2016-01-01

    The minimum variance control problem for stochastic systems with unknown parameters is studied from the viewpoint of system identification. First, the unknown parameters are identified with the recursive least squares approach, so that the system becomes one with known parameters. Second, a controller is designed using minimum variance control. Finally, Matlab software is used to simulate a given example, and the result shows that the method is simple and feasible.

  14. An improved dynamic Allan variance algorithm and its application to the analysis of the FOG start-up signal

    Institute of Scientific and Technical Information of China (English)

    汪立新; 朱战辉; 李瑞

    2016-01-01

    The classical dynamic Allan variance (DAVAR) can effectively describe the non-stationarity of the random error of a fiber optic gyroscope (FOG). However, the method has defects: the amount of data captured by fixed-length windows is reduced, so the estimates at long-term τ values have poor confidence, and it is difficult to achieve a satisfactory trade-off between dynamic tracking capability and variance reduction. An improved DAVAR algorithm based on kurtosis and data extension is proposed to solve these problems. First, the kurtosis of the data inside the window is introduced to characterize the signal's instantaneous non-stationarity, and a window-length function taking kurtosis as its variable is built to truncate the signal; this function makes the window length change automatically with the non-stationarity of the signal. Second, the random error of the FOG is truncated with this function, and the data in each window are extended by the total variance method to improve confidence. Finally, the Allan variance of the extended data is computed and arranged in three dimensions. Measured FOG start-up data were analyzed with the proposed algorithm and the classical DAVAR. The results show that the proposed algorithm characterizes the non-stationarity of the FOG effectively and also obtains a lower estimation error at long-term τ values.
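
The classical fixed-window DAVAR that the paper improves on can be sketched as an Allan variance recomputed inside a window sliding along the signal. The adaptive kurtosis-driven window length and the total-variance data extension are the paper's contributions and are not reproduced in this simplified sketch.

```python
def allan_variance(y, m):
    """Overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (tau = m * tau0)."""
    n = len(y)
    avgs = [sum(y[i:i + m]) / m for i in range(n - m + 1)]
    diffs = [avgs[i + m] - avgs[i] for i in range(len(avgs) - m)]
    return sum(d * d for d in diffs) / (2 * len(diffs))

def dynamic_allan_variance(y, window, step, m=1):
    """Classical fixed-window DAVAR: the Allan variance recomputed inside
    a window sliding over y. The improved algorithm instead adapts the
    window length via the in-window kurtosis and extends the windowed
    data with the total variance method."""
    return [allan_variance(y[t:t + window], m)
            for t in range(0, len(y) - window + 1, step)]

# A stationary constant signal yields zero Allan variance in every window
print(dynamic_allan_variance([1.0] * 60, window=20, step=10))
```

Plotting the windowed Allan variance against both window position and tau gives the three-dimensional DAVAR surface mentioned in the abstract.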

  15. Application of an area of review variance methodology to the San Juan Basin, New Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Dunn-Norman, S.; Warner, D.L.; Koederitz, L.F.; Laudon, R.C.

    1995-12-01

    When the Underground Injection Control (UIC) regulations were promulgated in 1980, existing Class II injection wells operating at the time were excluded from Area of Review (AOR) requirements. EPA has expressed its intent to revise the regulations to include the requirement of AORs for such wells, but it is expected that oil- and gas-producing states will be allowed to adopt a variance strategy for these wells. An AOR variance methodology has been developed under the sponsorship of the American Petroleum Institute. The general concept of the variance methodology is a systematic evaluation of basic variance criteria that were agreed to by a Federal Advisory Committee. These criteria include the absence of USDWs, lack of positive flow potential from the petroleum reservoir into the overlying USDWs, mitigating geological factors, and other evidence. The AOR variance methodology has been applied to oilfields in the San Juan Basin, New Mexico. This paper details the results of these analyses, particularly with respect to the opportunity for variance for injection fields in the San Juan Basin.

  16. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Directory of Open Access Journals (Sweden)

    Frank M. You

    2016-04-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).

  17. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Institute of Scientific and Technical Information of China (English)

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).

  18. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Institute of Scientific and Technical Information of China (English)

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).

  19. Quantum mechanical expansion of variance of a particle in a weakly non-uniform electric and magnetic field

    Science.gov (United States)

    Chan, Poh Kam; Oikawa, Shun-ichi; Kosaka, Wataru

    2016-08-01

    We have solved the Heisenberg equation of motion for the time evolution of the position and momentum operators for a non-relativistic spinless charged particle in the presence of a weakly non-uniform electric and magnetic field. It is shown that the drift velocity operator obtained in this study agrees with the classical counterpart, and that, using the time-dependent operators, the variances in position and momentum grow with time. The expansion rates of the variances in position and momentum depend on the magnetic gradient scale length, but are independent of the electric gradient scale length. In the presence of a weakly non-uniform electric and magnetic field, the theoretical variance expansion rates are in good agreement with the numerical analysis. It is analytically shown that the variance in position reaches the square of the interparticle separation within a characteristic time much shorter than the proton collision time of a fusion plasma. After this time, the wavefunctions of neighboring particles would overlap; as a result, the conventional classical analysis may lose its validity. The broad distribution of the individual particles in space means that their Coulomb interactions with other particles become weaker than expected in classical mechanics.
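
The field-free analogue of this variance growth, and the time for the packet width to reach the interparticle separation, can be checked numerically. The initial width and plasma density below are illustrative values, and the field-gradient corrections derived in the paper are omitted.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def position_variance(t, sigma_x0, m):
    """Free-particle growth of the position variance of a minimum-uncertainty
    Gaussian packet: sigma_x^2(t) = sigma_x0^2 + (hbar*t / (2*m*sigma_x0))^2."""
    return sigma_x0 ** 2 + (HBAR * t / (2 * m * sigma_x0)) ** 2

def time_to_reach(spread, sigma_x0, m):
    """Time for the rms width to reach a given length, e.g. the
    interparticle separation n^(-1/3) of a plasma."""
    return 2 * m * sigma_x0 * math.sqrt(spread ** 2 - sigma_x0 ** 2) / HBAR

m_p = 1.6726e-27            # proton mass, kg
sigma0 = 1e-11              # illustrative initial packet width, m
sep = (1e20) ** (-1 / 3)    # interparticle separation at n = 1e20 m^-3
print(time_to_reach(sep, sigma0, m_p))
```

Even with these rough numbers, the spreading time comes out many orders of magnitude below typical fusion-plasma collision times, consistent with the abstract's conclusion that neighboring wavefunctions overlap long before a classical collision analysis applies.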

  20. Fractal fluctuations and quantum-like chaos in the brain by analysis of variability of brain waves: A new method based on a fractal variance function and random matrix theory: A link with El Naschie fractal Cantorian space-time and V. Weiss and H. Weiss golden ratio in brain

    Energy Technology Data Exchange (ETDEWEB)

    Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari (Italy); School of Advanced International Studies on Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: elio.conte@fastwebnet.it; Khrennikov, Andrei [International Center for Mathematical Modelling in Physics and Cognitive Sciences, M.S.I., University of Vaexjoe, S-35195 (Sweden); Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)

    2009-09-15

    We develop a new method for the analysis of fundamental brain waves as recorded by the EEG. For this purpose we introduce a Fractal Variance Function based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given. We also discuss the link of this formulation with the H. Weiss and V. Weiss golden ratio found in the brain, and with El Naschie's fractal Cantorian space-time theory.

  1. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Science.gov (United States)

    Kos, Marko; Grašič, Matej; Kačič, Zdravko

    2009-12-01

    This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE). The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.
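The sub-band energy-variance idea can be illustrated with a rough numpy sketch. The frame size, hop, crude rectangular filter bank, and log-energy choice are assumptions for illustration, not the paper's exact VMFBE recipe.

```python
import numpy as np

def vmfbe(signal, frame=400, hop=160, n_bands=20):
    """Variance mean of filter-bank energy (illustrative sketch): frame the
    signal, get per-frame band energies from a crude rectangular filter bank,
    then average each band's across-frame log-energy variance."""
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop: i * hop + frame] for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)) ** 2
    bands = np.array_split(spec, n_bands, axis=1)
    log_e = np.log(np.stack([b.sum(axis=1) for b in bands], axis=1) + 1e-12)
    return log_e.var(axis=0, ddof=1).mean()

t = np.arange(16000) / 16000.0
envelope = (np.sin(2 * np.pi * 3 * t) > 0).astype(float)  # ~3 bursts/s, speech-like on/off energy
speech_like = np.sin(2 * np.pi * 200 * t) * envelope
steady_tone = np.sin(2 * np.pi * 200 * t)                 # music-like steady energy
print(vmfbe(speech_like) > vmfbe(steady_tone))            # bursty energy varies more per band
```

The bursty signal's band energies swing between on and off frames, so its per-band variance dominates that of the steady tone, mirroring the speech-versus-music contrast the feature exploits.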

  2. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Directory of Open Access Journals (Sweden)

    Zdravko Kačič

    2009-01-01

    Full Text Available This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE). The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.

  3. Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.

    Science.gov (United States)

    He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne

    2016-04-01

    Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account the dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more

  4. Inference of bioequivalence for log-normal distributed data with unspecified variances.

    Science.gov (United States)

    Xu, Siyan; Hua, Steven Y; Menton, Ronald; Barker, Kerry; Menon, Sandeep; D'Agostino, Ralph B

    2014-07-30

    Two drugs are bioequivalent if the ratio of a pharmacokinetic (PK) parameter of the two products falls within equivalence margins. The distribution of PK parameters is often assumed to be log-normal, so bioequivalence (BE) is usually assessed on the difference of logarithmically transformed PK parameters (δ). In the presence of unspecified variances, test procedures such as two one-sided tests (TOST) use sample estimates for those variances; Bayesian models integrate them out in the posterior distribution. These methods limit our knowledge of the extent to which inference about BE is affected by the variability of PK parameters. In this paper, we propose a likelihood approach that retains the unspecified variances in the model and partitions the entire likelihood function into two components: an F-statistic function for the variances and a t-statistic function for δ. Demonstrated with published real-life data, the proposed method not only produces results that are the same as TOST and comparable with the Bayesian method but also helps identify ranges of variances that could make the determination of BE more achievable. Our findings manifest the advantages of the proposed method in making inference about the extent to which BE is affected by the unspecified variances, which cannot be accomplished by either TOST or the Bayesian method.
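As context for the TOST procedure the abstract compares against, a minimal parallel-group TOST on log-transformed PK values can be sketched as follows. The 80-125% margins are the conventional limits; the simulated data, sample sizes, and pooled-variance setup are illustrative assumptions, not the paper's data or design.

```python
import numpy as np
from scipy import stats

def tost_be(log_test, log_ref, lo=np.log(0.8), hi=np.log(1.25)):
    """Two one-sided tests for bioequivalence on log-transformed PK values
    (parallel-group sketch with pooled variance)."""
    n1, n2 = len(log_test), len(log_ref)
    d = log_test.mean() - log_ref.mean()
    sp2 = ((n1 - 1) * log_test.var(ddof=1) + (n2 - 1) * log_ref.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    t_lo = (d - lo) / se   # H0: delta <= lo
    t_hi = (d - hi) / se   # H0: delta >= hi
    p = max(1 - stats.t.cdf(t_lo, df), stats.t.cdf(t_hi, df))
    return d, p            # BE concluded when p < alpha

rng = np.random.default_rng(2)
test = np.log(rng.lognormal(mean=0.02, sigma=0.15, size=24))
ref = np.log(rng.lognormal(mean=0.00, sigma=0.15, size=24))
d, p = tost_be(test, ref)
print(d, p)
```

The TOST p-value is the larger of the two one-sided p-values, so both interval bounds must be rejected before equivalence is declared.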

  5. The Correct Kriging Variance Estimated by Bootstrapping

    NARCIS (Netherlands)

    den Hertog, D.; Kleijnen, J.P.C.; Siem, A.Y.D.

    2004-01-01

    The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrapping

  6. Formulas for precisely and efficiently estimating the bias and variance of the length measurements

    Science.gov (United States)

    Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin

    2016-10-01

    Error analysis in length measurements is an important problem in geographic information system and cartographic operations. The distance between two random points—i.e., the length of a random line segment—may be viewed as a nonlinear mapping of the coordinates of the two points. In real-world applications, an unbiased length statistic may be expected in high-precision contexts, but the variance of the unbiased statistic is of concern in assessing quality. This paper suggests the use of a k-order bias correction formula and a nonlinear error propagation approach to the distance equation, providing a useful way to describe the length of a line. The study shows that the bias is determined by the relative precision of the random line segment, and that the higher-order bias correction is only needed for short-distance applications.
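The claim that bias matters mainly for short distances can be demonstrated with a small Monte Carlo: the measured distance between two noisy points is biased upward, and the bias shrinks as the true distance grows relative to the coordinate noise. The noise level and distances below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0        # coordinate noise standard deviation (hypothetical)
n = 200_000

def mean_measured_distance(true_d):
    """Mean measured distance between two noisy 2D points a true distance
    `true_d` apart; each coordinate is perturbed by N(0, sigma^2)."""
    p1 = rng.normal(0.0, sigma, (n, 2))
    p2 = rng.normal([true_d, 0.0], sigma, (n, 2))
    return np.linalg.norm(p2 - p1, axis=1).mean()

bias_short = mean_measured_distance(5.0) - 5.0
bias_long = mean_measured_distance(500.0) - 500.0
print(bias_short > bias_long)   # the upward bias matters mainly at short distances
```

For the short segment the bias is close to the first-order propagation value σ²_diff/(2d) = 2σ²/(2·5) = 0.2, while for the long segment it is negligible, matching the paper's conclusion that higher-order correction is needed only for short distances.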

  7. Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method

    Directory of Open Access Journals (Sweden)

    Younes Elahi

    2014-01-01

    Full Text Available We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have investigated the optimal MVC portfolio model, the linear weighted sum method (LWSM) has not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, this approach is investigated for investing in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using the three objective functions helps investors to manage their portfolio better and thereby minimize the risk and maximize the return of the portfolio. The main goal of this study is to modify the current models and simplify them by using LWSM to obtain better results.
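The linear weighted sum scalarization of the three objectives can be sketched for a two-asset case. The scenario distribution, objective weights, CVaR level, and grid search below are illustrative stand-ins for the paper's MATLAB implementation, not its actual formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical return scenarios for two assets (rows = scenarios).
R = rng.multivariate_normal([0.08, 0.12], [[0.02, 0.005], [0.005, 0.06]], size=10_000)

def cvar(losses, beta=0.95):
    """Conditional value-at-risk: mean loss in the worst (1 - beta) tail."""
    var = np.quantile(losses, beta)
    return losses[losses >= var].mean()

def scalarized(w1, lam=(0.4, 0.3, 0.3)):
    """Linear weighted sum of the three objectives (weights lam are
    illustrative): maximize mean, minimize variance and CVaR."""
    w = np.array([w1, 1.0 - w1])
    port = R @ w
    return -lam[0] * port.mean() + lam[1] * port.var() + lam[2] * cvar(-port)

# Grid search over the two-asset weight (a stand-in for a proper solver).
grid = np.linspace(0.0, 1.0, 101)
best = grid[np.argmin([scalarized(g) for g in grid])]
print(best)
```

Varying the weight vector `lam` traces out different compromise solutions on the mean-variance-CVaR trade-off surface, which is the essence of the LWSM approach.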

  8. Statistical modelling of tropical cyclone tracks: a comparison of models for the variance of trajectories

    CERN Document Server

    Hall, T; Hall, Tim; Jewson, Stephen

    2005-01-01

    We describe results from the second stage of a project to build a statistical model for hurricane tracks. In the first stage we modelled the unconditional mean track. We now attempt to model the unconditional variance of fluctuations around the mean. The variance models we describe use a semi-parametric nearest neighbours approach in which the optimal averaging length-scale is estimated using a jack-knife out-of-sample fitting procedure. We test three different models. These models consider the variance structure of the deviations from the unconditional mean track to be isotropic, anisotropic but uncorrelated, and anisotropic and correlated, respectively. The results show that, of these models, the anisotropic correlated model gives the best predictions of the distribution of future positions of hurricanes.

  9. Detection of rheumatoid arthritis by evaluation of normalized variances of fluorescence time correlation functions

    Science.gov (United States)

    Dziekan, Thomas; Weissbach, Carmen; Voigt, Jan; Ebert, Bernd; MacDonald, Rainer; Bahner, Malte L.; Mahler, Marianne; Schirner, Michael; Berliner, Michael; Berliner, Birgitt; Osel, Jens; Osel, Ilka

    2011-07-01

    Fluorescence imaging using the dye indocyanine green as a contrast agent was investigated in a prospective clinical study for the detection of rheumatoid arthritis. Normalized variances of correlated time series of fluorescence intensities describing the bolus kinetics of the contrast agent in certain regions of interest were analyzed to differentiate healthy from inflamed finger joints. These values are determined using a robust, parameter-free algorithm. We found that the normalized variance of correlation functions improves the differentiation between healthy joints of volunteers and joints with rheumatoid arthritis of patients by about 10% compared to, e.g., ratios of areas under the curves of raw data.

  10. Optimal Investment and Consumption Decisions under the Constant Elasticity of Variance Model

    Directory of Open Access Journals (Sweden)

    Hao Chang

    2013-01-01

    Full Text Available We consider an investment and consumption problem under the constant elasticity of variance (CEV model, which is an extension of the original Merton’s problem. In the proposed model, stock price dynamics is assumed to follow a CEV model and our goal is to maximize the expected discounted utility of consumption and terminal wealth. Firstly, we apply dynamic programming principle to obtain the Hamilton-Jacobi-Bellman (HJB equation for the value function. Secondly, we choose power utility and logarithm utility for our analysis and apply variable change technique to obtain the closed-form solutions to the optimal investment and consumption strategies. Finally, we provide a numerical example to illustrate the effect of market parameters on the optimal investment and consumption strategies.
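The CEV price dynamics assumed in the abstract, dS = μS dt + σS^β dW, can be simulated with a simple Euler-Maruyama scheme; the parameters below are illustrative, not calibrated to any market, and this sketch only shows the state dynamics, not the HJB solution.

```python
import numpy as np

rng = np.random.default_rng(5)

# Euler-Maruyama simulation of CEV dynamics dS = mu*S dt + sigma*S**beta dW.
mu, sigma, beta = 0.05, 0.2, 0.8
s0, T, n_steps, n_paths = 100.0, 1.0, 252, 20_000
dt = T / n_steps

S = np.full(n_paths, s0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    S = np.maximum(S + mu * S * dt + sigma * S ** beta * dW, 1e-8)  # keep paths non-negative

print(S.mean())  # close to s0*exp(mu*T), since the drift is linear in S
```

Setting β = 1 recovers geometric Brownian motion; β < 1 makes volatility fall as the price rises, the leverage-like effect that motivates the CEV extension of Merton's problem.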

  11. Nonparametric Estimation of Mean and Variance and Pricing of Securities Nonparametric Estimation of Mean and Variance and Pricing of Sec

    Directory of Open Access Journals (Sweden)

    Akhtar R. Siddique

    2000-03-01

    Full Text Available This paper develops a filtering-based framework of non-parametric estimation of parameters of a diffusion process from the conditional moments of discrete observations of the process. This method is implemented for interest rate data in the Eurodollar and long term bond markets. The resulting estimates are then used to form non-parametric univariate and bivariate interest rate models and compute prices for the short term Eurodollar interest rate futures options and long term discount bonds. The bivariate model produces prices substantially closer to the market prices.

  12. High dimensional matrix estimation with unknown variance of the noise

    CERN Document Server

    Klopp, Olga

    2011-01-01

    We propose a new pivotal method for estimating high-dimensional matrices. Assume that we observe a small set of entries or linear combinations of entries of an unknown matrix $A_0$ corrupted by noise. We propose a new method for estimating $A_0$ which does not rely on the knowledge or an estimation of the standard deviation of the noise $\sigma$. Our estimator achieves, up to a logarithmic factor, optimal rates of convergence under the Frobenius risk and, thus, has the same prediction performance as previously proposed estimators which rely on the knowledge of $\sigma$. Our method is based on the solution of a convex optimization problem which makes it computationally attractive.

  13. Genetic variance of sunflower yield components - Heliantus annuus L.

    OpenAIRE

    Hladni Nada; Škorić Dragan; Kraljević-Balalić Marija

    2003-01-01

    The main goals of sunflower breeding in Yugoslavia and abroad are increased seed yield and oil content per unit area and increased resistance to diseases, insects and stress conditions via an optimization of plant architecture. In order to determine the mode of inheritance, gene effects and correlations of total leaf number per plant, total leaf area and plant height, six genetically divergent inbred lines of sunflower were subjected to half diallel crosses. Significant differences in mean va...

  14. Genetic variance of sunflower yield components - Heliantus annuus L.

    Directory of Open Access Journals (Sweden)

    Hladni Nada

    2003-01-01

    Full Text Available The main goals of sunflower breeding in Yugoslavia and abroad are increased seed yield and oil content per unit area and increased resistance to diseases, insects and stress conditions via an optimization of plant architecture. In order to determine the mode of inheritance, gene effects and correlations of total leaf number per plant, total leaf area and plant height, six genetically divergent inbred lines of sunflower were subjected to half diallel crosses. Significant differences in mean values of all the traits were found in the F1 and F2 generations. Additive gene effects were more important in the inheritance of total leaf number per plant and plant height, while in the case of total leaf area per plant the nonadditive ones were more important looking at all the combinations in the F1 and F2 generations. The average degree of dominance (H/D)^(1/2) was lower than one for total leaf number per plant and plant height, so the mode of inheritance was partial dominance, while for total leaf area the value was higher than one, indicating super dominance as the mode of inheritance. Significant positive correlation was found between total leaf area per plant and total leaf number per plant (0.285*) and plant height (0.278*). The results of the study are of importance for further sunflower breeding work.

  15. Application of variance reduction techniques in Monte Carlo simulation of clinical electron linear accelerator

    Science.gov (United States)

    Zoubair, M.; El Bardouni, T.; El Gonnouni, L.; Boulaich, Y.; El Bakkari, B.; El Younoussi, C.

    2012-01-01

    Computation time is an important and problematic parameter in Monte Carlo simulations, as it is inversely proportional to the statistical errors; hence the idea of using variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed. The best known are transport cutoffs, interaction forcing, bremsstrahlung splitting and Russian roulette. The use of a phase space also reduces computing time enormously. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code, which offers a rich palette of variance reduction techniques. In this study we investigated various cards related to the variance reduction techniques provided by MCNPX. The parameters found in this study can be used efficiently in the MCNPX code. Final calculations are performed in two steps that are linked by a phase space. Results show that, compared to direct simulations (with neither variance reduction nor a phase space), the adopted method improves the simulation efficiency by a factor greater than 700.
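Russian roulette, one of the techniques listed, can be sketched generically: low-weight particles are killed at random, and survivors have their weight boosted so the estimate stays unbiased. The threshold and survival probability below are illustrative; this is not MCNPX's implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

def russian_roulette(weights, threshold=0.1, survive_p=0.5):
    """Russian roulette variance-reduction step (generic sketch): particles
    below a weight threshold are killed with probability 1 - survive_p;
    survivors have their weight divided by survive_p, so the expected total
    weight is unchanged (E[w_new] = w_old)."""
    w = weights.copy()
    low = w < threshold
    killed = low & (rng.random(w.size) > survive_p)
    w[low & ~killed] /= survive_p
    w[killed] = 0.0
    return w

weights = rng.exponential(0.05, 100_000)   # a population dominated by low-weight particles
out = russian_roulette(weights)
# Total weight is preserved in expectation, but far fewer particles remain to track.
print(out.sum() / weights.sum(), (out > 0).sum())
```

The payoff is that transport time is no longer wasted on particles that contribute almost nothing to the tally, at the cost of a controlled amount of extra variance.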

  16. Variance and invariance of neuronal long-term representations

    Science.gov (United States)

    Clopath, Claudia; Bonhoeffer, Tobias; Hübener, Mark

    2017-01-01

    The brain extracts behaviourally relevant sensory input to produce appropriate motor output. On the one hand, our constantly changing environment requires this transformation to be plastic. On the other hand, plasticity is thought to be balanced by mechanisms ensuring constancy of neuronal representations in order to achieve stable behavioural performance. Yet, prominent changes in synaptic strength and connectivity also occur during normal sensory experience, indicating a certain degree of constitutive plasticity. This raises the question of how stable neuronal representations are on the population level and also on the single neuron level. Here, we review recent data from longitudinal electrophysiological and optical recordings of single-cell activity that assess the long-term stability of neuronal stimulus selectivities under conditions of constant sensory experience, during learning, and after reversible modification of sensory input. The emerging picture is that neuronal representations are stabilized by behavioural relevance and that the degree of long-term tuning stability and perturbation resistance directly relates to the functional role of the respective neurons, cell types and circuits. Using a ‘toy’ model, we show that stable baseline representations and precise recovery from perturbations in visual cortex could arise from a ‘backbone’ of strong recurrent connectivity between similarly tuned cells together with a small number of ‘anchor’ neurons exempt from plastic changes. This article is part of the themed issue ‘Integrating Hebbian and homeostatic plasticity’. PMID:28093555

  17. Partitioning of genomic variance using prior biological information

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per

    2013-01-01

    of single nucleotide polymorphism (SNP) data and trait phenotypes and can account for a much larger fraction of the heritable component of the trait. A disadvantage is that this “black box” modelling approach does not provide any insight into the biological mechanisms underlying the trait. We propose...... cattle. Research supported by EC-FP7 “Quantomics”, agreement n° 222664...

  18. Partitioning of genomic variance using prior biological information

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per

    of single nucleotide polymorphism (SNP) data and trait phenotypes and can account for a much larger fraction of the heritable component of the trait. A disadvantage is that this “black box” modelling approach does not provide any insight into the biological mechanisms underlying the trait. We propose...... cattle. Research supported by EC-FP7 “Quantomics”, agreement n° 222664...

  19. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, K.; Sørensen, T.I.A.;

    2007-01-01

    Diffusion weighted imaging (DWI) and tractography allow the non-invasive study of anatomical brain connectivity. However, a gold standard for validating tractography of complex connections is lacking. Using the porcine brain as a highly gyrated brain model, we quantitatively and qualitatively...

  20. Partitioning of genomic variance using prior biological information

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per;

    2013-01-01

    that the associated genetic variants are enriched for genes that are connected in biol ogical pathways or for likely functional effects on genes. These biological findings provide valuable insight for developing better genomic models. These are statistical models for predicting complex trait phenotypes on the basis...... of single nucleotide polymorphism (SNP) data and trait phenotypes and can account for a much larger fraction of the heritable component of the trait. A disadvantage is that this “black box” modelling approach does not provide any insight into the biological mechanisms underlying the trait. We propose...... to open the “black box” by building SNP set genomic models that evaluate the collective action of multiple SNPs in genes, biological pathways or other external biological findings on the trait phenotype. As a proof of concept we have tested the modelling framework on susceptibility to mastitis in dairy...

  1. Variance of the Galactic nuclei cosmic ray flux

    CERN Document Server

    Bernard, G; Salati, P; Taillet, R

    2012-01-01

    Measurements of cosmic ray fluxes by the PAMELA and CREAM experiments show unexpected spectral features between 200 GeV and 100 TeV. They could be due to the presence of nearby and young cosmic ray sources. This can be studied in the myriad model, in which cosmic rays diffuse from point-like instantaneous sources located randomly throughout the Galaxy. To test this hypothesis, one must compute the flux due to a catalog of local sources, but also the error bars associated with this quantity. This turns out not to be as straightforward as it seems, as the standard deviation is infinite when computed for the most general statistical ensemble. The goals of this paper are to provide a method of attaching error bars with a clear statistical meaning to the flux measurements, and to explore the relation between the myriad model and the more usual source model based on a continuous distribution. To this end, we show that the quantiles of the flux distribution are well-defined, even though the standard deviation is ...

  2. Mean-Variance Efficiency of the Market Portfolio

    Directory of Open Access Journals (Sweden)

    Rafael Falcão Noda

    2014-06-01

    Full Text Available The objective of this study is to address the criticism of the CAPM based on findings that the market portfolio is far from the efficient frontier. We run a numeric optimization model based on Brazilian stock market data from 2003 to 2012. For each asset, we obtain adjusted returns and standard deviations such that (i) the efficient frontier intersects the market portfolio and (ii) the distance between the adjusted parameters and the sample parameters is minimized. We conclude that the adjusted parameters are not significantly different from the sample parameters, in line with the results of Levy and Roll (2010) for the USA stock market. Such results suggest that the imprecisions in the implementation of the CAPM stem mostly from parameter estimation errors and that other explanatory factors for returns may have low relevance. Therefore, our results contradict the above-mentioned criticisms of the CAPM in Brazil.

  3. Extraction of slum areas from VHR imagery using GLCM variance

    NARCIS (Netherlands)

    Kuffer, M.; Pfeffer, K.; Sliuzas, R.; Baud, I.S.A.

    2016-01-01

    Many cities in the global South are facing the emergence and growth of highly dynamic slum areas, but often lack detailed information on these developments. Available statistical data are commonly aggregated to large, heterogeneous administrative units that are geographically meaningless for informi
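GLCM variance, the texture feature named in the title, can be computed directly from a grey-level co-occurrence matrix. The usual tool is scikit-image's `graycomatrix`; the self-contained stand-in below uses a single pixel offset and a small quantization level, both illustrative assumptions.

```python
import numpy as np

def glcm_variance(img, levels=8, offset=(0, 1)):
    """GLCM variance texture feature (illustrative implementation): build a
    normalized grey-level co-occurrence matrix for one pixel offset, then
    compute sum_ij P(i,j) * (i - mu)^2."""
    q = (img * levels).clip(0, levels - 1).astype(int)   # quantize a [0,1) image
    dr, dc = offset
    rows, cols = q.shape[0] - dr, q.shape[1] - dc
    a, b = q[:rows, :cols], q[dr:dr + rows, dc:dc + cols]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)         # count co-occurring grey-level pairs
    glcm /= glcm.sum()
    i = np.arange(levels)[:, None]
    mu = (i * glcm).sum()
    return float(((i - mu) ** 2 * glcm).sum())

rng = np.random.default_rng(7)
flat = np.full((64, 64), 0.5)   # homogeneous, planned-looking texture
noisy = rng.random((64, 64))    # heterogeneous, high-variance texture
print(glcm_variance(noisy) > glcm_variance(flat))
```

High GLCM variance flags locally heterogeneous roof textures, which is why the feature is a candidate for separating dense informal settlements from planned areas in VHR imagery.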

  4. Estimation of bias and variance of measurements made from tomography scans

    Science.gov (United States)

    Bradley, Robert S.

    2016-09-01

    Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
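The simulation-extrapolation idea can be sketched on a toy 1D "scan" metric: add extra noise at increasing levels λ, track how the measurement responds, and extrapolate the fitted trend back to λ = -1 (zero noise). The signal, noise level, and thresholded area-fraction measurement below are hypothetical, not the paper's tomography setup.

```python
import numpy as np

rng = np.random.default_rng(8)

# A binary 1D profile whose true "area fraction" is 0.3, observed with noise.
true_signal = (np.linspace(0, 1, 2000) < 0.3).astype(float)
noise_sd = 0.6
observed = true_signal + rng.normal(0, noise_sd, true_signal.size)

def area_fraction(x, thresh=0.5):
    return (x > thresh).mean()

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
estimates = []
for lam in lambdas:
    if lam == 0:
        estimates.append(area_fraction(observed))
    else:
        # Average over remeasurements with added noise of variance lam * noise_sd^2.
        reps = [area_fraction(observed + rng.normal(0, noise_sd * np.sqrt(lam), observed.size))
                for _ in range(50)]
        estimates.append(np.mean(reps))

coef = np.polyfit(lambdas, estimates, 2)   # quadratic trend in lambda
simex = float(np.polyval(coef, -1.0))      # extrapolate to "no noise"
naive = estimates[0]
print(abs(simex - 0.3) < abs(naive - 0.3))  # SIMEX reduces the noise-induced bias
```

The naive thresholded measurement is biased upward by the noise; the extrapolated SIMEX estimate lands noticeably closer to the true fraction of 0.3, which is the bias-correction behavior the paper exploits for measurements from a single scan.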

  5. Lower within-community variance of negative density dependence increases forest diversity.

    Directory of Open Access Journals (Sweden)

    António Miranda

    Full Text Available Local abundance of adult trees impedes growth of conspecific seedlings through host-specific enemies, a mechanism first proposed by Janzen and Connell to explain plant diversity in forests. While several studies suggest the importance of this mechanism, there is still little information on how the variance of negative density dependence (NDD) affects the diversity of forest communities. With computer simulations, we analyzed the impact of the strength and variance of NDD within tree communities on species diversity. We show that stronger NDD leads to higher species diversity. Furthermore, a lower range of strengths of NDD within a community increases species richness and decreases the variance of species abundances. Our results show that, beyond the average strength of NDD, the variance of NDD is also crucially important for explaining species diversity. This can explain the dissimilarity of biodiversity between tropical and temperate forests: highly diverse forests could have lower NDD variance. This report suggests that natural enemies and the variety of the magnitude of their effects can contribute to the maintenance of biodiversity.

  6. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted....... The genomic model commonly found in the literature, with marker effects affecting mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable...

  7. Investigations of oligonucleotide usage variance within and between prokaryotes

    DEFF Research Database (Denmark)

    Bohlin, J.; Skjerve, E.; Ussery, David

    2008-01-01

    Oligonucleotide usage in archaeal and bacterial genomes can be linked to a number of properties, including codon usage (trinucleotides), DNA base-stacking energy (dinucleotides), and DNA structural conformation (di-to tetranucleotides). We wanted to assess the statistical information potential...... was that prokaryotic chromosomes can be described by hexanucleotide frequencies, suggesting that prokaryotic DNA is predominantly short range correlated, i. e., information in prokaryotic genomes is encoded in short oligonucleotides. Oligonucleotide usage varied more within AT-rich and host-associated genomes than...... in GC-rich and free-living genomes, and this variation was mainly located in non-coding regions. Bias (selectional pressure) in tetranucleotide usage correlated with GC content, and coding regions were more biased than non-coding regions. Non-coding regions were also found to be approximately 5.5% more...

  8. An efficiency comparison of control chart for monitoring process variance: Non-normality case

    Directory of Open Access Journals (Sweden)

    Sangkawanit, R.

    2005-11-01

    Full Text Available The purposes of this research are to investigate the relation between the upper control limit and the parameters of the weighted moving variance linear weight control chart (WMVL), the weighted moving variance exponential weight control chart (WMVE), the successive difference cumulative sum control chart (Cusum-SD) and the current sample mean cumulative sum control chart (Cusum-UM), and to compare the efficiencies of these control charts for monitoring increases in process variance, using exponentially distributed data with unit variance and Student's t distributed data with variance 1.071429 (30 degrees of freedom) as the in-control process. In-control average run lengths (ARL0) of 200, 400 and 800 are considered. Out-of-control average run lengths (ARL1), obtained from 10,000 simulation runs, are used as the criterion. The main results are as follows: the upper control limit of WMVL has a negative relation with the moving span, while the upper control limit of WMVE has a negative relation with the moving span and a positive relation with the exponential weight. The upper control limits of both Cusum-SD and Cusum-UM have a negative relation with the reference value, and this relation resembles an exponential curve. The efficiency comparisons for exponentially distributed data at ARL0 of 200, 400 and 800 turned out to be quite similar. When the standard deviation changes by less than 50%, the Cusum-SD and Cusum-UM control charts have lower ARL1 than the WMVL and WMVE control charts. However, when the standard deviation changes by more than 50%, the WMVL and WMVE control charts have lower ARL1 than the Cusum-SD and Cusum-UM control charts. These results differ from the normally distributed case studied by Sparks in 2003.
In the case of Student's t distributed data at ARL0 of 200 and 400, when the process variance shifts by a small amount (less than 50%), the Cusum-UM control chart has the lowest ARL1, but when the process variance shifts by a large amount
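
The ARL comparison described in this record can be sketched by simulation. The following is a minimal illustration, not the paper's calibrated charts: it estimates the ARL of a generic one-sided CUSUM chart that accumulates squared observations to detect a variance increase. The reference value `k` and decision limit `h` are illustrative placeholders.

```python
import numpy as np

def cusum_variance_arl(sigma=1.0, k=1.5, h=5.0, n_sim=200, max_n=5000, seed=0):
    """Estimate the average run length (ARL) of a one-sided CUSUM chart that
    accumulates (x_t^2 - k) to detect an increase in process variance.
    k (reference value) and h (decision limit) are illustrative, not the
    calibrated values used in the paper."""
    rng = np.random.default_rng(seed)
    run_lengths = []
    for _ in range(n_sim):
        s, t = 0.0, 0
        while t < max_n:
            t += 1
            x = rng.normal(0.0, sigma)
            s = max(0.0, s + x * x - k)  # one-sided CUSUM of squared observations
            if s > h:
                break
        run_lengths.append(t)
    return float(np.mean(run_lengths))

arl0 = cusum_variance_arl(sigma=1.0)  # in-control ARL (should be large)
arl1 = cusum_variance_arl(sigma=1.5)  # out-of-control ARL (should be small)
```

In a study such as the one above, `h` would first be tuned so that ARL0 hits a target (200, 400 or 800), and the charts would then be ranked by their ARL1 at each shift size.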

  9. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per;

    2012-01-01

    traits such as mammary disease traits in dairy cattle. METHODS: Data on progeny means of six traits related to mastitis resistance in dairy cattle (general mastitis resistance and five pathogen-specific mastitis resistance traits) were analyzed using a bivariate Bayesian SNP-based genomic model......)variances of mastitis resistance traits in dairy cattle using multivariate genomic models......., per chromosome, and in regions of 100 SNP on a chromosome. RESULTS: Genomic proportions of the total variance differed between traits. Genomic correlations were lower than pedigree-based genetic correlations and they were highest between general mastitis and pathogen-specific traits because...

  10. MicroRNA buffering and altered variance of gene expression in response to Salmonella infection.

    Science.gov (United States)

    Bao, Hua; Kommadath, Arun; Plastow, Graham S; Tuggle, Christopher K; Guan, Le Luo; Stothard, Paul

    2014-01-01

    One potential role of miRNAs is to buffer variation in gene expression, although conflicting results have been reported. To investigate the buffering role of miRNAs in response to Salmonella infection in pigs, we sequenced miRNA and mRNA in whole blood from 15 pig samples before and after Salmonella challenge. By analyzing inter-individual variation in gene expression patterns, we found that for moderately and lowly expressed genes, putative miRNA targets showed significantly lower expression variance compared with non-miRNA-targets. Expression variance between highly expressed miRNA targets and non-miRNA-targets was not significantly different. Further, miRNA targets demonstrated significantly reduced variance after challenge whereas non-miRNA-targets did not. RNA binding proteins (RBPs) are significantly enriched among the miRNA targets with dramatically reduced variance of expression after Salmonella challenge. Moreover, we found evidence that targets of young (less-conserved) miRNAs showed lower expression variance compared with targets of old (evolutionarily conserved) miRNAs. These findings point to the importance of a buffering effect of miRNAs for relatively lowly expressed genes, and suggest that the reduced expression variation of RBPs may play an important role in response to Salmonella infection.

  11. MicroRNA buffering and altered variance of gene expression in response to Salmonella infection.

    Directory of Open Access Journals (Sweden)

    Hua Bao

    Full Text Available One potential role of miRNAs is to buffer variation in gene expression, although conflicting results have been reported. To investigate the buffering role of miRNAs in response to Salmonella infection in pigs, we sequenced miRNA and mRNA in whole blood from 15 pig samples before and after Salmonella challenge. By analyzing inter-individual variation in gene expression patterns, we found that for moderately and lowly expressed genes, putative miRNA targets showed significantly lower expression variance compared with non-miRNA-targets. Expression variance between highly expressed miRNA targets and non-miRNA-targets was not significantly different. Further, miRNA targets demonstrated significantly reduced variance after challenge whereas non-miRNA-targets did not. RNA binding proteins (RBPs) are significantly enriched among the miRNA targets with dramatically reduced variance of expression after Salmonella challenge. Moreover, we found evidence that targets of young (less-conserved) miRNAs showed lower expression variance compared with targets of old (evolutionarily conserved) miRNAs. These findings point to the importance of a buffering effect of miRNAs for relatively lowly expressed genes, and suggest that the reduced expression variation of RBPs may play an important role in response to Salmonella infection.

  12. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
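
The variance advantage that motivates such bounds can be seen in a toy example. The sketch below is my illustration, not the authors' method: it estimates the Gaussian tail probability P(X > 4) by direct Monte Carlo and by importance sampling with a mean-shifted proposal, whose likelihood ratio is exp(-t·y + t²/2).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n, t = 100_000, 4.0  # t: tail threshold, a rare event under N(0, 1)

# Direct Monte Carlo: average an indicator of the rare event
x = rng.normal(0.0, 1.0, n)
direct = (x > t).astype(float)

# Importance sampling: draw from N(t, 1) and weight by phi(y) / phi(y - t)
y = rng.normal(t, 1.0, n)
is_est = (y > t) * np.exp(-t * y + 0.5 * t * t)

p_true = 0.5 * (1.0 - erf(t / sqrt(2.0)))  # exact tail probability
```

The per-sample variance of the IS estimator ends up orders of magnitude below p(1-p), the per-sample variance of direct Monte Carlo; bounds of the kind derived in this record aim to predict such improvement ratios without running the simulation.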

  13. Discrete velocity computations with stochastic variance reduction of the Boltzmann equation for gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Clarke, Peter; Varghese, Philip; Goldstein, David [ASE-EM Department, UT Austin, 210 East 24th St, C0600, Austin, TX 78712 (United States)

    2014-12-09

    We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.

  14. Quantitative milk genomics: estimation of variance components and prediction of fatty acids in bovine milk

    DEFF Research Database (Denmark)

    Krag, Kristian

    The composition of bovine milk fat, used for human consumption, is far from the recommendations for human fat nutrition. The aim of this PhD was to describe the variance components and prediction probabilities of individual fatty acids (FA) in bovine milk, and to evaluate the possibilities...

  15. The ALHAMBRA survey: Estimation of the clustering signal encoded in the cosmic variance

    Science.gov (United States)

    López-Sanjuan, C.; Cenarro, A. J.; Hernández-Monteagudo, C.; Arnalte-Mur, P.; Varela, J.; Viironen, K.; Fernández-Soto, A.; Martínez, V. J.; Alfaro, E.; Ascaso, B.; del Olmo, A.; Díaz-García, L. A.; Hurtado-Gil, Ll.; Moles, M.; Molino, A.; Perea, J.; Pović, M.; Aguerri, J. A. L.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Castander, F. J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; González Delgado, R. M.; Husillos, C.; Infante, L.; Márquez, I.; Masegosa, J.; Prada, F.; Quintana, J. M.

    2015-10-01

    Aims: The relative cosmic variance (σv) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the σv measured in the ALHAMBRA survey. Methods: We measure the cosmic variance of several galaxy populations selected with B-band luminosity at 0.35 ≤ z ...

  16. Evolution of Robustness and Plasticity under Environmental Fluctuation: Formulation in Terms of Phenotypic Variances

    Science.gov (United States)

    Kaneko, Kunihiko

    2012-09-01

    The characterization of plasticity, robustness, and evolvability, an important issue in biology, is studied in terms of phenotypic fluctuations. By numerically evolving gene regulatory networks, the proportionality between the phenotypic variances of epigenetic and genetic origins is confirmed. The former is given by the variance of the phenotypic fluctuation due to noise in the developmental process; and the latter, by the variance of the phenotypic fluctuation due to genetic mutation. The relationship suggests a link between robustness to noise and to mutation, since robustness can be defined by the sharpness of the distribution of the phenotype. Next, the proportionality between the variances is demonstrated to also hold over expressions of different genes (phenotypic traits) when the system acquires robustness through the evolution. Then, evolution under environmental variation is numerically investigated and it is found that both the adaptability to a novel environment and the robustness are made compatible when a certain degree of phenotypic fluctuations exists due to noise. The highest adaptability is achieved at a certain noise level at which the gene expression dynamics are near the critical state to lose the robustness. Based on our results, we revisit Waddington's canalization and genetic assimilation with regard to the two types of phenotypic fluctuations.

  17. The Evolution of Human Intelligence and the Coefficient of Additive Genetic Variance in Human Brain Size

    Science.gov (United States)

    Miller, Geoffrey F.; Penke, Lars

    2007-01-01

    Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…

  18. Estimates of array and pool-construction variance for planning efficient DNA-pooling genome wide association studies

    Directory of Open Access Journals (Sweden)

    Earp Madalene A

    2011-11-01

    Full Text Available Abstract Background Until recently, genome-wide association studies (GWAS) have been restricted to research groups with the budget necessary to genotype hundreds, if not thousands, of samples. Replacing individual genotyping with genotyping of DNA pools in Phase I of a GWAS has proven successful, and dramatically altered the financial feasibility of this approach. When conducting a pool-based GWAS, how well SNP allele frequency is estimated from a DNA pool will influence a study's power to detect associations. Here we address how to control the variance in allele frequency estimation when DNAs are pooled, and how to plan and conduct the most efficient well-powered pool-based GWAS. Methods By examining the variation in allele frequency estimation on SNP arrays between and within DNA pools we determine how array variance [var(e_array)] and pool-construction variance [var(e_construction)] contribute to the total variance of allele frequency estimation. This information is useful in deciding whether replicate arrays or replicate pools are most useful in reducing variance. Our analysis is based on 27 DNA pools ranging in size from 74 to 446 individual samples, genotyped on a collective total of 128 Illumina beadarrays: 24 1M-Single, 32 1M-Duo, and 72 660-Quad. Results For all three Illumina SNP array types our estimates of var(e_array) were similar, between 3-4 × 10⁻⁴ for normalized data. Var(e_construction) accounted for between 20-40% of pooling variance across 27 pools in normalized data. Conclusions We conclude that relative to var(e_array), var(e_construction) is of less importance in reducing the variance in allele frequency estimation from DNA pools; however, our data suggest that on average it may be more important than previously thought. We have prepared a simple online tool, PoolingPlanner (available at http://www.kchew.ca/PoolingPlanner/), which calculates the effective sample size (ESS) of a DNA pool given a range of replicate array values. ESS can
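
The replicate-arrays versus replicate-pools trade-off described in this record can be illustrated with a small simulation. The variance magnitudes below are assumptions of the same order as those reported above; this is not the PoolingPlanner code.

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.30             # true pool allele frequency (illustrative)
var_construction = 2e-4   # pool-construction variance (assumed magnitude)
var_array = 3.5e-4        # array measurement variance (assumed magnitude)

def estimated_variance(n_pools, n_arrays_per_pool, n_rep=20_000):
    """Empirical variance of the averaged allele-frequency estimate when each
    pool carries one construction error and each array adds its own error."""
    pool_err = rng.normal(0, np.sqrt(var_construction), (n_rep, n_pools, 1))
    array_err = rng.normal(0, np.sqrt(var_array), (n_rep, n_pools, n_arrays_per_pool))
    return (p_true + pool_err + array_err).mean(axis=(1, 2)).var()

v_rep_arrays = estimated_variance(n_pools=1, n_arrays_per_pool=4)  # ~ var_c + var_a/4
v_rep_pools = estimated_variance(n_pools=4, n_arrays_per_pool=1)   # ~ (var_c + var_a)/4
```

Replicate pools average out both error sources while replicate arrays only average out the array error, so which design wins depends on the var(e_construction)/var(e_array) split that the study estimates.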

  19. EMPIRICAL COMPARISON OF VARIOUS APPROXIMATE ESTIMATORS OF THE VARIANCE OF HORVITZ THOMPSON ESTIMATOR UNDER SPLIT METHOD OF SAMPLING

    Directory of Open Access Journals (Sweden)

    Neeraj Tiwari

    2014-06-01

    Full Text Available Under inclusion probability proportional to size (IPPS) sampling, the exact second-order inclusion probabilities are often very difficult to obtain, and hence the variance of the Horvitz-Thompson estimator and the Sen-Yates-Grundy estimate of the variance of the Horvitz-Thompson estimator are difficult to compute. Researchers have therefore developed alternative variance estimators based on approximations of the second-order inclusion probabilities in terms of the first-order inclusion probabilities. We have numerically compared the performance of the various alternative approximate variance estimators using the split method of sample selection.

  20. How large are actor and partner effects of personality on relationship satisfaction? The importance of controlling for shared method variance.

    Science.gov (United States)

    Orth, Ulrich

    2013-10-01

    Previous research suggests that the personality of a relationship partner predicts not only the individual's own satisfaction with the relationship but also the partner's satisfaction. Based on the actor-partner interdependence model, the present research tested whether actor and partner effects of personality are biased when the same method (e.g., self-report) is used for the assessment of personality and relationship satisfaction and, consequently, shared method variance is not controlled for. Data came from 186 couples, of whom both partners provided self- and partner reports on the Big Five personality traits. Depending on the research design, actor effects were larger than partner effects (when using only self-reports), smaller than partner effects (when using only partner reports), or of about the same size as partner effects (when using self- and partner reports). The findings attest to the importance of controlling for shared method variance in dyadic data analysis.

  1. Upper Bound of the Generalized p Value for the Population Variances of Lognormal Distributions with Known Coefficients of Variation

    Directory of Open Access Journals (Sweden)

    Rada Somkhuean

    2017-01-01

    Full Text Available This paper presents an upper bound for each of the generalized p values for testing a single population variance, the difference between two population variances, and the ratio of population variances for lognormal distributions when coefficients of variation are known. For each of the proposed generalized p values, we derive a closed-form expression of the upper bound of the generalized p value. Numerical computations illustrate the theoretical results.

  2. Simulation of Longitudinal Exposure Data with Variance-Covariance Structures Based on Mixed Models

    Science.gov (United States)

    2013-01-01

    subjects (intersubject) and that within subjects (intrasubject). Then, we can model several types of correlations within each subject as necessary, to ... discriminates intersubject and intrasubject variances, by splitting ε_ij into two terms: y_ij = μ + b_i + e_ij, with b_i ~ N(0, σ_b²) and e_ij ~ N(0, σ_e²), (2) where b_i is the ...

        ( 1    ρ    ρ²   ρ³ )
        ( ρ    1    ρ    ρ² )
        ( ρ²   ρ    1    ρ  )
        ( ρ³   ρ²   ρ    1  ).   (5)

    Among the two matrices in Equation (5), the first one defines intersubject variances
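
A model of this type can be simulated directly. The sketch below is an illustration consistent with the structure described in this record, not the report's code: it draws y_ij = μ + b_i + e_ij with a random intercept for the intersubject variance and AR(1)-correlated residuals for the intrasubject part.

```python
import numpy as np

def simulate_longitudinal(n_subj=500, n_time=4, mu=10.0,
                          sigma_b=2.0, sigma_e=1.0, rho=0.6, seed=0):
    """y_ij = mu + b_i + e_ij with b_i ~ N(0, sigma_b^2) (intersubject) and
    e_ij ~ N(0, sigma_e^2) with AR(1) correlation rho^|j-k| (intrasubject)."""
    rng = np.random.default_rng(seed)
    lags = np.abs(np.subtract.outer(np.arange(n_time), np.arange(n_time)))
    L = np.linalg.cholesky(sigma_e**2 * rho**lags)   # factor of the AR(1) covariance
    b = rng.normal(0.0, sigma_b, (n_subj, 1))        # random intercepts
    e = rng.standard_normal((n_subj, n_time)) @ L.T  # correlated residuals
    return mu + b + e

y = simulate_longitudinal()  # per-cell variance is about sigma_b^2 + sigma_e^2
```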

  3. An empirical study of statistical properties of variance partition coefficients for multi-level logistic regression models

    Science.gov (United States)

    Li, J.; Gray, B.R.; Bates, D.M.

    2008-01-01

    Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions for variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for a multi-level logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of methods for estimating VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPCs in scientific data analysis.
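
Of Goldstein's VPC definitions, the latent-variable one has a simple closed form: the level-1 residual of a logistic model is assigned the standard-logistic variance π²/3. A sketch assuming that definition (the paper also studies simulation- and linearization-based definitions, which are not shown here):

```python
import math

def vpc_latent(sigma_u2):
    """Latent-variable VPC for a two-level logistic model: the share of
    latent-response variance attributable to the cluster level, with the
    level-1 variance fixed at pi^2 / 3."""
    return sigma_u2 / (sigma_u2 + math.pi ** 2 / 3)

vpc = vpc_latent(1.0)  # cluster variance 1.0 puts roughly 23% at the cluster level
```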

  4. A Program to Perform Analyses of Variance for Data from Round-robin Experiments

    Science.gov (United States)

    Gleason, John R.

    1976-01-01

    A round-robin experiment involves observation of all possible pairs of subjects within each experimental condition. A program is described which performs analyses of variance for such data. Output includes an ANOVA summary table, exact or quasi-F statistics for tests of various hypotheses, and least squares estimates of relevant parameters.…

  5. The modified Black-Scholes model via constant elasticity of variance for stock options valuation

    Science.gov (United States)

    Edeki, S. O.; Owoloko, E. A.; Ugbebor, O. O.

    2016-02-01

    In this paper, the classical Black-Scholes option pricing model is visited. We present a modified version of the Black-Scholes model via the application of the constant elasticity of variance model (CEVM); in this case, the volatility of the stock price is shown to be a non-constant function unlike the assumption of the classical Black-Scholes model.

  6. A Mean-Variance Diagnosis of the Financial Crisis: International Diversification and Safe Havens

    Directory of Open Access Journals (Sweden)

    Alexander Eptas

    2010-12-01

    Full Text Available We use mean-variance analysis with short selling constraints to diagnose the effects of the recent global financial crisis by evaluating the potential benefits of international diversification in the search for ‘safe havens’. We use stock index data for a sample of developed, advanced-emerging and emerging countries. ‘Text-book’ results are obtained for the pre-crisis analysis with the optimal portfolio for any risk-averse investor being obtained as the tangency portfolio of the All-Country portfolio frontier. During the crisis there is a disjunction between bank lending and stock markets revealed by negative average returns and an absence of any empirical Capital Market Line. Israel and Colombia emerge as the safest havens for any investor during the crisis. For Israel this may reflect the protection afforded by special trade links and diaspora support, while for Colombia we speculate that this reveals the impact on world financial markets of the demand for cocaine.
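
For reference, the unconstrained tangency portfolio behind such an analysis has a closed form, w ∝ Σ⁻¹(μ − r_f·1). The study itself imposes short-selling constraints, which this sketch omits, and the return and covariance numbers below are made up for illustration.

```python
import numpy as np

def tangency_weights(mu, cov, rf=0.0):
    """Unconstrained tangency (maximum Sharpe ratio) portfolio:
    w proportional to inv(Sigma) @ (mu - rf), normalized to sum to 1."""
    excess = np.asarray(mu, float) - rf
    w = np.linalg.solve(np.asarray(cov, float), excess)
    return w / w.sum()

mu = [0.08, 0.05, 0.03]                     # illustrative expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.02, 0.00],
                [0.00, 0.00, 0.01]])        # illustrative covariance matrix
w = tangency_weights(mu, cov, rf=0.01)
```

With short-selling constraints, the same objective is solved numerically over the simplex rather than in closed form.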

  7. Adding a Parameter Increases the Variance of an Estimated Regression Function

    Science.gov (United States)

    Withers, Christopher S.; Nadarajah, Saralees

    2011-01-01

    The linear regression model is one of the most popular models in statistics. It is also one of the simplest models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…

  8. Application of variance reduction technique to nuclear transmutation system driven by accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    In Japan, it is the basic policy to dispose of the high-level radioactive waste arising from spent nuclear fuel in stable deep strata after glass solidification. If the useful elements in the waste can be separated and utilized, resources are effectively used, and high economic efficiency and safety in geological disposal can be expected. The Japan Atomic Energy Research Institute proposed the hybrid type transmutation system, in which a high intensity proton accelerator and a subcritical fast core are combined, or a nuclear reactor optimized exclusively for transmutation. The tungsten target, minor actinide nitride fuel transmutation system and the molten minor actinide chloride salt target fuel transmutation system are outlined. The conceptual figures of both systems are shown. As the method of analysis, Version 2.70 of the Lahet Code System, developed by Los Alamos National Laboratory in the USA, was adopted. When carrying out the analysis of an accelerator-driven subcritical core in the energy range below 20 MeV, variance reduction techniques must be applied. (K.I.)

  9. A comprehensive study of the delay vector variance method for quantification of nonlinearity in dynamical systems

    Science.gov (United States)

    Mandic, D. P.; Ryan, K.; Basu, B.; Pakrashi, V.

    2016-01-01

    Although vibration monitoring is a popular method to monitor and assess dynamic structures, quantification of linearity or nonlinearity of the dynamic responses remains a challenging problem. We investigate the delay vector variance (DVV) method in this regard in a comprehensive manner to establish the degree to which a change in signal nonlinearity can be related to system nonlinearity and how a change in system parameters affects the nonlinearity in the dynamic response of the system. A wide range of theoretical situations are considered in this regard using a single degree of freedom (SDOF) system to obtain numerical benchmarks. A number of experiments are then carried out using a physical SDOF model in the laboratory. Finally, a composite wind turbine blade is tested for different excitations and the dynamic responses are measured at a number of points to extend the investigation to continuum structures. The dynamic responses were measured using accelerometers, strain gauges and a laser Doppler vibrometer. This comprehensive study creates a numerical and experimental benchmark for structural dynamical systems where output-only information is typically available, especially in the context of DVV. The study also allows for comparative analysis between different systems driven by similar inputs. PMID:26909175

  10. Testing for homogeneity of variance in time series: Long memory, wavelets, and the Nile River

    Science.gov (United States)

    Whitcher, B.; Byers, S. D.; Guttorp, P.; Percival, D. B.

    2002-05-01

    We consider the problem of testing for homogeneity of variance in a time series with long memory structure. We demonstrate that a test whose null hypothesis is designed to be white noise can, in fact, be applied, on a scale by scale basis, to the discrete wavelet transform of long memory processes. In particular, we show that evaluating a normalized cumulative sum of squares test statistic using critical levels for the null hypothesis of white noise yields approximately the same null hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process. The point at which the test statistic, using a nondecimated version of the discrete wavelet transform, achieves its maximum value can be used to estimate the time of the unknown variance change. We apply our proposed test statistic on five time series derived from the historical record of Nile River yearly minimum water levels covering 622-1922 A.D., each series exhibiting various degrees of serial correlation including long memory. In the longest subseries, spanning 622-1284 A.D., the test confirms an inhomogeneity of variance at short time scales and identifies the change point around 720 A.D., which coincides closely with the construction of a new device around 715 A.D. for measuring the Nile River. The test also detects a change in variance for a record of only 36 years.
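
A normalized cumulative sum of squares statistic of the kind described in this record can be sketched as follows (shown here in its standard Inclán-Tiao-style form on a raw series rather than on wavelet coefficients; the cutoff values and series are illustrative):

```python
import numpy as np

def css_statistic(x):
    """Normalized cumulative sum of squares statistic:
    D = max_k |P_k - k/N| with P_k = sum_{t<=k} x_t^2 / sum_t x_t^2.
    The location of the maximum estimates the variance change point."""
    x = np.asarray(x, float)
    n = len(x)
    P = np.cumsum(x * x) / np.sum(x * x)
    k = np.arange(1, n + 1)
    D = np.abs(P - k / n)
    return D.max(), int(D.argmax()) + 1  # statistic, 1-indexed change point

rng = np.random.default_rng(3)
# Standard deviation doubles (variance quadruples) halfway through the series
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 2, 500)])
d, change_at = css_statistic(x)
```

In the paper, the statistic is evaluated scale by scale on the (nondecimated) discrete wavelet transform, with critical levels taken from the white-noise null.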

  11. Penerapan Model Multivariat Analisis of Variance dalam Mengukur Persepsi Destinasi Wisata

    Directory of Open Access Journals (Sweden)

    Robert Tang Herman

    2012-05-01

    Full Text Available The purpose of this research is to provide conceptual and infrastructure tools for Dinas Pariwisata DKI Jakarta to improve its capability to evaluate business performance based on market responsiveness. Capturing market responsiveness is the initial step in building an industry mapping. The research began with secondary research to build a data classification system, followed by primary research to collect market data. Secondary data were obtained from Dinas Pariwisata DKI, while primary data were collected through a survey using questionnaires addressed to the whole market. The collected data were then analyzed with multivariate analysis of variance to develop the mapping. The cluster analysis distinguishes the potential market segments based on their responses to the industry classification, establishes the classification system, identifies the gaps and their importance, and addresses other issues related to the role of the mapping system. This mapping system will thus help Dinas Pariwisata DKI improve its capabilities and business performance based on market responsiveness: which market is potential for each specific classification, and what the needs, wants and demands of that classification are. The contribution of this research is a set of recommendations for Dinas Pariwisata DKI to deliver what the market needs and wants across tourism sites based on the resulting classification, to develop market growth estimates, and, in the long term, to improve economic and market growth.

  12. Variance of Fluctuating Radar Echoes from Thermal Noise and Randomly Distributed Scatterers

    Directory of Open Access Journals (Sweden)

    Marco Gabella

    2014-02-01

    Full Text Available In several cases (e.g., thermal noise, weather echoes, …), the incoming signal to a radar receiver can be assumed to be Rayleigh distributed. When estimating the mean power from the inherently fluctuating Rayleigh signals, it is necessary to average either the echo power intensities or the echo logarithmic levels. Until now, it has been accepted that averaging the echo intensities provides smaller variance values for the same number of independent samples. This has been known for decades as the implicit consequence of two works presented in the open literature. The present note derives analytical expressions for the variance of the two typical estimators of the mean echo power, based on echo intensities and on echo logarithmic levels. The derived expressions explicitly show that the variance associated with an average of the echo intensities is lower than that associated with an average of logarithmic levels. Consequently, it is better to average echo intensities rather than logarithms. With the availability of digital IF receivers, which facilitate the averaging of echo power, the result has practical value. As a practical example, the variance obtained from two sets of noise samples is compared with that predicted by the analytical expression derived in this note (Section 3): the measurements and theory show good agreement.
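
The claim is easy to check by simulation. Assuming Rayleigh amplitude, echo power is exponentially distributed; the sketch below is my illustration, not the note's derivation. The e^γ factor (Euler-Mascheroni constant) corrects the bias of the geometric mean so that both estimators target the mean power.

```python
import numpy as np

rng = np.random.default_rng(7)
mean_power, n_samples, n_trials = 1.0, 25, 20_000

# Rayleigh amplitude -> exponentially distributed echo power intensities
I = rng.exponential(mean_power, (n_trials, n_samples))

est_linear = I.mean(axis=1)  # average the intensities
# Average the logarithmic levels, then convert back; exp(euler_gamma)
# corrects the bias of the geometric mean as a mean-power estimator
est_log = np.exp(np.log(I).mean(axis=1) + np.euler_gamma)

var_linear = est_linear.var()  # roughly mean_power^2 / n_samples
var_log = est_log.var()        # larger, by roughly a factor pi^2 / 6
```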

  13. A Test for Mean-Variance Efficiency of a given Portfolio under Restrictions

    NARCIS (Netherlands)

    G.T. Post (Thierry)

    2005-01-01

    textabstractThis study proposes a test for mean-variance efficiency of a given portfolio under general linear investment restrictions. We introduce a new definition of pricing error or “alpha” and as an efficiency measure we propose to use the largest positive alpha for any vertex of the portfolio p

  14. An evaluation of how downscaled climate data represents historical precipitation characteristics beyond the means and variances

    Science.gov (United States)

    Kusangaya, Samuel; Toucher, Michele L. Warburton; van Garderen, Emma Archer; Jewitt, Graham P. W.

    2016-09-01

    Precipitation is the main driver of the hydrological cycle. For climate change impact analysis, the use of downscaled precipitation, amongst other factors, determines the accuracy of modelled runoff. Precipitation is, however, considerably more difficult to model than temperature, largely due to its high spatial and temporal variability and its nonlinear nature. Because of these qualities, a key challenge for water resources management is how to incorporate potentially significant but highly uncertain precipitation characteristics when modelling potential changes in climate, in order to support local management decisions. The research undertaken here aimed to evaluate how well downscaled climate data represent the underlying historical precipitation characteristics beyond the means and variances. Using the uMngeni Catchment in KwaZulu-Natal, South Africa as a case study, the occurrence of rainfall, rainfall threshold events and wet-dry sequences was analysed for the current climate (1961-1999). The number of rain days with daily rainfall > 1 mm, > 5 mm, > 10 mm, > 20 mm and > 40 mm for each of the 10 selected climate models was compared to the number of rain days at 15 rain stations. Results from graphical and statistical analysis indicated that on a monthly basis rain days are overestimated by all climate models. Seasonally, the number of rain days was overestimated in autumn and winter and underestimated in summer and spring. The overall conclusion was that despite the advances in downscaling and the improved spatial scale for a better representation of climate variables, such as rainfall, in hydrological impact studies, downscaled rainfall data still do not simulate well some important rainfall characteristics, such as the number of rain days and wet-dry sequences. This is particularly critical since, whilst for climatologists means and variances might be simulated well in downscaled GCMs, for hydrologists

  15. A comparison of vertical velocity variance measurements from wind profiling radars and sonic anemometers

    Science.gov (United States)

    McCaffrey, Katherine; Bianco, Laura; Johnston, Paul; Wilczak, James M.

    2017-03-01

    Observations of turbulence in the planetary boundary layer are critical for developing and evaluating boundary layer parameterizations in mesoscale numerical weather prediction models. These observations, however, are expensive and rarely profile the entire boundary layer. Using optimized configurations for 449 and 915 MHz wind profiling radars during the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA), improvements have been made to the historical methods of measuring vertical velocity variance through the time series of vertical velocity, as well as the Doppler spectral width. Using six heights of sonic anemometers mounted on a 300 m tower, correlations of up to R² = 0.74 are seen in measurements of the large-scale variances from the radar time series and R² = 0.79 in measurements of small-scale variance from radar spectral widths. The total variance, measured as the sum of the small and large scales, agrees well with the sonic anemometers, with R² = 0.79. Correlation is higher in daytime convective boundary layers than in nighttime stable conditions, when turbulence levels are smaller. Given the good agreement with the in situ measurements, highly resolved profiles up to 2 km can be accurately observed from the 449 MHz radar and 1 km from the 915 MHz radar. This optimized configuration will provide unique observations for the verification and improvement of boundary layer parameterizations in mesoscale models.

  16. Selection for uniformity in livestock by exploiting genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2008-01-01

    In some situations, it is worthwhile to change not only the mean, but also the variability of traits by selection. Genetic variation in residual variance may be utilised to improve uniformity in livestock populations by selection. The objective was to investigate the effects of genetic parameters, b

  17. Implementation of variance-reduction techniques for Monte Carlo nuclear logging calculations with neutron sources

    NARCIS (Netherlands)

    Maucec, M

    2005-01-01

    Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented. Th

  18. Rapid Divergence of Genetic Variance-Covariance Matrix within a Natural Population

    NARCIS (Netherlands)

    Doroszuk, A.; Wojewodzic, M.W.; Gort, G.; Kammenga, J.E.

    2008-01-01

    The matrix of genetic variances and covariances (G matrix) represents the genetic architecture of multiple traits sharing developmental and genetic processes and is central for predicting phenotypic evolution. These predictions require that the G matrix be stable. Yet the timescale and conditions pr

  19. Variance component estimations and allocation of resources for breeding sweetpotato under East African conditions

    NARCIS (Netherlands)

    Grüneberg, W.J.; Abidin, P.E.; Ndolo, P.; Pereira, C.A.; Hermann, M.

    2004-01-01

    In Africa, average sweetpotato storage root yields are low and breeding is considered to be an important factor in increasing production. The objectives of this study were to obtain variance component estimations for sweetpotato in this region of the world and then use these to determine the efficie

  20. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    Science.gov (United States)

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be constructed which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in . The application of these files can be generalized to a variety of communities interested in investing in PV systems.
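
    The portfolio idea can be sketched with the classical minimum-variance weights from mean-variance analysis; the numbers below are invented, and this is a generic Markowitz sketch, not the PACPIM model itself:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical hourly generation (kWh) for 4 PV systems (the study used 24 houses)
gen = rng.normal(loc=[5.0, 4.5, 6.0, 5.5], scale=[1.0, 0.8, 1.5, 1.2], size=(1000, 4))

mu = gen.mean(axis=0)            # expected generation per system
cov = np.cov(gen, rowvar=False)  # covariance captures synergy among neighbors

# Global minimum-variance portfolio: w = inv(Cov) 1 / (1' inv(Cov) 1)
ones = np.ones(4)
w = np.linalg.solve(cov, ones)
w /= w.sum()

port_var = w @ cov @ w
print("weights:", w.round(3), "portfolio variance:", port_var)
```

The portfolio variance is never larger than that of any single system, which is the diversification benefit the paper's data are meant to support.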

  1. Accounting for Variance in Hyperspectral Data Coming from Limitations of the Imaging System

    Science.gov (United States)

    Shurygin, B.; Shestakova, M.; Nikolenko, A.; Badasen, E.; Strakhov, P.

    2016-06-01

    Over the course of the past few years, a number of methods were developed to incorporate hyperspectral imaging specifics into generic data mining techniques traditionally used for hyperspectral data processing. Projection pursuit methods embody the largest class of methods employed for hyperspectral image data reduction; however, they all have certain drawbacks making them either hard to use or inefficient. It has been shown that hyperspectral image (HSI) statistics tend to display "heavy tails" (Manolakis, 2003; Theiler, 2005), rendering most of the projection pursuit methods hard to use. Taking into consideration the magnitude of the described deviations of observed data PDFs from the normal distribution, it is apparent that a priori knowledge of the variance in data caused by the imaging system must be employed in order to efficiently classify objects on HSIs (Kerr, 2015), especially in cases of wildly varying SNR. A number of attempts to describe this variance and compensating techniques have been made (Aiazzi, 2006); however, new data quality standards are not yet set, and accounting for the detector response is made under a large set of assumptions. The current paper addresses the issue of hyperspectral image classification in the context of different variance sources, based on the knowledge of calibration curves (both spectral and radiometric) obtained for each pixel of the imaging camera. A camera produced by ZAO NPO Lepton (Russia) was calibrated and used to obtain a test image. A priori known values of SNR and spectral channel cross-correlation were incorporated into calculating the test statistics used in dimensionality reduction and feature extraction. A modification of the Expectation-Maximization classification algorithm for a non-Gaussian model, as described by Veracini (2010), was further employed. The impact on the false alarm rate of coarsening the calibration data by ignoring non-uniformities was studied. Case study shows both regions of scene-dominated variance and sensor-dominated variance, leading

  2. A Mean-Variance Explanation of FDI Flows to Developing Countries

    DEFF Research Database (Denmark)

    Sunesen, Eva Rytter

    country to another. This will have implications for the way investors evaluate the return and risk of investing abroad. This paper utilises a simple mean-variance optimisation framework where global and regional factors capture the interdependence between countries. The model implies that FDI is driven...

  3. Fluctuation spectra and variances in convective turbulent boundary layers: A reevaluation of old models

    Science.gov (United States)

    Yaglom, A. M.

    1994-02-01

    Most of the existing theoretical models for statistical characteristics of turbulence in convective boundary layers are based on the similarity theory by Monin and Obukhov [Trudy Geofiz. Inst. Akad. Nauk SSSR 24(151), 163 (1954)], and its further refinements. A number of such models were recently reconsidered and partially compared with available data by Kader and Yaglom [J. Fluid Mech. 212, 637 (1990); Turbulence and Coherent Structures (Kluwer, Dordrecht, 1991), p. 387]. However, in these papers the data related to the variances σ2u and σ2v of horizontal velocity components were not considered at all, and the data on horizontal velocity spectra Eu(k) and Ev(k) were used only for a restricted range of not too small wave numbers k. This is connected with findings by Kaimal et al. [Q. J. R. Meteorol. Soc. 98, 563 (1972)] and Panofsky et al. [Boundary-Layer Meteorol. 11, 355 (1977)], who showed that the Monin-Obukhov theory cannot be applied to the velocity variances σ2u and σ2v and to the spectra Eu(k) and Ev(k) in the energy ranges of wave numbers. It is shown in this paper that a simple generalization of the traditional similarity theory, which takes into account the influence of large-scale organized structures, leads to new models of horizontal velocity variances and spectra, which describe the observed deviations of these characteristics from the predictions based on the Monin-Obukhov theory, and agree satisfactorily with the available data. The application of the same approach to the temperature spectrum and variance explains why the observed deviations of the temperature spectrum in convective boundary layers from Monin-Obukhov similarity do not lead to marked violations of the same similarity as applied to the temperature variance σ2t.

  4. Analysis of Volatility Spillover Effect Among Dry Bulk Shipping Markets Based on BEKK Variance Model

    Institute of Scientific and Technical Information of China (English)

    范永辉; 杨华龙; 刘金霞

    2012-01-01

    In view of the interactive relationship among the handysize, panamax and capesize dry bulk shipping markets, the dry bulk freight indexes for the different vessel types issued by the Baltic Exchange were employed, and the volatility spillover effect among the three markets was studied using the BEKK variance model of multivariate GARCH. It is shown that the capesize dry bulk shipping market has a volatility spillover effect on the handysize and panamax markets, while the handysize and panamax markets have no volatility spillover effect on the capesize market, and that there is a two-way volatility spillover effect between the handysize and panamax markets. A Wald test verified the correctness of these inferences. The results can provide references for dry bulk shipping operators seeking to avoid the risk of market volatility.

  5. Researchers' choice of the number and range of levels in experiments affects the resultant variance-accounted-for effect size.

    Science.gov (United States)

    Okada, Kensuke; Hoshino, Takahiro

    2016-08-08

    In psychology, the reporting of variance-accounted-for effect size indices has been recommended and widely accepted through the movement away from null hypothesis significance testing. However, most researchers have paid insufficient attention to the fact that effect sizes depend on the choice of the number of levels and their ranges in experiments. Moreover, the functional form of how, and by how much, this choice affects the resultant effect size has not thus far been studied. We show that the relationship between the population effect size and the number and range of levels is given as an explicit function under reasonable assumptions. Counterintuitively, it is found that researchers may double or halve the resultant effect size simply by suitably choosing the number of levels and their ranges. Through a simulation study, we confirm that this relation also applies to sample effect size indices in much the same way. Therefore, the variance-accounted-for effect size can be substantially affected by basic research design choices such as the number of levels. Simple cross-study comparisons and meta-analyses of variance-accounted-for effect sizes would generally be irrational unless differences in research designs are explicitly considered.
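
    The dependence of a variance-accounted-for index on the range of levels is easy to demonstrate by simulation. A sketch computing eta-squared for the same linear effect at narrow versus wide level ranges (all parameters are illustrative, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(3)

def eta_squared(levels, n_per_level=200, slope=1.0, sigma=2.0):
    # One-way design with a true linear effect y = slope*x + noise;
    # eta^2 = SS_between / SS_total
    y = np.concatenate([slope * x + sigma * rng.standard_normal(n_per_level)
                        for x in levels])
    g = np.repeat(levels, n_per_level).astype(float)
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_between = sum(n_per_level * (y[g == x].mean() - grand) ** 2 for x in levels)
    return ss_between / ss_total

narrow = eta_squared([-1, 0, 1])
wide = eta_squared([-2, 0, 2])   # same phenomenon, wider range of levels
print(narrow, wide)
```

The wider design yields the larger eta-squared even though the underlying slope and noise are identical, which is exactly the comparability problem the abstract raises.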

  6. Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, Kiyoshi; Yamamoto, Toshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [eds.]

    1998-03-01

    'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile a 'Guideline of Monte Carlo Calculation' which will be a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)

  7. Ensemble X-ray variability of Active Galactic Nuclei. II. Excess Variance and updated Structure Function

    CERN Document Server

    Vagnetti, F; Antonucci, M; Paolillo, M; Serafinelli, R

    2016-01-01

    Most investigations of the X-ray variability of active galactic nuclei (AGN) have concentrated on detailed analyses of individual, nearby sources. A relatively small number of studies have treated the ensemble behaviour of the more general AGN population in wider regions of the luminosity-redshift plane. We want to determine the ensemble variability properties of a rich AGN sample, called the Multi-Epoch XMM Serendipitous AGN Sample (MEXSAS), extracted from the latest release of the XMM-Newton Serendipitous Source Catalogue, with redshifts between 0.1 and 5, and X-ray luminosities, in the 0.5-4.5 keV band, between 10^{42} and 10^{47} erg/s. We caution on the use of the normalised excess variance (NXS), noting that it may lead to underestimated variability if used improperly. We use the structure function (SF), updating our previous analysis for a smaller sample. We propose a correction to the NXS variability estimator, taking account of the light-curve duration in the rest-frame, on the basis of the knowle...
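
    The uncorrected NXS estimator referred to above has a standard form, sigma_NXS^2 = (S^2 - <sigma_err^2>) / <x>^2, i.e. the light-curve variance with the mean-square measurement error subtracted, normalised by the squared mean. A minimal sketch on a synthetic light curve (the rest-frame duration correction proposed in the paper is not implemented here, and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical X-ray light curve: count rate with intrinsic variability
# plus known per-point measurement errors
n = 200
true = 10.0 * (1 + 0.2 * np.sin(np.linspace(0, 6, n)))
err = np.full(n, 0.5)                       # 1-sigma measurement errors
rate = true + err * rng.standard_normal(n)

# Normalised excess variance: (S^2 - <sigma_err^2>) / <x>^2
s2 = rate.var(ddof=1)
nxs = (s2 - np.mean(err ** 2)) / rate.mean() ** 2
print(nxs)
```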

  8. On the stability and spatiotemporal variance distribution of salinity in the upper ocean

    Science.gov (United States)

    O'Kane, Terence J.; Monselesan, Didier P.; Maes, Christophe

    2016-06-01

    Despite recent advances in ocean observing arrays and satellite sensors, there remains great uncertainty in the large-scale spatial variations of upper ocean salinity on interannual to decadal timescales. Consonant with both broad-scale surface warming and the amplification of the global hydrological cycle, studies of observed global multidecadal salinity changes have typically focused on the linear response to anthropogenic forcing, but not on salinity variations due to changes in static stability or on variability due to intrinsic ocean or internal climate processes. Here, we examine the static stability and spatiotemporal variability of upper ocean salinity across a hierarchy of models and reanalyses. In particular, we partition the variance into time bands via application of singular spectrum analysis, considering sea surface salinity (SSS), the Brunt-Väisälä frequency (N2), and the ocean salinity stratification in terms of the stabilizing effect due to the haline part of N2 over the upper 500 m. We identify regions of significant coherent SSS variability, either intrinsic to the ocean or in response to the interannually varying atmosphere. Based on consistency across models (CMIP5 and forced experiments) and reanalyses, we identify the stabilizing role of salinity in the tropics, typically associated with heavy precipitation and barrier layer formation, and the role of salinity in destabilizing upper ocean stratification in the subtropical regions where large-scale density compensation typically occurs.

  9. Spatiotemporal characterization of Ensemble Prediction Systems – the Mean-Variance of Logarithms (MVL) diagram

    Directory of Open Access Journals (Sweden)

    J. Fernández

    2008-02-01

    We present a novel approach to characterize and graphically represent the spatiotemporal evolution of ensembles using a simple diagram. To this aim we analyze the fluctuations obtained as differences between each member of the ensemble and the control. The lognormal character of these fluctuations suggests a characterization in terms of the first two moments of the logarithmically transformed values. On the one hand, the mean is associated with the exponential growth in time. On the other hand, the variance accounts for the spatial correlation and localization of fluctuations. In this paper we introduce the MVL (Mean-Variance of Logarithms) diagram to intuitively represent the interplay and evolution of these two quantities. We show that this diagram uncovers useful information about the spatiotemporal dynamics of the ensemble. Some universal features of the diagram are also described, associated either with the nonlinear system or with the ensemble method, and illustrated using both toy models and numerical weather prediction systems.
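
    The two MVL coordinates are simple to compute once the ensemble-minus-control fluctuations are in hand. A sketch on a synthetic ensemble (the sizes, noise model, and interpretation per member are invented for illustration; in a real forecast one point would be computed per lead time):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical ensemble: 20 members and a control, on a 100-point spatial grid
n_members, n_grid = 20, 100
control = rng.standard_normal(n_grid)
members = control + (0.1 * rng.lognormal(0.0, 1.0, size=(n_members, n_grid))
                     * rng.choice([-1, 1], size=(n_members, n_grid)))

# MVL coordinates: mean and variance of log|fluctuation| over the grid
fluct = np.abs(members - control)
logf = np.log(fluct)
M = logf.mean(axis=1)   # mean of logs: tracks exponential growth in time
V = logf.var(axis=1)    # variance of logs: tracks spatial localization
print(M.mean(), V.mean())
```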

  10. Patient safety culture lives in departments and wards: Multilevel partitioning of variance in patient safety culture

    Directory of Open Access Journals (Sweden)

    Hofoss Dag

    2010-03-01

    Background: The aim of the study was to document (1) that patient safety culture scores vary considerably by hospital department and ward, and (2) that much of the variation is across the lowest-level organizational units: the wards. Setting of study: 500-bed Norwegian university hospital, September-December 2006. Methods: Data were collected from 1400 staff by the Norwegian version of the generic Safety Attitudes Questionnaire (SAQ) Short Form 2006. Multilevel analysis by MLwiN version 1.10. Results: Considerable parts of the score variation were at the ward and department levels. More organization-level variation was seen at the ward level than at the department level. Conclusions: Patient safety culture improvement efforts should not be limited to all-hospital interventions or interventions aimed at entire departments, but should include involvement at the ward level, selectively aimed at low-scoring wards. Patient safety culture should be studied as closely to the patient as possible. There may be such a thing as "hospital safety culture", and the variance across hospital departments indicates the existence of department safety cultures. However, neglecting the study of patient safety culture at the ward level will mask important local variations. Safety culture research and improvement should not stop at the lowest formal level of the hospital (wards, out-patient clinics, ERs), but proceed to collect and analyze data on the micro-units within them.
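
    The ward-level share of score variance is the intraclass correlation from a one-way variance-component decomposition. A sketch with simulated scores (the numbers of wards and staff and the variance components are invented, not the SAQ data, and this uses a simple ANOVA estimator rather than MLwiN):

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical safety-culture scores: 40 wards, 30 staff each,
# with a true ward-level component and individual-level noise
n_wards, n_staff = 40, 30
ward_effect = rng.normal(0, 0.5, size=n_wards)          # sd 0.5 between wards
scores = 3.5 + ward_effect[:, None] + rng.normal(0, 1.0, size=(n_wards, n_staff))

# One-way ANOVA estimators of the variance components
within = scores.var(axis=1, ddof=1).mean()
between = scores.mean(axis=1).var(ddof=1) - within / n_staff
icc = between / (between + within)   # share of variance at the ward level
print(f"within={within:.2f} between={between:.2f} ICC={icc:.2f}")
```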

  11. Cup anemometer response to the wind turbulence: measurement of the horizontal wind variance

    Science.gov (United States)

    Yahaya, S.; Frangi, J.

    2004-10-01

    This paper presents some dynamic characteristics of an opto-electronic cup anemometer model in relation to its response to wind turbulence. It is based on experimental data of natural wind turbulence measured both by an ultrasonic anemometer and by two samples of the aforementioned cup anemometer. The distance constants of the latter devices, measured in a wind tunnel, are in good agreement with those determined by the spectral analysis method proposed in this study. In addition, the study shows that the linear compensation of the cup anemometer response, beyond the cutoff frequency, is limited to a given frequency characteristic of the device. Beyond this frequency, the compensation effectiveness relies mainly on the wind characteristics, particularly the direction variability and the horizontal turbulence intensity. Finally, this study demonstrates the potential of fast cup anemometers to measure some turbulence parameters (like wind variance) with errors of the same magnitude as those deriving from the mean speed measurements. This result proves that fast cup anemometers can be used to assess some turbulence parameters, especially for long-term measurements in severe climate conditions (icing, snowing or sandy storm weathers).

  12. Using adapted budget cost variance techniques to measure the impact of Lean – based on empirical findings in Lean case studies

    DEFF Research Database (Denmark)

    Kristensen, Thomas Borup

    2015-01-01

    the requirements of Lean companies. In general all these developments should enhance the measurement of cost improvements on both direct costs and indirect costs made by implementing Lean. The adaptions and techniques presented can be used by other Lean companies, because they are highly applicable and can easily...... excellent Lean performing companies and their development of budget variance analysis techniques. Based on these empirical findings techniques are presented to calculate cost and cost variances in the Lean companies. First of all, a cost variance is developed to calculate the Lean cost benefits within....... This is needed in Lean as the benefits are often created over multiple periods and not just within one budget period. Traditional cost variance techniques are not able to trace these effects. Moreover, Time-driven ABC is adapted to fit the measurement of Lean improvement outside manufacturing and facilitate...

  13. Comparison of variance of weights in meta-analysis models and its correlation with I-square

    Institute of Scientific and Technical Information of China (English)

    石修权; 刘丹; 刘俊

    2011-01-01

    Objective: To explore the difference in the variance of weights (sw2) between the two meta-analysis models and its correlation with the I-square heterogeneity statistic (I2). Methods: Weights and I2 were extracted from meta-analyses published in the past three years, and sw2 was computed. The difference in sw2 was compared between the two models, and the correlation between sw2 and I2 was investigated. Results: sw2 in the random-effects model was lower than that in the fixed-effect model (t=2.739, P=0.015); the correlation coefficient between sw2 and I2 was -0.505 (P=0.039). Conclusion: sw2 in the random-effects model is significantly lower than in the fixed-effect model, and a negative correlation exists between sw2 and I2. Understanding the reasons for this difference and correlation is important for correctly understanding how weights are assigned in the two models, and for the study of meta-analysis and the proper interpretation of its results.
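
    The contrast in weight variance between the two models can be illustrated directly: random-effects weights add the between-study variance tau^2 to every within-study variance, pulling the relative weights toward uniformity and so reducing their variance. A sketch with invented study data, using the standard DerSimonian-Laird tau^2 estimator:

```python
import numpy as np

# Hypothetical effect sizes and within-study variances for k = 8 studies
y = np.array([0.1, 0.8, -0.2, 0.9, 0.4, -0.1, 0.7, 0.3])
v = np.array([0.02, 0.08, 0.05, 0.20, 0.03, 0.10, 0.15, 0.04])

# Fixed-effect weights and pooled estimate
w_fe = 1.0 / v
theta_fe = (w_fe * y).sum() / w_fe.sum()

# DerSimonian-Laird estimate of the between-study variance tau^2
k = len(y)
Q = (w_fe * (y - theta_fe) ** 2).sum()
c = w_fe.sum() - (w_fe ** 2).sum() / w_fe.sum()
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects weights are more uniform, so their variance is smaller
w_re = 1.0 / (v + tau2)
p_fe = w_fe / w_fe.sum()   # relative weights, as reported in forest plots
p_re = w_re / w_re.sum()
print(p_fe.var(), p_re.var())
```

With heterogeneous data (tau^2 > 0) the random-effects relative weights always have the smaller variance, matching the paper's finding.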

  14. Estimation of the proportion of genetic variance explained by molecular markers

    OpenAIRE

    Bearzoti,Eduardo; Vencovsky, Roland

    1998-01-01

    Estimation of the proportion of genetic variance explained by molecular markers (p) plays an important role in basic studies of quantitative traits, as well as in marker-assisted selection (MAS), if the selection index proposed by Lande and Thompson (Genetics 124: 743-756, 1990) is used. Frequently, the coefficient of determination (R2) is used to account for this proportion. In the present study, a simple estimator of p is presented, which is applicable when a multiple regression approach is...

  15. On the origins of signal variance in FMRI of the human midbrain at high field.

    Directory of Open Access Journals (Sweden)

    Robert L Barry

    Functional Magnetic Resonance Imaging (fMRI) in the midbrain at 7 Tesla suffers from unexpectedly low temporal signal-to-noise ratio (TSNR) compared to other brain regions. Various methodologies were used in this study to quantitatively identify causes of the noise and signal differences in midbrain fMRI data. The influence of physiological noise sources was examined using RETROICOR, phase regression analysis, and power spectral analyses of contributions in the respiratory and cardiac frequency ranges. The impact of between-shot phase shifts in 3-D multi-shot sequences was tested using a one-dimensional (1-D) phase navigator approach. Additionally, the effects of shared noise influences between regions that were temporally, but not functionally, correlated with the midbrain (adjacent white matter and anterior cerebellum) were investigated via analyses with regressors of 'no interest'. These attempts to reduce noise did not improve the overall TSNR in the midbrain. In addition, the steady state signal and noise were measured in the midbrain and the visual cortex for resting state data. We observed comparable steady state signals from both the midbrain and the cortex. However, the noise was 2-3 times higher in the midbrain relative to the cortex, confirming that the low TSNR in the midbrain was not due to low signal but rather a result of large signal variance. These temporal variations did not behave as known physiological or other noise sources, and were not mitigated by conventional strategies. Upon further investigation, resting state functional connectivity analysis in the midbrain showed strong intrinsic fluctuations between homologous midbrain regions. These data suggest that the low TSNR in the midbrain may originate from larger signal fluctuations arising from functional connectivity compared to cortex, rather than simply reflecting physiological noise.
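
    TSNR itself is just the temporal mean over the temporal standard deviation of a voxel time series. A sketch reproducing the qualitative finding (comparable signal, higher noise) with synthetic data; the signal and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical resting-state time series (arbitrary units): same steady-state
# signal level, but the "midbrain" voxel has ~2.5x the temporal noise
n_vols = 300
signal = 1000.0
cortex = signal + 10.0 * rng.standard_normal(n_vols)
midbrain = signal + 25.0 * rng.standard_normal(n_vols)

def tsnr(ts):
    # Temporal signal-to-noise ratio: mean over std of the voxel time series
    return ts.mean() / ts.std(ddof=1)

print(tsnr(cortex), tsnr(midbrain))
```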

  16. Using Allan Variance to Analyze the Zero-differenced Stochastic Model Characteristics of GPS

    Directory of Open Access Journals (Sweden)

    ZHANG Xiaohong

    2015-02-01

    The estimation criterion for solving parameters in zero-differenced GPS positioning is that observations obey a Gaussian white noise distribution. However, a number of pioneering studies point out that the white noise is contaminated by satellite errors, propagation errors, station environment errors and so on. Meanwhile, unmodelled errors also have adverse effects. These errors not only undermine the assumed estimation criterion; some non-white noise is also likely to be absorbed by the state parameters. As a result, the accuracy of the estimates is affected. This paper regards white noise, colored noise and unmodelled errors as the ZD stochastic model of GPS. The Allan variance method is then proposed to analyze the a posteriori residuals, which represent the stochastic characteristics of the GPS data. The noise components and parameters are mainly investigated. The result shows that GPS noise behaves as WN plus GM. The phase and pseudorange WN is 2.392 mm and 0.936 m respectively, the GM process noise is 4.450 mm/√s and 0.833 m/√s respectively, and the correlation time is 52.074 s and 14.737 s respectively. It is found that the phase GM component is associated with the satellite, but the rest is associated with the station. A number of analyses indicate that the ZD stochastic model of GPS obeys a non-Gaussian white noise distribution and is to be refined.
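
    The (non-overlapping) Allan variance used for this kind of noise identification is straightforward to compute: AVAR(m) = 0.5 * mean((ybar_{k+1} - ybar_k)^2) over consecutive averages of length m. A sketch on synthetic data showing the characteristic white-noise slope (the GPS numbers in the abstract are not reproduced here):

```python
import numpy as np

def allan_variance(x, m):
    # Non-overlapping Allan variance at averaging factor m
    n = len(x) // m
    ybar = x[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(8)
wn = rng.standard_normal(100_000)                     # white noise
rw = np.cumsum(0.01 * rng.standard_normal(100_000))   # random-walk component

for m in (1, 10, 100):
    print(m, allan_variance(wn, m), allan_variance(wn + rw, m))
# For pure white noise AVAR falls as 1/m; correlated components such as a
# Gauss-Markov or random-walk process make it flatten or rise at large m,
# which is how the noise types are separated
```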

  17. Simultaneous estimation of noise variance and number of peaks in Bayesian spectral deconvolution

    CERN Document Server

    Tokuda, Satoru; Okada, Masato

    2016-01-01

    Heuristic identification of peaks from noisy complex spectra often leads to misunderstanding physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multi-peak spectra into single peaks statistically and is constructed in two steps. The first step is estimating both noise variance and number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multi-peak models are nonlinear and hierarchical. Our framework enables escaping from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy. We discuss a simulation demonstrating how efficient our framework is and show that estimating both noise variance and number of peaks prevents overfitting, overpenalizing, and misun...

  18. Stud identity among female-born youth of color: joint conceptualizations of gender variance and same-sex sexuality.

    Science.gov (United States)

    Kuper, Laura E; Wright, Laurel; Mustanski, Brian

    2014-01-01

    Little is known about the experiences of individuals who may fall under the umbrella of "transgender" but do not transition medically and/or socially. The impact of the increasingly widespread use of the term "transgender" itself also remains unclear. The authors present narratives from four female-born youth of color who report a history of identifying as a "stud." Through analysis of their processes of identity signification, the authors demonstrate how stud identity fuses aspects of gender and sexuality while providing an alternate way of making meaning of gender variance. As such, this identity has important implications for research and organizing centered on an LGBT-based identity framework.

  19. Consequences of Misspecifying Levels of Variance in Cross-Classified Longitudinal Data Structures.

    Science.gov (United States)

    Gilbert, Jennifer; Petscher, Yaacov; Compton, Donald L; Schatschneider, Chris

    2016-01-01

    The purpose of this study was to determine if modeling school and classroom effects was necessary in estimating passage reading growth across elementary grades. Longitudinal data from 8367 students in 2989 classrooms in 202 Reading First schools were used in this study and were obtained from the Progress Monitoring and Reporting Network maintained by the Florida Center for Reading Research. Oral reading fluency (ORF) was assessed four times per school year. Five growth models with varying levels of data (student, classroom, and school) were estimated in order to determine which structures were necessary to correctly partition variance and accurately estimate standard errors for growth parameters. Because the results illustrate that not modeling higher-level clustering inflated lower-level variance estimates and in some cases led to biased standard errors, the authors recommend the practice of including classroom cross-classification and school nesting when predicting longitudinal student outcomes.

  20. Evaluation of area of review variance opportunities for the East Texas field. Annual report

    Energy Technology Data Exchange (ETDEWEB)

    Warner, D.L.; Koederitz, L.F.; Laudon, R.C.; Dunn-Norman, S.

    1995-05-01

    The East Texas oil field, discovered in 1930 and located principally in Gregg and Rusk Counties, is the largest oil field in the conterminous United States. Nearly 33,000 wells are known to have been drilled in the field. The field has been undergoing water injection for pressure maintenance since 1938. As of today, 104 Class II salt-water disposal wells, operated by the East Texas Salt Water Disposal Company, are returning all produced water to the Woodbine producing reservoir. About 69 of the presently existing wells have not been subjected to U.S. Environmental Protection Agency Area-of-Review (AOR) requirements. A study has been carried out of opportunities for variance from AORs for these existing wells and for new wells that will be constructed in the future. The study has been based upon a variance methodology developed at the University of Missouri-Rolla under sponsorship of the American Petroleum Institute and in coordination with the Ground Water Protection Council. The principal technical objective of the study was to determine if reservoir pressure in the Woodbine producing reservoir is sufficiently low so that flow of salt-water from the Woodbine into the Carrizo-Wilcox ground water aquifer is precluded. The study has shown that the Woodbine reservoir is currently underpressured relative to the Carrizo-Wilcox and will remain so over the next 20 years. This information provides a logical basis for a variance for the field from performing AORs.

  1. The effect of errors-in-variables on variance component estimation

    Science.gov (United States)

    Xu, Peiliang

    2016-08-01

    Although total least squares (TLS) has been widely applied, variance components in an errors-in-variables (EIV) model can be inestimable under certain conditions and unstable in the sense that small random errors can result in very large errors in the estimated variance components. We investigate the effect of the random design matrix on variance component (VC) estimation of MINQUE type by treating the design matrix as if it were errors-free, derive the first-order bias of the VC estimate, and construct bias-corrected VC estimators. As a special case, we obtain a bias-corrected estimate for the variance of unit weight. Although TLS methods are statistically rigorous, they can be computationally too expensive. We directly Taylor-expand the nonlinear weighted LS estimate of parameters up to the second-order approximation in terms of the random errors of the design matrix, derive the bias of the estimate, and use it to construct a bias-corrected weighted LS estimate. Bearing in mind that the random errors of the design matrix will create a bias in the normal matrix of the weighted LS estimate, we propose to calibrate the normal matrix by computing and then removing the bias from the normal matrix. As a result, we can obtain a new parameter estimate, which is called the N-calibrated weighted LS estimate. The simulations have shown that (i) errors-in-variables have a significant effect on VC estimation, if they are large/significant but treated as non-random. The variance components can be incorrectly estimated by more than one order of magnitude, depending on the nature of problems and the sizes of EIV; (ii) the bias-corrected VC estimate can effectively remove the bias of the VC estimate. If the signal-to-noise is small, higher order terms may be necessary. 
Nevertheless, since we construct the bias-corrected VC estimate by directly removing the estimated bias from the estimate itself, the simulation results have clearly indicated that there is a great risk to obtain
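
    The core phenomenon, that treating a noisy design matrix as error-free distorts both the parameter estimate and the estimated error variance, can be illustrated with a one-dimensional toy regression. This is a minimal sketch of the effect, not the paper's MINQUE machinery; all parameter values are made up for illustration.

```python
import random

# Toy EIV illustration: regressing y on a noisy copy of the regressor both
# attenuates the slope and inflates the estimated error variance.
random.seed(5)
beta, sigma_e, sigma_a = 2.0, 0.5, 0.8   # true slope, obs. noise, regressor noise
n = 50000
x = [random.uniform(1, 3) for _ in range(n)]            # true (unobserved) regressor
y = [beta * xi + random.gauss(0, sigma_e) for xi in x]
a = [xi + random.gauss(0, sigma_a) for xi in x]         # observed, error-contaminated

# ordinary LS through the origin, treating the observed regressor as error-free
bhat = sum(ai * yi for ai, yi in zip(a, y)) / sum(ai * ai for ai in a)
s2 = sum((yi - bhat * ai) ** 2 for ai, yi in zip(a, y)) / (n - 1)
```

    Here `bhat` falls short of the true slope and `s2` overstates the true error variance `sigma_e**2` severalfold, which is the bias the abstract's corrected estimators aim to remove.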

  2. Foraging trait (co)variances in stickleback evolve deterministically and do not predict trajectories of adaptive diversification.

    Science.gov (United States)

    Berner, Daniel; Stutz, William E; Bolnick, Daniel I

    2010-08-01

    How does natural selection shape the structure of variance and covariance among multiple traits, and how do (co)variances influence trajectories of adaptive diversification? We investigate these pivotal but open questions by comparing phenotypic (co)variances among multiple morphological traits across 18 derived lake-dwelling populations of threespine stickleback, and their marine ancestor. Divergence in (co)variance structure among populations is striking and primarily attributable to shifts in the variance of a single key foraging trait (gill raker length). We then relate this divergence to an ecological selection proxy, to population divergence in trait means, and to the magnitude of sexual dimorphism within populations. This allows us to infer that evolution in (co)variances is linked to variation among habitats in the strength of resource-mediated disruptive selection. We further find that adaptive diversification in trait means among populations has primarily involved shifts in gill raker length. The direction of evolutionary trajectories is unrelated to the major axes of ancestral trait (co)variance. Our study demonstrates that natural selection drives both means and (co)variances deterministically in stickleback, and strongly challenges the view that the (co)variance structure biases the direction of adaptive diversification predictably even over moderate time spans.

  3. A comparison of two methods for detecting abrupt changes in the variance of climatic time series

    CERN Document Server

    Rodionov, Sergei

    2016-01-01

    Two methods for detecting abrupt shifts in the variance, Integrated Cumulative Sum of Squares (ICSS) and Sequential Regime Shift Detector (SRSD), have been compared on both synthetic and observed time series. In Monte Carlo experiments, SRSD outperformed ICSS in the overwhelming majority of the modelled scenarios with different sequences of variance regimes. The SRSD advantage was particularly apparent in the case of outliers in the series. When tested on climatic time series, in most cases both methods detected the same change points in the longer series (252-787 monthly values). The only exception was the Arctic Ocean SST series, when ICSS found one extra change point that appeared to be spurious. As for the shorter time series (66-136 yearly values), ICSS failed to detect any change points even when the variance doubled or tripled from one regime to another. For these time series, SRSD is recommended. Interestingly, all the climatic time series tested, from the Arctic to the Tropics, had one thing in commo...
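
    The centered cumulative-sum-of-squares statistic at the heart of ICSS can be sketched in a few lines. This is a simplified single-change-point version (the full Inclan-Tiao algorithm iterates and tests significance); the synthetic series is invented for illustration.

```python
import random

def icss_change_point(x):
    """Most likely variance change point via the centered cumulative sum of
    squares, the core statistic of the ICSS approach."""
    n = len(x)
    c = [0.0]
    for v in x:
        c.append(c[-1] + v * v)
    total = c[-1]
    # D_k = C_k / C_n - k / n; the largest |D_k| marks the candidate shift
    d = [c[k] / total - k / n for k in range(1, n)]
    k_star = max(range(len(d)), key=lambda i: abs(d[i])) + 1
    return k_star, d[k_star - 1]

random.seed(0)
# variance shifts (sd 1 -> 3) halfway through a 600-point series
series = [random.gauss(0, 1) for _ in range(300)] + \
         [random.gauss(0, 3) for _ in range(300)]
k, d_at_k = icss_change_point(series)
```

    With a ninefold variance increase at observation 300, the detected `k` lands close to the true break.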

  4. Limited variance control in statistical low thrust guidance analysis. [stochastic algorithm for SEP comet Encke flyby mission

    Science.gov (United States)

    Jacobson, R. A.

    1975-01-01

    Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.

  5. Label-free imaging of developing vasculature in zebrafish with phase variance optical coherence microscopy

    Science.gov (United States)

    Chen, Yu; Fingler, Jeff; Trinh, Le A.; Fraser, Scott E.

    2016-03-01

    A phase variance optical coherence microscope (pvOCM) has been created to visualize blood flow in the vasculature of zebrafish embryos, without using exogenous labels. The pvOCM imaging system has axial and lateral resolutions of 2 μm in tissue, and imaging depth of more than 100 μm. Imaging of 2-5 days post-fertilization zebrafish embryos identified the detailed structures of somites, spinal cord, gut and notochord based on intensity contrast. Visualization of the blood flow in the aorta, veins and intersegmental vessels was achieved with phase variance contrast. The pvOCM vasculature images were confirmed with corresponding fluorescence microscopy of a zebrafish transgene that labels the vasculature with green fluorescent protein. The pvOCM images also revealed functional information of the blood flow activities that is crucial for the study of vascular development.

  6. An Investigation of the Sequential Sampling Method for Crossdocking Simulation Output Variance Reduction

    CERN Document Server

    Adewunmi, Adrian; Byrne, Mike

    2008-01-01

    This paper investigates the reduction of variance associated with a simulation output performance measure, using the Sequential Sampling method while applying minimum simulation replications, for a class of JIT (Just in Time) warehousing system called crossdocking. We initially used the Sequential Sampling method to attain a desired 95% confidence interval half width of plus/minus 0.5 for our chosen performance measure (Total usage cost, given the mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). From our results, we achieved a 95% confidence interval half width of plus/minus 2.8 for our chosen performance measure (Total usage cost, with an average mean value of 115,000 pounds). However, the Sequential Sampling method requires a huge number of simulation replications to reduce variance for our simulation output value to the target level. Arena (version 11) simulation software was used to conduct this study.
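
    The stopping rule behind sequential sampling, keep adding replications until the confidence-interval half-width of the mean falls below a target, can be sketched as follows. The replication function here is a hypothetical noisy cost output, not the crossdocking model from the paper.

```python
import math
import random

def sequential_sample(run_replication, half_width_target, z=1.96,
                      min_reps=10, max_reps=100000):
    """Add replications one at a time until the 95% confidence-interval
    half-width of the sample mean drops below the target."""
    data = [run_replication() for _ in range(min_reps)]
    while True:
        n = len(data)
        mean = sum(data) / n
        var = sum((x - mean) ** 2 for x in data) / (n - 1)
        half_width = z * math.sqrt(var / n)
        if half_width <= half_width_target or n >= max_reps:
            return mean, half_width, n
        data.append(run_replication())

random.seed(1)
# hypothetical replication output: a noisy cost measure with true mean 100
mean, hw, n = sequential_sample(lambda: random.gauss(100, 5), half_width_target=0.5)
```

    The required number of replications grows with the output variance and quadratically as the target half-width shrinks, which is why the paper's tight target demanded so many runs.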

  7. Impulse Noise Filtering Using Robust Pixel-Wise S-Estimate of Variance

    Directory of Open Access Journals (Sweden)

    Nemanja I. Petrović

    2010-01-01

    Full Text Available A novel method for impulse noise suppression in images, based on the pixel-wise S-estimator, is introduced. The S-estimator is an alternative to the well-known robust estimate of variance MAD, which does not require a location estimate and hence is more appropriate for the asymmetric distributions frequently encountered in transient regions of the image. The proposed computationally efficient modification of the robust S-estimator of variance is successfully utilized in an iterative scheme for impulse noise filtering. Another novelty is that the proposed iterative algorithm has an automatic stopping criterion, also based on the pixel-wise S-estimator. The performance of the proposed filter is independent of the image content or noise concentration. The proposed filter outperforms all state-of-the-art filters included in a large comparison, both objectively (in terms of PSNR and MSSIM) and subjectively.
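
    The MAD baseline that the S-estimator improves on is easy to demonstrate: on a pixel neighbourhood containing salt-and-pepper impulses, a robust scale estimate stays near the clean-signal spread while the classical standard deviation is blown up by the outliers. The pixel values below are invented for illustration.

```python
import statistics

def mad_scale(x):
    """Median absolute deviation, scaled by 1.4826 so it is consistent
    with the standard deviation under a normal model."""
    med = statistics.median(x)
    return 1.4826 * statistics.median([abs(v - med) for v in x])

# a pixel neighbourhood with two salt-and-pepper impulses (255 and 0)
pixels = [100, 101, 99, 102, 98, 100, 255, 0, 101]
robust = mad_scale(pixels)
classic = statistics.stdev(pixels)
```

    Here `robust` stays near 1.5 while `classic` exceeds 60, which is why robust scale estimates are the natural building block for impulse noise filters.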

  8. A NOVEL MULTICLASS SUPPORT VECTOR MACHINE ALGORITHM USING MEAN REVERSION AND COEFFICIENT OF VARIANCE

    Directory of Open Access Journals (Sweden)

    Bhusana Premanode

    2013-01-01

    Full Text Available Inaccuracy of the kernel function used in a Support Vector Machine (SVM) can be observed when it is simulated with nonlinear and stationary datasets. To minimise the error, we propose a new multiclass SVM model that uses a mean-reversion and coefficient-of-variance algorithm to partition and classify imbalanced datasets. By introducing a series of test statistics, simulations showed that the proposed algorithm outperformed an SVM model without the multiclass structure.

  9. Efficient Option Pricing in Crisis Based on Dynamic Elasticity of Variance Model

    Directory of Open Access Journals (Sweden)

    Congyin Fan

    2016-01-01

    Full Text Available Market crashes often appear in daily trading activities and such instantaneously occurring events affect stock prices greatly. In an unstable market, the volatility of financial assets changes sharply, so classical option pricing models with a constant volatility coefficient, or even a stochastic volatility term, are not accurate. To overcome this problem, in this paper we put forward a dynamic elasticity of variance (DEV) model by extending the classical constant elasticity of variance (CEV) model. Further, the partial differential equation (PDE) for the price of a European call option is derived by using the risk-neutral pricing principle, and the numerical solution of the PDE is calculated by the Crank-Nicolson scheme. In addition, the Kalman filtering method is employed to estimate the volatility term of our model. Our main finding is that the prices of European call options under our model are more accurate than those calculated by the Black-Scholes model and the CEV model in financial crashes.
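
    The CEV dynamics underlying this family of models can be priced by plain Monte Carlo instead of the paper's Crank-Nicolson PDE solver. The sketch below uses Euler discretisation under assumed parameter values; with elasticity `gamma = 1` the dynamics reduce to geometric Brownian motion, so the estimate should land near the Black-Scholes price (about 10.45 for these inputs).

```python
import math
import random

def cev_call_mc(s0, k, r, sigma, gamma, t, n_paths=10000, n_steps=50, seed=6):
    """European call under CEV dynamics dS = r*S*dt + sigma*S**gamma*dW,
    priced by Euler-discretised risk-neutral Monte Carlo."""
    random.seed(seed)
    dt = t / n_steps
    sqdt = math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            s += r * s * dt + sigma * (s ** gamma) * random.gauss(0.0, 1.0) * sqdt
            s = max(s, 1e-8)   # keep the discretised path nonnegative
        payoff_sum += max(s - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

# gamma = 1 recovers geometric Brownian motion (Black-Scholes dynamics)
price = cev_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, gamma=1.0, t=1.0)
```

    Letting `gamma` vary with market state is the step from CEV to the paper's DEV model.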

  10. Role of Patient and Practice Characteristics in Variance of Treatment Quality in Type 2 Diabetes between General Practices

    NARCIS (Netherlands)

    Cho, Yeon Young; Sidorenkov, Grigory; Denig, Petra

    2016-01-01

    Background Accounting for justifiable variance is important for fair comparisons of treatment quality. The variance between general practices in treatment quality of type 2 diabetes (T2DM) patients may be attributed to the underlying patient population and practice characteristics. The objective of

  11. Conversations across Meaning Variance

    Science.gov (United States)

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  12. SIMULATION STUDY OF GENERALIZED MINIMUM VARIANCE CONTROL FOR AN EXTRACTION TURBINE

    Institute of Scientific and Technical Information of China (English)

    Shi Xiaoping

    2003-01-01

    In an extraction turbine, the pressure of the extracted steam and the rotational speed of the rotor are two important controlled quantities. The traditional linear state feedback control method is not accurate enough to control the two quantities because of the nonlinearity and coupling present. A generalized minimum variance control method is studied for an extraction turbine. Firstly, a nonlinear mathematical model of the control system for the two quantities is transformed into a linear system with two white noises. Secondly, a generalized minimum variance control law is applied to the system. A comparative simulation is done. The simulation results indicate that both the precision and the dynamic quality of the regulating system under the new control law are better than those under the state feedback control law.

  13. Modelling Changes in the Unconditional Variance of Long Stock Return Series

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long return series. For the purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta (2011......). The latter component is modelled by incorporating smooth changes so that the unconditional variance is allowed to evolve slowly over time. Statistical inference is used for specifying the parameterization of the time-varying component by applying a sequence of Lagrange multiplier tests. The model building...... show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecast accuracy of the new model over the GJR-GARCH model at all...

  14. What structural length scales can be detected by the spectral variance of a microscope image?

    OpenAIRE

    Cherkezyan, Lusik; Subramanian, Hariharan; Backman, Vadim

    2014-01-01

    A spectroscopic microscope, configured to detect interference spectra of backscattered light in the far zone, quantifies the statistics of refractive-index (RI) distribution via the spectral variance (Σ̃2) of the acquired bright-field image. Its sensitivity to subtle structural changes within weakly scattering, label-free media at subdiffraction scales shows great promise in fields from material science to medical diagnostics. We further investigate the length-scale sensitivity of Σ̃ and reve...

  15. A Study on the Chain Ratio-Type Estimator of Finite Population Variance

    Directory of Open Access Journals (Sweden)

    Yunusa Olufadi

    2014-01-01

    Full Text Available We suggest an estimator using two auxiliary variables for the estimation of the unknown population variance. The bias and the mean square error of the proposed estimator are obtained to the first order of approximation. In addition, the problem is extended to a two-phase sampling scheme. After theoretical comparisons, as an illustration, a numerical comparison is carried out to examine the performance of the suggested estimator against several existing estimators.
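
    The single-auxiliary building block of such chain estimators is the classical ratio estimator of a population variance: scale the sample variance of y by the ratio of the known to the sampled variance of an auxiliary x. A minimal sketch on a synthetic population (all values invented; this is the one-auxiliary special case, not the paper's chain estimator):

```python
import random
import statistics

def ratio_variance_estimator(y_sample, x_sample, x_pop_variance):
    """Ratio estimator of the population variance of y, using one
    auxiliary variable x whose population variance is known."""
    return statistics.variance(y_sample) * x_pop_variance / statistics.variance(x_sample)

random.seed(7)
# synthetic population where y is strongly correlated with the auxiliary x
x_pop = [random.gauss(50, 10) for _ in range(10000)]
y_pop = [2 * xv + random.gauss(0, 5) for xv in x_pop]
idx = random.sample(range(10000), 200)
est = ratio_variance_estimator([y_pop[i] for i in idx],
                               [x_pop[i] for i in idx],
                               statistics.pvariance(x_pop))
true_var = statistics.pvariance(y_pop)
```

    Because x and y are strongly correlated, the sampled variance ratio is stable, and the estimator tracks the true population variance well even from a small sample.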

  16. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Vidal-Codina, F., E-mail: fvidal@mit.edu [Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Nguyen, N.C., E-mail: cuongng@mit.edu [Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk [Mathematical Institute, University of Oxford, Oxford (United Kingdom); Peraire, J., E-mail: peraire@mit.edu [Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2015-09-15

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.

  17. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    Science.gov (United States)

    Vidal-Codina, F.; Nguyen, N. C.; Giles, M. B.; Peraire, J.

    2015-09-01

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
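
    The multilevel variance reduction idea described above can be illustrated with a two-level toy estimator. The `fine` and `coarse` functions below are invented stand-ins (not the HDG and reduced-basis models of the paper): a cheap surrogate absorbs most of the sampling effort, and only the low-variance correction term needs expensive evaluations.

```python
import math
import random
import statistics

random.seed(2)

def fine(u):
    """hypothetical high-fidelity model output (expensive in practice)"""
    return math.sin(u) + 0.1 * u * u

def coarse(u):
    """hypothetical cheap surrogate, strongly correlated with the fine model"""
    return u

# Two-level identity: E[fine] = E[coarse] + E[fine - coarse].  The correction
# term has small variance, so it needs far fewer (expensive) fine evaluations.
n_coarse, n_corr = 20000, 500
coarse_mean = statistics.fmean(coarse(random.uniform(-1, 1)) for _ in range(n_coarse))
us = [random.uniform(-1, 1) for _ in range(n_corr)]
corr_mean = statistics.fmean(fine(u) - coarse(u) for u in us)
estimate = coarse_mean + corr_mean   # E[fine] over U(-1, 1) is 1/30
```

    Extending this telescoping sum over several model fidelities, with sample sizes chosen from error estimates, gives the multilevel scheme of the paper.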

  18. Sample correlations of infinite variance time series models: an empirical and theoretical study

    Directory of Open Access Journals (Sweden)

    Jason Cohen

    1998-01-01

    Full Text Available When the elements of a stationary ergodic time series have finite variance, the sample correlation function converges (with probability 1) to the theoretical correlation function. What happens in the case where the variance is infinite? In certain cases, the sample correlation function converges in probability to a constant, but not always. If, within a class of heavy-tailed time series, the sample correlation functions do not converge to a constant, then more care must be taken in making inferences and in model selection on the basis of sample autocorrelations. We experimented with simulating various heavy-tailed stationary sequences in an attempt to understand what causes the sample correlation function to converge, or not to converge, to a constant. In two new cases, namely the sum of two independent moving averages and a random permutation scheme, we are able to provide theoretical explanations for a random limit of the sample autocorrelation function as the sample grows.
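
    A small experiment in the spirit of the paper: for an i.i.d. Cauchy sequence (infinite variance), the lag-1 sample autocorrelation still collapses toward zero, even though individual draws can be enormous. The simulation setup is our own illustration, not one of the paper's model classes.

```python
import math
import random

def sample_acf(x, lag):
    """Sample autocorrelation at the given lag (mean-corrected)."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

random.seed(3)
# i.i.d. standard Cauchy draws (infinite variance), via the inverse CDF
cauchy = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(20000)]
acf1 = sample_acf(cauchy, 1)
biggest = max(abs(v) for v in cauchy)
```

    The denominator is dominated by the largest squared observation, which is why the ratio is driven toward zero for independent draws; dependent heavy-tailed models can behave very differently, as the abstract warns.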

  19. Proportionality between variances in gene expression induced by noise and mutation: consequence of evolutionary robustness

    Directory of Open Access Journals (Sweden)

    Kaneko Kunihiko

    2011-01-01

    Full Text Available Abstract Background Characterization of the robustness and plasticity of phenotypes is a basic issue in evolutionary and developmental biology. Robustness and plasticity concern the changeability of a biological system against external perturbations. The perturbations are either genetic, i.e., due to mutations in genes in the population, or epigenetic, i.e., due to noise during development or environmental variations. Thus, the variances of phenotypes due to genetic and epigenetic perturbations provide quantitative measures for such changeability during evolution and development, respectively. Results Using numerical models simulating the evolutionary changes in the gene regulation network required to achieve a particular expression pattern, we first confirmed that gene expression dynamics robust to mutation evolved in the presence of a sufficient level of transcriptional noise. Under such conditions, the two types of variances in the gene expression levels, i.e. those due to mutations to the gene regulation network and those due to noise in gene expression dynamics, were found to be proportional over a number of genes. The fraction of such genes with a common proportionality coefficient increased with an increase in the robustness of the evolved network. This proportionality was generally confirmed, also in the presence of environmental fluctuations and sexual recombination in diploids, and was explained by an evolutionary robustness hypothesis, in which an evolved robust system suppresses the so-called error catastrophe - the destabilization of the single-peaked distribution in gene expression levels. Experimental evidence for the proportionality of the variances over genes is also discussed. Conclusions The proportionality between the genetic and epigenetic variances of phenotypes implies a correlation between the robustness (or plasticity) against genetic changes and against noise in development, and also suggests that

  20. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    Science.gov (United States)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, or defining the optimum number and location of additional boreholes. Quite a lot of research has been carried out in this regard, in which, for most of the proposed algorithms, kriging variance minimization as a criterion for uncertainty assessment is defined as the objective function and the problem is solved through optimization methods. Although kriging variance implementation is known to have many advantages in objective function definition, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering the local variability in boundary uncertainty assessment, the application of combined variance is investigated to define the objective function. Thus, in order to verify the applicability of the proposed objective function, it is used to locate the additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function have caused the algorithm output to be sensitive to the variations of grade, the domain's boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms proved that, for the presented case, the application of particle swarm optimization is more appropriate than simulated annealing.
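
    The simulated annealing side of the comparison can be sketched generically: accept every improving move, accept worsening moves with probability exp(-delta/T), and cool the temperature geometrically. The one-dimensional cost function below is an invented stand-in for a borehole-placement objective, not the combined-variance objective of the paper.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        n_iter=5000, seed=9):
    """Generic simulated annealing: always accept improvements, accept a
    worse move with probability exp(-delta/T), and cool T geometrically."""
    random.seed(seed)
    x, c = x0, cost(x0)
    best, best_cost = x, c
    t = t0
    for _ in range(n_iter):
        y = neighbor(x)
        cy = cost(y)
        if cy < c or random.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_cost:
                best, best_cost = x, c
        t *= cooling
    return best, best_cost

# toy stand-in for a borehole-placement objective: 1-D and multimodal
cost = lambda x: (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)
best, best_cost = simulated_annealing(cost, lambda x: x + random.gauss(0.0, 0.3),
                                      x0=0.0)
```

    Particle swarm optimization replaces this single cooled walker with a population sharing information about the best positions found, which is what made it less prone to stalling in the study's case.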

  1. On efficiency of mean-variance based portfolio selection in DC pension schemes

    OpenAIRE

    Elena Vigna

    2010-01-01

    We consider the portfolio selection problem in the accumulation phase of a defined contribution (DC) pension scheme. We solve the mean-variance portfolio selection problem using the embedding technique pioneered by Zhou and Li (2000) and show that it is equivalent to a target-based optimization problem, consisting in the minimization of a quadratic loss function. We support the use of the target-based approach in DC pension funds for three reasons. Firstly, it transforms the difficult problem...
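
    A simplified piece of the mean-variance machinery referenced above is the closed-form global minimum-variance portfolio for two assets, with weights proportional to the inverse covariance matrix applied to a vector of ones. The covariance numbers below are hypothetical; this is not the paper's target-based formulation.

```python
def min_variance_weights(cov):
    """Global minimum-variance weights for two assets: w proportional to
    inverse(Sigma) times a vector of ones, normalised to sum to one."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    raw = [(d - b) / det, (a - c) / det]   # rows of inverse(Sigma) summed
    total = sum(raw)
    return [w / total for w in raw]

# hypothetical annualised covariance matrix for a stock fund and a bond fund
cov = [[0.04, 0.004], [0.004, 0.01]]
w = min_variance_weights(cov)
port_var = (w[0] ** 2 * cov[0][0] + w[1] ** 2 * cov[1][1]
            + 2 * w[0] * w[1] * cov[0][1])
```

    The resulting portfolio variance is below that of either asset alone, which is the diversification effect the accumulation-phase problem exploits.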

  2. Forecasting the variance and return of Mexican financial series with symmetric GARCH models

    Directory of Open Access Journals (Sweden)

    Fátima Irina VILLALBA PADILLA

    2013-03-01

    Full Text Available The present research shows the application of generalized autoregressive conditional heteroskedasticity (GARCH) models in order to forecast the variance and return of the IPC, the EMBI, the weighted-average government funding rate, the fix exchange rate and the Mexican oil reference, as important tools for investment decisions. Forecasts in-sample and out-of-sample are performed. The period covered runs from 2005 to 2011.
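
    The variance forecast of the workhorse GARCH(1,1) model follows a simple recursion: one step ahead uses the last squared return and last conditional variance directly, and further steps revert geometrically toward the unconditional variance. The parameter values below are hypothetical, chosen in the range typical of daily return series.

```python
def garch_forecast(omega, alpha, beta, last_r2, last_var, horizon):
    """k-step-ahead conditional variance forecasts for a GARCH(1,1):
    sigma2[t+1] = omega + alpha * r2[t] + beta * sigma2[t]; later steps
    revert toward the unconditional variance omega / (1 - alpha - beta)."""
    forecasts = []
    var = omega + alpha * last_r2 + beta * last_var   # one step ahead
    forecasts.append(var)
    for _ in range(horizon - 1):
        var = omega + (alpha + beta) * var            # multi-step recursion
        forecasts.append(var)
    return forecasts

# hypothetical parameters and state (last squared return 4.0, last variance 1.5)
f = garch_forecast(omega=0.02, alpha=0.08, beta=0.90,
                   last_r2=4.0, last_var=1.5, horizon=50)
long_run = 0.02 / (1 - 0.08 - 0.90)   # unconditional variance, about 1.0
```

    Because `alpha + beta` is close to one, shocks decay slowly and the forecast path glides back toward `long_run` over many steps, the persistence pattern GARCH is used to capture.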

  3. Fertilization success and the estimation of genetic variance in sperm competitiveness.

    Science.gov (United States)

    Garcia-Gonzalez, Francisco; Evans, Jonathan P

    2011-03-01

    A key question in sexual selection is whether the ability of males to fertilize eggs under sperm competition exhibits heritable genetic variation. Addressing this question poses a significant problem, however, because a male's ability to win fertilizations ultimately depends on the competitive ability of rival males. Attempts to partition genetic variance in sperm competitiveness, as estimated from measures of fertilization success, must therefore account for stochastic effects due to the random sampling of rival sperm competitors. In this contribution, we suggest a practical solution to this problem. We advocate the use of simple cross-classified breeding designs for partitioning sources of genetic variance in sperm competitiveness and fertilization success and show how these designs can be used to avoid stochastic effects due to the random sampling of rival sperm competitors. We illustrate the utility of these approaches by simulating various scenarios for estimating genetic parameters in sperm competitiveness, and show that the probability of detecting additive genetic variance in this trait is restored when stochastic effects due to the random sampling of rival sperm competitors are controlled. Our findings have important implications for the study of the evolutionary maintenance of polyandry.

  4. Avaliação de quatro alternativas de análise de experimentos em látice quadrado, quanto à estimação de componentes de variância Evaluation of four alternatives of analysis of experiments in square lattice, with emphasis on estimate of variance component

    Directory of Open Access Journals (Sweden)

    HEYDER DINIZ SILVA

    2000-01-01

    Full Text Available The efficiency of four alternatives for the analysis of experiments in square lattice, with respect to the estimation of variance components, was studied through computational simulation of data: (i) intrablock analysis of the lattice with adjusted treatments (first analysis); (ii) analysis of the lattice as a randomized complete block design (second analysis); (iii) intrablock analysis of the lattice with non-adjusted treatments (third analysis); (iv) analysis of the lattice as a randomized complete block design, using the adjusted treatment means obtained from the analysis with recovery of interblock information, taking as the residual mean square the average effective variance of that same lattice analysis (fourth analysis). The results show that the intrablock model of lattice analysis should be used to estimate variance components whenever the relative efficiency of the lattice design, compared to the randomized complete block design, exceeds 100%; otherwise, the randomized complete block model should be chosen. The fourth alternative of analysis is not recommended in either situation.

  5. Restriction of Variance Interaction Effects and Their Importance for International Business Research

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Nielsen, Bo Bernhard

    2015-01-01

    hypothesis that is very common in international business (IB) research: the restricted variance (RV) hypothesis. Specifically, we describe the nature of an RV interaction and its evidentiary requirements. We also offer several IB examples involving interactions that could have been supported with RV...... arguments. Our hope is that IB researchers can use this paper to bolster their arguments for interaction hypotheses by explaining them in terms of RV....

  6. Increasing genetic variance of body mass index during the Swedish obesity epidemic

    DEFF Research Database (Denmark)

    Rokholm, Benjamin; Silventoinen, Karri; Tynelius, Per;

    2011-01-01

    There is no doubt that the dramatic worldwide increase in obesity prevalence is due to changes in environmental factors. However, twin and family studies suggest that genetic differences are responsible for the major part of the variation in adiposity within populations. Recent studies show...... that the genetic effects on body mass index (BMI) may be stronger when combined with presumed risk factors for obesity. We tested the hypothesis that the genetic variance of BMI has increased during the obesity epidemic....

  7. On the Design of Attitude-Heading Reference Systems Using the Allan Variance.

    Science.gov (United States)

    Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis

    2016-04-01

    The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
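
    The Allan variance itself is compact to compute for evenly sampled data: average the signal over clusters of m samples and take half the mean squared difference of adjacent cluster averages. A minimal sketch on synthetic white noise (for which the Allan variance should fall as 1/m, the signature of angle-random-walk-type sensor noise):

```python
import random

def allan_variance(samples, m):
    """Overlapping Allan variance at cluster size m for evenly sampled data:
    half the mean squared difference of adjacent m-sample cluster averages."""
    n = len(samples)
    means = [sum(samples[i:i + m]) / m for i in range(n - m + 1)]
    diffs = [means[i + m] - means[i] for i in range(len(means) - m)]
    return sum(d * d for d in diffs) / (2 * len(diffs))

random.seed(4)
white = [random.gauss(0.0, 1.0) for _ in range(20000)]
av1 = allan_variance(white, 1)      # near the sample variance (about 1)
av100 = allan_variance(white, 100)  # about 100x smaller for white noise
```

    In practice the slope of the log-log Allan variance curve over a range of cluster sizes is what identifies each noise term (white noise, bias instability, random walk) in the sensor-error model.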

  8. Statistics of Dark Matter Substructure: III. Halo-to-Halo Variance

    CERN Document Server

    Jiang, Fangzhou

    2016-01-01

    We present a study of unprecedented statistical power regarding the halo-to-halo variance of dark matter substructure. Using a combination of N-body simulations and a semi-analytical model, we investigate the variance in subhalo mass fractions and subhalo occupation numbers, with an emphasis on how these statistics scale with halo formation time. We demonstrate that the subhalo mass fraction, f_sub, is mainly a function of halo formation time, with earlier forming haloes having less substructure. At fixed formation redshift, the average f_sub is virtually independent of halo mass, and the mass dependence of f_sub is therefore mainly a manifestation of more massive haloes assembling later. We compare observational constraints on f_sub from gravitational lensing to our model predictions and simulation results. Although the inferred f_sub are substantially higher than the median LCDM predictions, they fall within the 95th percentile due to halo-to-halo variance. We show that while the halo occupation distributio...

  9. FINITE VARIANCE OF THE NUMBER OF STATIONARY POINTS OF A GAUSSIAN RANDOM FIELD

    OpenAIRE

    Estrade, Anne; Fournier, Julie

    2015-01-01

    Let X be a real-valued stationary Gaussian random field defined on $R^d$ (d ≥ 1), with almost every realization of class $C^2$. This paper is concerned with the random variable giving the number of points in $T$ (a compact set of $R^d$) where the gradient $X'$ takes a fixed value $v \in R^d$, $N_{X'}(T, v) = \#\{t \in T : X'(t) = v\}$. More precisely, it deals with the finiteness of the variance of $N_{X'}(T, v)$, under some non-degeneracy hypothesis on $X$. For d = 1, the so-called "Geman con...

  10. Simulation of longitudinal exposure data with variance-covariance structures based on mixed models.

    Science.gov (United States)

    Song, Peng; Xue, Jianping; Li, Zhilin

    2013-03-01

    Longitudinal data are important in exposure and risk assessments, especially for pollutants with long half-lives in the human body and where chronic exposure to current environmental levels raises concern for human health effects. It is usually difficult and expensive to obtain large longitudinal data sets for human exposure studies. This article reports a new simulation method for generating longitudinal data with flexible numbers of subjects and days. Mixed models are used to describe the variance-covariance structures of the input longitudinal data. Based on the estimated model parameters, simulated data are generated with statistical characteristics similar to those of the input data. Three criteria are used to judge similarity: the overall mean and standard deviation, the percentages of the variance components, and the average autocorrelation coefficients. After a discussion of mixed models, a simulation procedure is presented and numerical results are shown for one human exposure study. Simulations of three sets of exposure data successfully meet the above criteria. In particular, the simulations always retain the correct weights of inter- and intrasubject variances present in the input data, and autocorrelations are also well reproduced. Compared with other simulation algorithms, the new method stores more information about the overall input distribution and can therefore satisfy multiple statistical criteria simultaneously. In addition, it can generate values from numerous data sources and simulates continuous observed variables better than existing methods. The method also offers flexible options in both the modeling and simulation steps to accommodate various user requirements.
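
    A toy version of the idea, not the authors' code, can be sketched with a random-intercept mixed model; the variance components `s_between` and `s_within` below are hypothetical, and the sketch checks that they are recovered from the simulated panel:

```python
import numpy as np

# Hypothetical random-intercept mixed model for a log-exposure measure:
#   y[i, t] = mu + b[i] + e[i, t],  b ~ N(0, s_between^2), e ~ N(0, s_within^2)
mu, s_between, s_within = 2.0, 0.6, 0.8
subjects, days = 200, 30

rng = np.random.default_rng(6)
b = rng.normal(0.0, s_between, size=(subjects, 1))   # intersubject effects
e = rng.normal(0.0, s_within, size=(subjects, days)) # intrasubject noise
y = mu + b + e

# Method-of-moments estimates of the two variance components.
within_hat = np.mean(np.var(y, axis=1, ddof=1))                   # intrasubject
between_hat = np.var(y.mean(axis=1), ddof=1) - within_hat / days  # intersubject

print(between_hat, within_hat)
```

    A simulation that preserves the inter-/intrasubject split in this sense can then be resampled with any desired number of subjects and days.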

  11. Estimation of internal organ motion-induced variance in radiation dose in non-gated radiotherapy

    Science.gov (United States)

    Zhou, Sumin; Zhu, Xiaofeng; Zhang, Mutian; Zheng, Dandan; Lei, Yu; Li, Sicong; Bennion, Nathan; Verma, Vivek; Zhen, Weining; Enke, Charles

    2016-12-01

    In the delivery of non-gated radiotherapy (RT), owing to intra-fraction organ motion, a certain degree of RT dose uncertainty is present. Herein, we propose a novel mathematical algorithm to estimate the mean and variance of RT dose that is delivered without gating. These parameters are specific to individual internal organ motion, dependent on individual treatment plans, and relevant to the RT delivery process. This algorithm uses images from a patient’s 4D simulation study to model the actual patient internal organ motion during RT delivery. All necessary dose rate calculations are performed in fixed patient internal organ motion states. The analytical and deterministic formulae of mean and variance in dose from non-gated RT were derived directly via statistical averaging of the calculated dose rate over possible random internal organ motion initial phases, and did not require constructing relevant histograms. All results are expressed in dose rate Fourier transform coefficients for computational efficiency. Exact solutions are provided to simplified, yet still clinically relevant, cases. Results from a volumetric-modulated arc therapy (VMAT) patient case are also presented. The results obtained from our mathematical algorithm can aid clinical decisions by providing information regarding both mean and variance of radiation dose to non-gated patients prior to RT delivery.

  12. CONSTANT ELASTICITY OF VARIANCE MODEL AND ANALYTICAL STRATEGIES FOR ANNUITY CONTRACTS

    Institute of Scientific and Technical Information of China (English)

    XIAO Jian-wu; YIN Shao-hua; QIN Cheng-lin

    2006-01-01

    The constant elasticity of variance (CEV) model was constructed to study a defined-contribution pension plan whose benefits were paid as an annuity. The paper also shows how the Legendre transform and dual theory can be applied to find an optimal investment policy over a participant's whole life in the pension plan. Finally, two explicit solutions for the exponential utility function in the two periods (before and after retirement) are derived, and hence the optimal investment strategies in the two periods are obtained.

  13. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    Science.gov (United States)

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-01

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including the detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite this assumption, most instruments are known to exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were examined in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. The results show that heteroskedasticity should be considered during development of new techniques; even with moderate uncertainty (30%) in the variance function, weighted regression still outperforms unweighted regression. We recommend the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
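
    The recommended weighting scheme can be illustrated with a small weighted-regression sketch. The calibration line, power-model parameters, and concentration levels below are all hypothetical, chosen only to show how 1/variance weights enter the fit:

```python
import numpy as np

# Hypothetical calibration: signal = 3 * conc + 1, with noise whose standard
# deviation grows with the signal (power model: sd = a * signal**b).
rng = np.random.default_rng(1)
conc = np.repeat(np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0]), 20)
true_signal = 3.0 * conc + 1.0
sd = 0.2 * true_signal ** 0.7           # power model of variance
signal = true_signal + rng.normal(0.0, sd)

# Unweighted ordinary least squares.
X = np.column_stack([conc, np.ones_like(conc)])
beta_ols, *_ = np.linalg.lstsq(X, signal, rcond=None)

# Weighted least squares with weights 1/variance from the power model,
# applied by scaling each row by sqrt(weight).
w = 1.0 / sd**2
Xw = X * np.sqrt(w)[:, None]
beta_wls, *_ = np.linalg.lstsq(Xw, signal * np.sqrt(w), rcond=None)

print("OLS slope/intercept:", beta_ols)
print("WLS slope/intercept:", beta_wls)
```

    Both fits recover the slope, but the weighted fit gives low-concentration points their proper influence, which is what improves detection-limit and uncertainty estimates.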

  14. Estimation of sensible heat, water vapor, and CO2 fluxes using the flux-variance method.

    Science.gov (United States)

    Hsieh, Cheng-I; Lai, Mei-Chun; Hsia, Yue-Joe; Chang, Tsang-Jung

    2008-07-01

    This study investigated the flux-variance relationships of temperature, humidity, and CO2, and examined the performance of this method for predicting sensible heat (H), water vapor (LE), and CO2 (F_CO2) fluxes against eddy-covariance flux measurements at three different ecosystems: grassland, paddy rice field, and forest. The H and LE estimates were in good agreement with the measurements at all three sites. The prediction accuracy of LE could be improved by around 15% if the predictions were obtained by the flux-variance method in conjunction with measured sensible heat fluxes. Moreover, the paddy rice field was found to be a special case in which water vapor follows the flux-variance relation better than heat does. The CO2 flux predictions, however, varied from poor to fair among the three sites; this is attributed to the complicated distribution of CO2 sources and sinks. Our results also showed that heat and water vapor were transported with the same efficiency above the grassland and rice paddy, while in the forest heat was transported 20% more efficiently than water vapor.

  15. Study on long memory of river runoff based on rescaled variance analysis

    Institute of Scientific and Technical Information of China (English)

    孙东永; 黄强; 王义民

    2011-01-01

    To account for the complex stochastic and undulant behavior of river runoff, rescaled-variance (V/S) analysis was introduced to study its long memory. The Hurst indexes of the annual runoff series at the Lanzhou and Guide stations on the upper Yellow River were computed by V/S analysis and compared with those from rescaled-range (R/S) analysis, and tests of stability and short-term correlation were conducted. The results show that the V/S Hurst indexes at both stations are greater than 0.5, indicating relatively strong long memory at both stations. Compared with R/S analysis, V/S analysis is less vulnerable to the influence of short-term correlation and is a robust and effective fractal method, providing a new way of thinking about long-memory analysis of river runoff.
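
    For illustration, the classical rescaled-range (R/S) estimate that the record uses as a baseline can be sketched as follows; this is the R/S variant, not the paper's V/S statistic, run on synthetic white noise where the Hurst exponent should be near 0.5:

```python
import numpy as np

def rs_hurst(x, window_sizes):
    """Estimate the Hurst exponent of series x via rescaled-range (R/S) analysis.

    For each window size n the series is split into blocks; in each block the
    range of the cumulative mean-adjusted sum is divided by the block's standard
    deviation. H is the slope of log(R/S) versus log(n).
    """
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            block = x[start:start + n]
            z = np.cumsum(block - block.mean())   # cumulative deviations
            s = block.std(ddof=1)
            if s > 0:
                rs_vals.append((z.max() - z.min()) / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(2)
white = rng.standard_normal(10_000)   # no long memory: H should be near 0.5
h = rs_hurst(white, [16, 32, 64, 128, 256, 512])
print(h)
```

    A series with H clearly above 0.5 (as reported for the annual runoff at both stations) would indicate long memory; the V/S statistic replaces the range in the numerator with a variance to reduce sensitivity to short-term correlation.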

  16. 40 CFR 142.64 - Variances and exemptions from the requirements of part 141, subpart H-Filtration and Disinfection.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variances and exemptions from the requirements of part 141, subpart H-Filtration and Disinfection. 142.64 Section 142.64 Protection of...—Filtration and Disinfection. (a) No variances from the requirements in part 141, subpart H are permitted....

  17. What do differences between multi-voxel and univariate analysis mean? How subject-, voxel-, and trial-level variance impact fMRI analysis.

    Science.gov (United States)

    Davis, Tyler; LaRocque, Karen F; Mumford, Jeanette A; Norman, Kenneth A; Wagner, Anthony D; Poldrack, Russell A

    2014-08-15

    Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results.

  18. Numerical estimation of the noncompartmental pharmacokinetic parameters variance and coefficient of variation of residence times.

    Science.gov (United States)

    Purves, R D

    1994-02-01

    Noncompartmental investigation of the distribution of residence times from concentration-time data requires estimation of the second noncentral moment (AUM2C) as well as the area under the curve (AUC) and the area under the moment curve (AUMC). The accuracy and precision of 12 numerical integration methods for AUM2C were tested on simulated noisy data sets representing bolus, oral, and infusion concentration-time profiles. The root-mean-squared errors given by the best methods were only slightly larger than the corresponding errors in the estimation of AUC and AUMC. AUM2C extrapolated "tail" areas as estimated from a log-linear fit are biased, but the bias is minimized by application of a simple correction factor. The precision of estimates of variance of residence times (VRT) can be severely impaired by the variance of the extrapolated tails. VRT is therefore not a useful parameter unless the tail areas are small or can be shown to be estimated with little error. Estimates of the coefficient of variation of residence times (CVRT) and its square (CV2) are robust in the sense of being little affected by errors in the concentration values. The accuracy of estimates of CVRT obtained by optimum numerical methods is equal to or better than that of AUC and mean residence time estimates, even in data sets with large tail areas.
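
    The moment quantities discussed above can be sketched numerically for a hypothetical mono-exponential bolus profile, for which the exact values are known (MRT = 1/k, VRT = 1/k², so CVRT = 1); the profile and grid below are illustrative, not from the paper:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule, written out explicitly for portability."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Hypothetical bolus profile C(t) = C0 * exp(-k * t).
k, C0 = 0.5, 100.0
t = np.linspace(0.0, 40.0, 4001)   # dense grid; tail beyond t = 40 is negligible
C = C0 * np.exp(-k * t)

auc = trapz(C, t)                  # area under the curve (AUC)
aumc = trapz(t * C, t)             # area under the moment curve (AUMC)
aum2c = trapz(t**2 * C, t)         # second noncentral moment (AUM2C)

mrt = aumc / auc                   # mean residence time
vrt = aum2c / auc - mrt**2         # variance of residence times (VRT)
cvrt = np.sqrt(vrt) / mrt          # coefficient of variation (CVRT)
print(mrt, vrt, cvrt)
```

    With noisy, sparsely sampled data the extrapolated tail areas enter all three integrals, which is exactly where the paper finds VRT to lose precision while CVRT remains robust.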

  19. Enhancement of high-energy distribution tail in Monte Carlo semiconductor simulations using a Variance Reduction Scheme

    Directory of Open Access Journals (Sweden)

    Vincenza Di Stefano

    2009-11-01

    The Multicomb variance-reduction technique has been introduced into direct Monte Carlo simulation of submicrometric semiconductor devices. The method has been implemented for bulk silicon. The simulations show that the statistical variance of hot electrons is reduced at some computational cost. The method is efficient and easy to implement in existing device simulators.

  20. Bayesian Variance Component Estimation Using the Inverse-Gamma Class of Priors in a Nested Generalizability Design

    Science.gov (United States)

    Arenson, Ethan A.

    2009-01-01

    One of the problems inherent in variance component estimation centers around inadmissible estimates. Such estimates occur when there is more variability within groups, relative to between groups. This paper suggests a Bayesian approach to resolve inadmissibility by placing noninformative inverse-gamma priors on the variance components, and…

  1. The Relation of Hand and Arm Configuration Variances while Tracking Geometric Figures in Parkinson's Disease: Aspects for Rehabilitation

    Science.gov (United States)

    Keresztenyi, Zoltan; Cesari, Paola; Fazekas, Gabor; Laczko, Jozsef

    2009-01-01

    Variances of drawing arm movements between patients with Parkinson's disease and healthy controls were compared. The aim was to determine whether differences in joint synergies or individual joint rotations affect the endpoint (hand position) variance. Joint and endpoint coordinates were measured while participants performed drawing tasks.…

  2. Spinocerebellar ataxias in the Netherlands - Prevalence and age at onset variance analysis

    NARCIS (Netherlands)

    van de Warrenburg, BPC; Sinke, RJ; Verschuuren-Bemelmans, CC; Scheffer, H; Brunt, ER; Ippel, PF; Maat-Kievit, JA; Dooijes, D; Notermans, NC; Lindhout, D; Knoers, NVAM; Kremer, HPH

    2002-01-01

    Background. International prevalence estimates of autosomal dominant cerebellar ataxias (ADCA) vary from 0.3 to 2.0 per 100,000. The prevalence of ADCA in the Netherlands is unknown. Fifteen genetic loci have been identified (SCA-1-8, SCA-10-14, SCA-16, and SCA-17) and nine of the corresponding gene

  3. Spinocerebellar ataxias in the Netherlands: prevalence and age at onset variance analysis.

    NARCIS (Netherlands)

    Warrenburg, B.P.C. van de; Sinke, R.J.; Verschuuren-Bemelmans, C.C.; Scheffer, H.; Brunt, E.R.; Ippel, P.F.; Maat-Kievit, J.A.; Dooijes, D.; Notermans, S.L.H.; Lindhout, D.; Knoers, N.V.A.M.; Kremer, H.P.H.

    2002-01-01

    BACKGROUND: International prevalence estimates of autosomal dominant cerebellar ataxias (ADCA) vary from 0.3 to 2.0 per 100,000. The prevalence of ADCA in the Netherlands is unknown. Fifteen genetic loci have been identified (SCA-1-8, SCA-10-14, SCA-16, and SCA-17) and nine of the corresponding gene

  4. 基于方差分析的批发分销商销售状况研究%Sales of Wholesale Distributors Based on the Analysis of Variance

    Institute of Scientific and Technical Information of China (English)

    侍冰雪; 朱家明; 魏慧茹; 朱韶东

    2015-01-01

    Aiming at the sales of a wholesale distributor, we first analyze the distribution of sales across six major categories of merchandise and use the coefficient-of-variation method to study the relationships among the sales of the various categories. Single-factor and multi-factor analysis of variance are then applied to identify the main factors affecting the sales of each category and the effect of sales region, sales channel, and their interaction on the sales of the six categories, and a paired-data test is used to identify the main commodity categories that influence sales channels and sales regions. Finally, starting from the relationship between risk and return, we construct a decision tree to provide business strategies for the wholesale distributor.

  5. Age at onset variance analysis in spinocerebellar ataxias : a study in a Dutch-French cohort

    NARCIS (Netherlands)

    Warrenburg, B.P.C. van de; Hendriks, H.; Durr, A.; Zuijlen, M.C.A. van; Stevanin, G.; Camuzat, A.; Sinke, R.J.; Brice, A.; Kremer, H.P.H.

    2005-01-01

    In dominant spinocerebellar ataxias (SCAs), the issue of whether non-CAG dependent factors contribute to onset age remains unsettled. Data on SCA genotype, onset age, normal/expanded CAG repeat length, sex of the patient and transmitting parent, and family details were available from 802 patients. B

  6. Differential Analysis of Port Logistics Capability and Regional Economy Among China’s Coastal Clusters Based on Analysis of Variance

    Institute of Scientific and Technical Information of China (English)

    肖汉斌; 邓萍; 路世青

    2014-01-01

    Aiming at the recent imbalance between port logistics capability and regional economic development among China's five coastal economic regions, a differential analysis of port logistics capability and regional economy among China's coastal port clusters was conducted by one-way analysis of variance, and theoretical implications of the findings for port policy makers are proposed. Because the weighting of factors in indicator-system approaches can be rather subjective, structural equation modeling results were used instead to calculate the capability factors of port logistics and the regional economy, thereby extending the theoretical research on structural equation modeling.

  7. Exact formulas for the variance of several balance indices under the Yule model.

    Science.gov (United States)

    Cardona, Gabriel; Mir, Arnau; Rosselló, Francesc

    2013-12-01

    One of the main applications of balance indices is in tests of null models of evolutionary processes. The knowledge of an exact formula for a statistic of a balance index, holding for any number n of leaves, is necessary in order to use this statistic in tests of this kind involving trees of any size. In this paper we obtain exact formulas for the variance under the Yule model of the Sackin, the Colless, and the total cophenetic indices of binary rooted phylogenetic trees with n leaves.
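
    The role such exact formulas play can be illustrated by checking the known Yule-model expectation of the Sackin index, E[S_n] = 2n(H_n - 1) with H_n the n-th harmonic number, against simulation. This sketch is not the paper's code (the paper derives variances, which need the same machinery):

```python
import numpy as np

def yule_sackin(n, rng):
    """Sackin index (sum of leaf depths) of one Yule tree with n leaves.

    Yule model: grow from a root cherry by splitting a uniformly chosen leaf.
    """
    depths = [1, 1]                  # root cherry: two leaves at depth 1
    while len(depths) < n:
        i = rng.integers(len(depths))
        d = depths.pop(i)
        depths += [d + 1, d + 1]     # chosen leaf becomes an internal node
    return sum(depths)

rng = np.random.default_rng(5)
n, reps = 20, 20_000
samples = np.array([yule_sackin(n, rng) for _ in range(reps)])

# Known expectation under the Yule model: E[S_n] = 2n(H_n - 1).
expected = 2 * n * (sum(1.0 / k for k in range(1, n + 1)) - 1)
print(samples.mean(), expected)
```

    A test of a null model compares an observed index against this expectation, and the paper's exact variance formulas supply the matching spread, so no simulation is needed in practice.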

  8. Characterizing the Variance of Mechanical Properties of Sunflower Bark for Biocomposite Applications

    Directory of Open Access Journals (Sweden)

    Shengnan Sun

    2013-12-01

    Characterizing the variance of the material properties of natural fibers is of growing concern owing to a wide range of new engineering applications for these fibers. The aim of this study was to evaluate the variance of the Young's modulus of sunflower bark by (i) determining its statistical probability distribution, (ii) investigating its relationship with relative humidity, and (iii) characterizing its relationship with the specimen extraction location. To this end, specimens were extracted at three different locations along the stems and preconditioned in three different relative-humidity environments. The χ2-test was used for hypothesis testing with normal, Weibull, and log-normal distributions. Results show that the Young's modulus follows a normal distribution. Two-sample t-test results reveal that the Young's modulus of sunflower stem bark strongly depends on the conditioning relative humidity and the specimen extraction location; it significantly decreased as the relative humidity increased and significantly increased from the bottom to the top of the stem. The correlation coefficients between the Young's modulus values at different relative humidities and specimen extraction locations were determined, and their calculation shows a linear relation between the Young's modulus and the relative humidity for a given location.

  9. Experiment Data Uncertainty Analysis Scale of Sound Velocity Measurement Based on the Minimum Variance

    Institute of Scientific and Technical Information of China (English)

    徐仰彬

    2013-01-01

    Uncertainty analysis of physics experiments is a key and difficult part of the college physics laboratory course. This paper analyzes the measurement uncertainty of the wavelength in a sound-velocity measurement experiment and shows how the number of measurements influences the result when the measurement data are treated as an unbiased estimate and as a biased estimate.

  10. Simultaneous Estimation of Noise Variance and Number of Peaks in Bayesian Spectral Deconvolution

    Science.gov (United States)

    Tokuda, Satoru; Nagata, Kenji; Okada, Masato

    2017-02-01

    The heuristic identification of peaks from noisy complex spectra often leads to misunderstanding of the physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multipeak spectra into single peaks statistically and consists of two steps. The first step is estimating both the noise variance and the number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multipeak models are nonlinear and hierarchical. Our framework enables the escape from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy via the multiple histogram method. We discuss a simulation demonstrating how efficient our framework is and show that estimating both the noise variance and the number of peaks prevents overfitting, overpenalizing, and misunderstanding the precision of parameter estimation.

  11. GARCH based artificial neural networks in forecasting conditional variance of stock returns

    Directory of Open Access Journals (Sweden)

    Josip Arnerić

    2014-12-01

    Portfolio managers, option traders and market makers are all interested in volatility forecasting in order to obtain higher profits or less risky positions. Because volatility is time-varying in high-frequency data and periods of high volatility tend to cluster, the most popular models for volatility are GARCH-type models, which can account for the excess kurtosis and asymmetric effects of financial time series. A standard GARCH(1,1) model usually indicates high persistence in the conditional variance, which may originate from structural changes. The first objective of this paper is to develop a parsimonious neural network (NN) model that can capture the nonlinear relationship between past return innovations and the conditional variance; the goal is a neural network with an appropriate recurrent connection in the context of nonlinear ARMA models, i.e., the Jordan neural network (JNN). The second objective is to determine whether the JNN outperforms the standard GARCH model; out-of-sample forecasts of the JNN and the GARCH model are compared to determine their predictive accuracy. The data set consists of returns of the CROBEX index daily closing prices obtained from the Zagreb Stock Exchange. The results indicate that the selected JNN(1,1,1) model has superior performance compared with the standard GARCH(1,1) model. The contribution of this paper lies in determining the appropriate NN that is comparable to the standard GARCH(1,1) model and in its application to forecasting the conditional variance of stock returns. Moreover, from the econometric perspective, NN models serve as a semi-parametric method that combines the flexibility of nonparametric methods with the interpretability of the parameters of parametric methods.
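
    The GARCH(1,1) conditional-variance recursion that the record takes as its baseline can be sketched as follows; the parameter values are hypothetical, chosen so that the persistence alpha + beta is close to 1, as the abstract notes is typical for financial returns:

```python
import numpy as np

# GARCH(1,1): sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
omega, alpha, beta = 0.05, 0.08, 0.90   # hypothetical, persistence = 0.98

rng = np.random.default_rng(3)
n = 5000
r = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    # Conditional variance responds to last period's squared return innovation
    # and decays back toward its long-run level.
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

uncond = omega / (1.0 - alpha - beta)      # long-run (unconditional) variance
print("unconditional variance:", uncond, "sample variance:", r.var())
```

    A recurrent network such as the JNN replaces this fixed affine recursion with a learned nonlinear map from past innovations and past variance to the next conditional variance.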

  12. Variance estimation of modal parameters from output-only and input/output subspace-based system identification

    Science.gov (United States)

    Mellinger, Philippe; Döhler, Michael; Mevel, Laurent

    2016-09-01

    An important step in the operational modal analysis of a structure is to infer its dynamic behavior from its modal parameters. They can be estimated by various modal identification algorithms that fit a theoretical model to measured data. When output-only data is available, i.e. measured responses of the structure, frequencies, damping ratios and mode shapes can be identified assuming that ambient sources like wind or traffic excite the system sufficiently. When also input data is available, i.e. signals used to excite the structure, input/output identification algorithms are used. The use of input information usually provides better modal estimates in a desired frequency range. While the identification of the modal mass is not considered in this paper, we focus on the estimation of the frequencies, damping ratios and mode shapes, relevant for example for modal analysis during in-flight monitoring of aircraft. When identifying the modal parameters from noisy measurement data, the information on their uncertainty is most relevant. In this paper, new variance computation schemes for modal parameters are developed for four subspace algorithms, including output-only and input/output methods, as well as data-driven and covariance-driven methods. For the input/output methods, the known inputs are considered as realizations of a stochastic process. Based on Monte Carlo validations, the quality of identification, accuracy of variance estimations and sensor noise robustness are discussed. Finally these algorithms are applied on real measured data obtained during vibration tests of an aircraft.

  13. Exploiting azimuthal variance of scatterers for multiple-look SAR recognition

    Science.gov (United States)

    Bhanu, Bir; Jones, Grinnell, III

    2002-08-01

    The focus of this paper is optimizing the recognition of vehicles in Synthetic Aperture Radar (SAR) imagery using multiple SAR recognizers at different look angles. The variance of SAR scattering center locations with target azimuth leads to recognition system results at different azimuths that are independent, even for small azimuth deltas. Extensive experimental recognition results are presented in terms of receiver operating characteristic (ROC) curves to show the effects of multiple look angles on recognition performance for MSTAR vehicle targets with configuration variants, articulation, and occlusion.

  14. Effect of captivity on genetic variance for five traits in the large milkweed bug (Oncopeltus fasciatus).

    Science.gov (United States)

    Rodríguez-Clark, K M

    2004-07-01

    Understanding the changes in genetic variance which may occur as populations move from nature into captivity has been considered important when populations in captivity are used as models of wild ones. However, the inherent significance of these changes has not previously been appreciated in a conservation context: are the methods aimed at founding captive populations with gene diversity representative of natural populations likely also to capture representative quantitative genetic variation? Here, I investigate changes in heritability and a less traditional measure, evolvability, between nature and captivity for the large milkweed bug, Oncopeltus fasciatus, to address this question. Founders were collected from a 100-km transect across the north-eastern US, and five traits (wing colour, pronotum colour, wing length, early fecundity and later fecundity) were recorded for founders and for their offspring during two generations in captivity. Analyses reveal significant heritable variation for some life history and morphological traits in both environments, with comparable absolute levels of evolvability across all traits (0-30%). Randomization tests show that while changes in heritability and total phenotypic variance were highly variable, additive genetic variance and evolvability remained stable across the environmental transition in the three morphological traits (changing 1-2% or less), while they declined significantly in the two life-history traits (5-8%). Although it is unclear whether the declines were due to selection or gene-by-environment interactions (or both), such declines do not appear inevitable: captive populations with small numbers of founders may contain substantial amounts of the evolvability found in nature, at least for some traits.

  15. A Mean-Variance Explanation of FDI Flows to Developing Countries

    DEFF Research Database (Denmark)

    Sunesen, Eva Rytter

    An important feature of the world economy is the close global and regional integration due to strong trade and investment relations among countries. The high degree of integration between countries is likely to give rise to business cycle synchronisation, in which case shocks will spill over from one country to another. This will have implications for the way investors evaluate the return and risk of investing abroad. This paper utilises a simple mean-variance optimisation framework where global and regional factors capture the interdependence between countries. The model implies that FDI is driven

  16. Determining the Cascade of Passive Scalar Variance in the Lower Stratosphere

    Science.gov (United States)

    Lindborg, Erik; Cho, John Y. N.

    2000-12-01

    Using aircraft data from 7630 commercial flights, we determine the flux of temperature and ozone variance from large to small scales in the lower stratosphere. The relation that we use for this purpose is a form of the classical Yaglom relation [A. M. Yaglom, Dokl. Akad. Nauk SSSR 69, 743 (1949)] for the third-order scalar-velocity structure function. We find that this function is negative and that it depends linearly on separation distance in the mesoscale range for temperature as well as ozone.

  17. Detecting parent of origin and dominant QTL in a two-generation commercial poultry pedigree using variance component methodology

    Directory of Open Access Journals (Sweden)

    Haley Christopher S

    2009-01-01

    Introduction: Variance component QTL methodology was used to analyse three candidate regions on chicken chromosomes 1, 4 and 5 for dominant and parent-of-origin QTL effects. Data were available for bodyweight and conformation score measured at 40 days from a two-generation commercial broiler dam line. One hundred dams were nested in 46 sires, with phenotypes and genotypes on 2708 offspring. Linear models were constructed to simultaneously estimate fixed, polygenic and QTL effects. Different genetic models were compared using likelihood ratio test statistics derived from the comparison of full with reduced or null models. Empirical thresholds were derived by permutation analysis. Results: Dominant QTL were found for bodyweight on chicken chromosome 4 and for bodyweight and conformation score on chicken chromosome 5. Suggestive evidence for a maternally expressed QTL for bodyweight and conformation score was found on chromosome 1, in a region corresponding to orthologous imprinted regions in the human and mouse. Conclusion: Initial results suggest that variance component analysis can be applied within commercial populations for the direct detection of segregating dominant and parent-of-origin effects.

  18. Numerical solution of continuous-time mean-variance portfolio selection with nonlinear constraints

    Science.gov (United States)

    Yan, Wei; Li, Shurong

    2010-03-01

An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices described by jump-diffusion processes. Some investment strategies are restricted in the study. This M-V portfolio with restrictions leads to a stochastic optimal control model. The corresponding stochastic Hamilton-Jacobi-Bellman equation of the problem with linear and nonlinear constraints is derived. Numerical algorithms are presented for finding the optimal solution. Finally, a computational experiment illustrates the proposed methods by comparison with the unconstrained M-V portfolio problem.
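The continuous-time jump-diffusion problem in this record requires numerical HJB methods, but the underlying mean-variance criterion is easiest to see in the classic single-period setting. A minimal sketch (the returns, covariances, and target are made-up illustrative numbers, not from the paper) that solves the equality-constrained Markowitz program via its KKT system:

```python
import numpy as np

# Illustrative single-period mean-variance problem (not the paper's
# jump-diffusion model): minimize w' Sigma w subject to
# w' mu = target and w' 1 = 1.
mu = np.array([0.08, 0.12, 0.10])           # assumed expected returns
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])      # assumed covariance matrix
target = 0.10

# Stack the two linear constraints and solve the KKT system
#   [2*Sigma  M ] [w     ]   [0, 0, 0     ]
#   [M^T      0 ] [lambda] = [target, 1   ]
M = np.column_stack([mu, np.ones(3)])
kkt = np.block([[2.0 * sigma, M],
                [M.T, np.zeros((2, 2))]])
rhs = np.array([0.0, 0.0, 0.0, target, 1.0])
w = np.linalg.solve(kkt, rhs)[:3]           # optimal portfolio weights
```

Adding the paper's nonlinear constraints is what forces the move from this closed-form linear algebra to stochastic optimal control.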

  19. Variance-reduced simulation of lattice discrete-time Markov chains with applications in reaction networks

    Science.gov (United States)

    Maginnis, P. A.; West, M.; Dullerud, G. E.

    2016-10-01

    We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black-box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
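The negatively correlated trajectory pairs used above can be illustrated with a far simpler estimator than lattice Markov chains. A minimal antithetic-variates sketch (the integrand and sample size are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Estimate E[exp(U)] for U ~ Uniform(0, 1); the exact value is e - 1.
u = rng.random(n)

# Plain Monte Carlo estimator from n independent draws.
plain = np.exp(u)

# Antithetic pairs: U and 1 - U are negatively correlated, and so are
# exp(U) and exp(1 - U), so the pair average has reduced variance.
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))

est_plain, est_anti = plain.mean(), anti.mean()
var_plain = plain.var(ddof=1) / n     # variance of the plain estimator
var_anti = anti.var(ddof=1) / n      # variance of the antithetic estimator
```

Both estimators are unbiased; the antithetic one simply averages negatively correlated samples, which is the same mechanism the paper applies to whole simulation paths through control of the pseudorandom inputs.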

  20. Effects of functional group mass variance on vibrational properties and thermal transport in graphene

    Science.gov (United States)

    Lindsay, L.; Kuang, Y.

    2017-03-01

    Intrinsic thermal resistivity critically depends on features of phonon dispersions dictated by harmonic interatomic forces and masses. Here we present the effects of functional group mass variance on vibrational properties and thermal conductivity (κ ) of functionalized graphene from first-principles calculations. We use graphane, a buckled graphene backbone with covalently bonded hydrogen atoms on both sides, as the base material and vary the mass of the hydrogen atoms to simulate the effect of mass variance from other functional groups. We find nonmonotonic behavior of κ with increasing mass of the functional group and an unusual crossover from acoustic-dominated to optic-dominated thermal transport behavior. We connect this crossover to changes in the phonon dispersion with varying mass which suppress acoustic phonon velocities, but also give unusually high velocity optic modes. Further, we show that out-of-plane acoustic vibrations contribute significantly more to thermal transport than in-plane acoustic modes despite breaking of a reflection-symmetry-based scattering selection rule responsible for their large contributions in graphene. This work demonstrates the potential for manipulation and engineering of thermal transport properties in two-dimensional materials toward targeted applications.

  1. The effects of different quantum feedback types on the tightness of the variance-based uncertainty

    Science.gov (United States)

    Zheng, Xiao; Zhang, Guo-Feng

    2017-03-01

The effect of quantum feedback on the tightness of the variance-based uncertainty, the possibility of using quantum feedback to prepare a state with better tightness, and the relationship between the tightness of the uncertainty and the mixedness of the system are studied. It is found that the tightness of the Schrödinger-Robertson uncertainty relation (SUR) has a strictly linear relationship with the mixedness of the system. As for the Robertson uncertainty relation (RUR), we find that the tightness can be enhanced by tuning the feedback at the beginning of the evolution. In addition, we deduce that the tightness of the RUR has an inverse relationship with the mixedness, and the relationship becomes strictly linear when the system reaches the steady state.
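For reference, the two relations whose tightness is compared here can be written in standard notation (with \(\sigma_A\) the standard deviation of observable \(A\)):

```latex
% Robertson uncertainty relation (RUR)
\sigma_A^2 \, \sigma_B^2 \;\ge\; \left| \tfrac{1}{2i} \langle [A, B] \rangle \right|^2
% Schroedinger-Robertson uncertainty relation (SUR): the extra
% anticommutator (covariance) term makes the bound tighter
\sigma_A^2 \, \sigma_B^2 \;\ge\; \left| \tfrac{1}{2} \langle \{A, B\} \rangle
  - \langle A \rangle \langle B \rangle \right|^2
  + \left| \tfrac{1}{2i} \langle [A, B] \rangle \right|^2
```

"Tightness" then refers to how close the left-hand side stays to the right-hand side as the feedback-driven state evolves.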

  2. The explicit dependence of quadrat variance on the ratio of clump size to quadrat size.

    Science.gov (United States)

    Ferrandino, Francis J

    2005-05-01

In the past decade, it has become common practice to pool mapped binary epidemic data into quadrats. The resultant "quadrat counts" can then be analyzed by fitting them to a probability distribution (i.e., beta-binomial). Often a binary form of Taylor's power law is used to relate the quadrat variance to the quadrat mean. The fact that there is an intrinsic dependence of such analyses on quadrat size and shape is well known. However, a clear-cut exposition of the direct connection between the spatial properties of the two-dimensional pattern of infected plants in terms of the geometry of the quadrat and the results of quadrat-based analyses is lacking. This problem was examined both empirically and analytically. The empirical approach is based on a set of stochastically generated "mock epidemics" using a Neyman-Scott cluster process. The resultant spatial point-patterns of infected plants have a fixed number of disease foci characterized by a known length scale (monodisperse) and saturated to a known disease level. When quadrat samples of these epidemics are fit to a beta-binomial distribution, the resulting measures of aggregation are totally independent of disease incidence and most strongly dependent on the ratio of the length scale of the quadrat to the length scale of spatial aggregation and to a lesser degree on disease saturation within individual foci. For the analytical approach, the mathematical form for the variation in the sum of random variates is coupled to the geometry of a quadrat through an assumed exponential autocorrelation function. The net result is an explicit equation expressing the intraquadrat correlation, quadrat variance, and the index of dispersion in terms of the ratio of the quadrat length scale to the correlative length scale.
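The index of dispersion discussed above is straightforward to compute from quadrat counts. A minimal sketch (the counts are hypothetical, not from the paper's mock epidemics):

```python
import numpy as np

# Hypothetical quadrat counts of infected plants in 12 quadrats.
counts = np.array([0, 2, 1, 7, 0, 5, 1, 0, 3, 9, 0, 2])

mean = counts.mean()
var = counts.var(ddof=1)

# Index of dispersion (variance-to-mean ratio): about 1 for a random
# (Poisson) pattern, > 1 for aggregated patterns, < 1 for regular ones.
dispersion_index = var / mean
```

For these counts the ratio is well above 1, i.e. an aggregated pattern; the paper's analytical result expresses how this ratio depends on quadrat size relative to the correlative length scale.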

  3. Characterising variances of milk powder and instrumentation for the development of a non-targeted, Raman spectroscopy and chemometrics detection method for the evaluation of authenticity.

    Science.gov (United States)

    Karunathilaka, Sanjeewa R; Farris, Samantha; Mossoba, Magdi M; Moore, Jeffrey C; Yakes, Betsy Jean

    2016-06-01

There is a need to develop rapid tools to screen milk products for economically motivated adulteration. An understanding of the physicochemical variability within skim milk powder (SMP) and non-fat dry milk (NFDM) is the key to establishing the natural differences of these commodities prior to the development of non-targeted detection methods. This study explored the sources of variance in 71 commercial SMP and NFDM samples using Raman spectroscopy and principal component analysis (PCA), and characterised the largest set of commercial milk powders to date, acquired from a wide range of international manufacturers. Spectral pre-processing using a gap-segment derivative transformation (gap size = 5, segment width = 9, fourth derivative) in combination with sample normalisation was necessary to reduce the fluorescence background of the milk powder samples. PC scores plots revealed no clear trends for various parameters, including day of analysis, powder type, supplier and processing temperatures, while the largest variance was due to irreproducibility in sample positioning. Significant chemical sources of variance were explained using the spectral features in the PC loadings plots: four samples from the same manufacturer were determined to likely contain an additional component or lactose anomers, and one additional sample was identified as an outlier, likely containing an adulterant or components of differing quality. The variance study discussed herein, with its large, diverse set of milk powders, holds promise for future use in a non-targeted screening method that could be applied to commercial milk powders.

  4. Temporal and Spatial Turbulent Spectra of MHD Plasma and an Observation of Variance Anisotropy

    CERN Document Server

    Schaffner, D A; Lukin, V S

    2014-01-01

The nature of MHD turbulence is analyzed through both temporal and spatial magnetic fluctuation spectra. A magnetically turbulent plasma is produced in the MHD wind-tunnel configuration of the Swarthmore Spheromak Experiment (SSX). The power of magnetic fluctuations is projected into directions perpendicular and parallel to a local mean field; the ratio of these quantities shows the presence of variance anisotropy which varies as a function of frequency. Comparisons amongst magnetic, velocity, and density spectra are also made, demonstrating that the energy of the turbulence observed is primarily seeded by magnetic fields created during plasma production. Direct spatial spectra are constructed using multi-channel diagnostics and are used to compare to frequency spectra converted to spatial scales using the Taylor Hypothesis. Evidence for the observation of dissipation due to ion inertial length scale physics is also discussed, as well as the role laboratory experiments can play in understanding turbulence typica...

  5. Evidence of reduced mid-Holocene ENSO variance on the Great Barrier Reef, Australia

    Science.gov (United States)

    Leonard, N. D.; Welsh, K. J.; Lough, J. M.; Feng, Y.-x.; Pandolfi, J. M.; Clark, T. R.; Zhao, J.-x.

    2016-09-01

    Globally, coral reefs are under increasing pressure both through direct anthropogenic influence and increases in climate extremes. Understanding past climate dynamics that negatively affected coral reef growth is imperative for both improving management strategies and for modeling coral reef responses to a changing climate. The El Niño-Southern Oscillation (ENSO) is the primary source of climate variability at interannual timescales on the Great Barrier Reef (GBR), northeastern Australia. Applying continuous wavelet transforms to visually assessed coral luminescence intensity in massive Porites corals from the central GBR we demonstrate that these records reliably reproduce ENSO variance patterns for the period 1880-1985. We then applied this method to three subfossil corals from the same reef to reconstruct ENSO variance from ~5200 to 4300 years before present (yBP). We show that ENSO events were less extreme and less frequent after ~5200 yBP on the GBR compared to modern records. Growth characteristics of the corals are consistent with cooler sea surface temperatures (SSTs) between 5200 and 4300 yBP compared to both the millennia prior (~6000 yBP) and modern records. Understanding ENSO dynamics in response to SST variability at geological timescales will be important for improving predictions of future ENSO response to a rapidly warming climate.

  6. An Empirical Analysis of the Mean-Variance Model and the Mean-Semi-Variance Model

    Institute of Scientific and Technical Information of China (English)

    李晓; 李红丽

    2011-01-01

In the Markowitz mean-variance model, risk is the uncertainty of the expected rate of return, quantified by the variance (or standard deviation) of portfolio returns; Markowitz's portfolio theory and model marked the beginning of modern finance. In practice, however, investors often understand risk differently: only returns falling below the anticipated level are regarded as risk, while returns above it are not. This leads to the semi-variance as an alternative way of characterizing and measuring risk. By selecting an appropriate portfolio of stocks, this article compares variance and semi-variance as risk measures. The results show that, at the same level of risk, the mean-semi-variance model yields a higher expected rate of return.
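The two risk measures compared in this record differ only in whether above-mean returns count as risk. A minimal sketch with made-up monthly returns (illustrative, not the article's stock data):

```python
import numpy as np

# Hypothetical monthly portfolio returns.
returns = np.array([0.04, -0.02, 0.01, -0.05, 0.03, 0.02, -0.01, 0.06])
mu = returns.mean()

# Variance penalizes deviations on both sides of the mean.
variance = returns.var(ddof=0)

# Semi-variance penalizes only below-mean returns, matching the view
# that only shortfalls relative to the expected level are risk.
downside = np.minimum(returns - mu, 0.0)
semivariance = np.mean(downside ** 2)
```

Since the downside deviations are a subset of all deviations, the semi-variance never exceeds the variance, which is why a semi-variance constraint at the same nominal risk level admits portfolios with higher expected return.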

  7. Disentangling the Common Variance of Perfectionistic Strivings and Perfectionistic Concerns: A Bifactor Model of Perfectionism.

    Science.gov (United States)

    Gäde, Jana C; Schermelleh-Engel, Karin; Klein, Andreas G

    2017-01-01

Perfectionism nowadays is frequently understood as a multidimensional personality trait with two higher-order dimensions of perfectionistic strivings and perfectionistic concerns. While perfectionistic concerns are robustly found to correlate with negative outcomes and psychological malfunctioning, findings concerning the outcomes of perfectionistic strivings are inconsistent. There is evidence that perfectionistic strivings relate to psychological maladjustment on the one hand but to positive outcomes on the other hand as well. Moreover, perfectionistic strivings and perfectionistic concerns frequently showed substantial overlap. These inconsistencies of differential relations and the substantial overlap of perfectionistic strivings and perfectionistic concerns raise questions concerning the factorial structure of perfectionism and the meaning of its dimensions. In this study, several bifactor models were applied to disentangle the common variance of perfectionistic strivings and perfectionistic concerns at the item level using Hill et al.'s (2004) Perfectionism Inventory (PI). The PI measures a broad range of perfectionism dimensions by four perfectionistic strivings and four perfectionistic concerns subscales. The bifactor-(S - 1) model with one general factor defined by concern over mistakes as the reference facet, four specific perfectionistic strivings factors, and three specific perfectionistic concerns factors showed acceptable fit. The results revealed a clear separation between perfectionistic strivings and perfectionistic concerns, as the general factor represented concern over mistakes, while the perfectionistic strivings factors each explained a substantial amount of reliable variance independent of the general factor. As a result, factor scores of the specific perfectionistic strivings factors and the general factor had differential relationships with achievement motivation, neuroticism, conscientiousness, and self-efficacy that met with theoretical

  9. The variance of length of stay and the optimal DRG outlier payments.

    Science.gov (United States)

    Felder, Stefan

    2009-09-01

Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related groups (DRGs), are complemented by outlier payments for long stay patients. The outlier scheme fixes the length of stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates for normally distributed truncated LOS that the optimal outlier threshold indeed decreases with an increase in the standard deviation.

  10. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling

    Science.gov (United States)

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-01-01

The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) can vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, FOG-based MWD often keeps working underground for several days, so the gyro data collected are not only very long but also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we advance the fast algorithm further (improved fast DAVAR) to extend it to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data, respectively. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time; when the time series is long (6 × 10^5 samples), it reduces the calculation time by 97.09%. Another set of simulation data with missing points is characterized by the improved fast DAVAR, and the results prove that it can successfully deal with discontinuous data. Finally, a vibration experiment with FOG-based MWD was implemented to validate the good performance of the improved fast DAVAR. The experimental results verify that the improved fast DAVAR not only shortens computation time but can also analyze discontinuous time series. PMID:27941600
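The Allan variance underlying DAVAR is itself compact to compute. A minimal sketch of the plain, non-overlapping Allan variance on simulated white gyro noise (illustrative only; this is neither the dynamic/sliding version nor the authors' improved fast algorithm):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of rate samples y at cluster size m."""
    n = len(y) // m
    # Average the signal over consecutive clusters of m samples each.
    means = y[: n * m].reshape(n, m).mean(axis=1)
    # Half the mean squared difference of successive cluster averages.
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 100_000)  # simulated white (angle random walk) noise

# For white noise the Allan variance falls roughly as 1/m, i.e. a
# -1/2 slope on a log-log Allan deviation plot.
avar_1 = allan_variance(y, 1)
avar_100 = allan_variance(y, 100)
```

DAVAR repeats this computation inside a window sliding along the record, which is what makes naive implementations slow on long, and here also discontinuous, MWD data sets.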

  11. Reconciling patterns of inter-ocean molecular variance from four classes of molecular markers in blue marlin (Makaira nigricans).

    Science.gov (United States)

    Buonaccorsi, V P; McDowell, J R; Graves, J E

    2001-05-01

    Different classes of molecular markers occasionally yield discordant views of population structure within a species. Here, we examine the distribution of molecular variance from 14 polymorphic loci comprising four classes of molecular markers within approximately 400 blue marlin individuals (Makaira nigricans). Samples were collected from the Atlantic and Pacific Oceans over 5 years. Data from five hypervariable tetranucleotide microsatellite loci and restriction fragment length polymorphism (RFLP) analysis of whole molecule mitochondrial DNA (mtDNA) were reported and compared with previous analyses of allozyme and single-copy nuclear DNA (scnDNA) loci. Temporal variance in allele frequencies was nonsignificant in nearly all cases. Mitochondrial and microsatellite loci revealed striking phylogeographic partitioning among Atlantic and Pacific Ocean samples. A large cluster of alleles was present almost exclusively in Atlantic individuals at one microsatellite locus and for mtDNA, suggesting that, if gene flow occurs, it is likely to be unidirectional from Pacific to Atlantic oceans. Mitochondrial DNA inter-ocean divergence (FST) was almost four times greater than microsatellite or combined nuclear divergences including allozyme and scnDNA markers. Estimates of Neu varied by five orders of magnitude among marker classes. Using mathematical and computer simulation approaches, we show that substantially different distributions of FST are expected from marker classes that differ in mode of inheritance and rate of mutation, without influence of natural selection or sex-biased dispersal. Furthermore, divergent FST values can be reconciled by quantifying the balance between genetic drift, mutation and migration. These results illustrate the usefulness of a mitochondrial analysis of population history, and relative precision of nuclear estimates of gene flow based on a mean of several loci.

  12. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the 'hot' regions of the accelerator, information that is essential to developing a source model for this therapy tool.

  13. Variance as a Leading Indicator of Regime Shift in Ecosystem Services

    Directory of Open Access Journals (Sweden)

    William A. Brock

    2006-12-01

    Full Text Available Many environmental conflicts involve pollutants such as greenhouse gas emissions that are dispersed through space and cause losses of ecosystem services. As pollutant emissions rise in one place, a spatial cascade of declining ecosystem services can spread across a larger landscape because of the dispersion of the pollutant. This paper considers the problem of anticipating such spatial regime shifts by monitoring time series of the pollutant or associated ecosystem services. Using such data, it is possible to construct indicators that rise sharply in advance of regime shifts. Specifically, the maximum eigenvalue of the variance-covariance matrix of the multivariate time series of pollutants and ecosystem services rises prior to the regime shift. No specific knowledge of the mechanisms underlying the regime shift is needed to construct the indicator. Such leading indicators of regime shifts could provide useful signals to management agencies or to investors in ecosystem service markets.
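The indicator described above reduces to tracking the leading eigenvalue of a rolling covariance estimate. A minimal sketch on synthetic data whose variance grows toward a simulated regime shift (all numbers illustrative, not from the paper's ecosystem model):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 400

# Two synthetic series (e.g. a pollutant and an ecosystem service) whose
# fluctuations grow as the regime shift approaches.
scale = np.linspace(1.0, 4.0, T)
x = rng.normal(0.0, 1.0, (T, 2)) * scale[:, None]

def max_eigenvalue(window):
    """Largest eigenvalue of the sample variance-covariance matrix."""
    cov = np.cov(window, rowvar=False)
    return np.linalg.eigvalsh(cov).max()

# Compare the indicator early in the record with just before the shift.
early = max_eigenvalue(x[:100])
late = max_eigenvalue(x[-100:])
```

The rise of `late` over `early` is the warning signal; as the paper notes, no model of the underlying shift mechanism is needed to compute it.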

  14. Huber's Minimax Approach in Distribution Classes with Bounded Variances and Subranges with Applications to Robust Detection of Signals

    Institute of Scientific and Technical Information of China (English)

    Georgy Shevlyakov; Kiseon Kim

    2005-01-01

A brief survey of former and recent results on Huber's minimax approach in robust statistics is given. The least informative distributions minimizing Fisher information for location over several distribution classes with upper-bounded variances and subranges are written down. These least informative distributions are qualitatively different from the classical Huber solution and have the following common structure: (i) with relatively small variances they are short-tailed, in particular normal; (ii) with relatively large variances they are heavy-tailed, in particular Laplace; (iii) with relatively moderate variances they are a compromise between the two. These results make it possible to raise the efficiency of minimax robust procedures while retaining high stability, as compared to the classical Huber procedure for contaminated normal populations. Applied to signal detection problems, the proposed minimax detection rule has proved to be robust and close to Huber's for heavy-tailed distributions, and more efficient than Huber's for short-tailed ones, both asymptotically and on finite samples.

  15. Comparative self-concept variances of school children in two English-speaking West African nations.

    Science.gov (United States)

    Alawiye, O; Alawiye, C Z; Thomas, J I

    1990-03-01

    This study examined the self-concepts of elementary school children in Grades 2, 4, 6, and 8, from two West African nations, Ghana and Gambia. Measures of self-concept in the areas of physical maturity, peer relations, academic success, and school adaptiveness were obtained from 195 Ghanaian and 156 Gambian students. The mean scores of the students were subjected to a series of three-way analyses of variance (ANOVAs). The independent variables were sex, grade level, and nationality. The overall analyses revealed grade level as the most potent variable in the self-concept development of both groups, whereas the sex variable indicated interaction with grade level only in Gambian children. The self-esteem of the children in both nations in the areas of physical maturity, peer relations, and academic success was relatively high and stable. Self-concept developmental patterns showed differences across grade levels in the four self-concept areas being tested.
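A full three-way ANOVA is beyond a short sketch, but the F-ratio logic is the same as in the one-way case: partition total variation into between-group and within-group sums of squares. A minimal one-way sketch with hypothetical self-concept scores (illustrative, not the study's data):

```python
import numpy as np

# Hypothetical self-concept scores for three grade levels.
groups = [
    np.array([3.1, 3.4, 2.9, 3.2, 3.0]),   # grade 2
    np.array([3.6, 3.8, 3.5, 3.9, 3.7]),   # grade 4
    np.array([4.0, 4.2, 4.1, 3.9, 4.3]),   # grade 6
]

scores = np.concatenate(groups)
grand_mean = scores.mean()

# Between-group sum of squares: group means around the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: scores around their own group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1
df_within = len(scores) - len(groups)

# F ratio: between-group mean square over within-group mean square.
f_stat = (ss_between / df_between) / (ss_within / df_within)
```

A large F indicates that grade level explains far more variation than chance would, which is the sense in which the study found grade level to be the most potent variable.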

  16. Analysis of Variance in the Results of Different Antibiotic Detection Methods Applied to Whey Powder Samples

    Institute of Scientific and Technical Information of China (English)

    王媛媛; 王攀; 刘晓辉; 赵亭亭; 杨婧悦; 于景华; 姜中航; 骆志刚

    2015-01-01

Whey powder is one of the main raw materials for infant formula milk powder, and the detection of antibiotics in whey powder is an important reference index for raw-material acceptance. In this study, four different antibiotic detection methods were used to test eight batches of whey powder: national standard method 1, national standard method 2, the ECLIPSE 50 antibiotic test kit, and the Charm MRL ELISA kit. The results of the different methods differed significantly. National standard method 1 and the ELISA test yielded the same result, i.e. all samples negative, whereas national standard method 2 and the ECLIPSE 50 kit, which is based on the same testing mechanism, gave the contradictory results of all positive and partially positive, respectively. Consequently, the testing stability of national standard method 2 is unsatisfactory for some specific raw materials, and the factors affecting the reliability of its results require further study.

  17. Very low levels of direct additive genetic variance in fitness and fitness components in a red squirrel population.

    Science.gov (United States)

    McFarlane, S Eryn; Gorrell, Jamieson C; Coltman, David W; Humphries, Murray M; Boutin, Stan; McAdam, Andrew G

    2014-05-01

    A trait must genetically correlate with fitness in order to evolve in response to natural selection, but theory suggests that strong directional selection should erode additive genetic variance in fitness and limit future evolutionary potential. Balancing selection has been proposed as a mechanism that could maintain genetic variance if fitness components trade off with one another and has been invoked to account for empirical observations of higher levels of additive genetic variance in fitness components than would be expected from mutation-selection balance. Here, we used a long-term study of an individually marked population of North American red squirrels (Tamiasciurus hudsonicus) to look for evidence of (1) additive genetic variance in lifetime reproductive success and (2) fitness trade-offs between fitness components, such as male and female fitness or fitness in high- and low-resource environments. "Animal model" analyses of a multigenerational pedigree revealed modest maternal effects on fitness, but very low levels of additive genetic variance in lifetime reproductive success overall as well as fitness measures within each sex and environment. It therefore appears that there are very low levels of direct genetic variance in fitness and fitness components in red squirrels to facilitate contemporary adaptation in this population.

  18. An Analysis of Group Variance in Rural Migrant Workers' Urban Quality of Life

    Institute of Scientific and Technical Information of China (English)

    冯华; 崔政

    2011-01-01

Rural migrant workers' quality of life in cities is a comprehensive reflection of every aspect of this group's urban life, and this study finds that it is currently very low. Among the many factors affecting it, beyond external political, economic, social, and cultural conditions, the workers' own gender, level of education, and occupational differences are also important causes of the overall low quality of life and of the imbalance within the group. The influence of these individual factors on quality of life can therefore be reduced by expanding non-degree education and by raising wage levels in low-end industries.

  19. An Analysis of College Students' Individual Career Outlook Variance in the Context of Social Stratification

    Institute of Scientific and Technical Information of China (English)

    聂玮

    2014-01-01

    The concept of occupation is an important factor influencing the career choices of college students, and different concepts affect career choice differently. Under social stratification, college students' views on career choice differ to some degree. Besides the influence of the broader social and cultural environment, these differences are shaped by the students' ascribed status, that is, the social class of their families. Because different social strata command different levels of resources, college students differ considerably in social capital: parents' education level, occupation and income significantly affect students' opportunities to enter higher education and their experience of it, which in turn shapes their personal values and thus their value judgments when choosing an occupation.

  20. Variances and covariances in the Central Limit Theorem for the output of a transducer

    Science.gov (United States)

    Heuberger, Clemens; Kropf, Sara; Wagner, Stephan

    2015-01-01

    We study the joint distribution of the input sum and the output sum of a deterministic transducer. Here, the input of this finite-state machine is a uniformly distributed random sequence. We give a simple combinatorial characterization of transducers for which the output sum has bounded variance, and we also provide algebraic and combinatorial characterizations of transducers for which the covariance of input and output sum is bounded, so that the two are asymptotically independent. Our results are illustrated by several examples, such as transducers that count specific blocks in the binary expansion, the transducer that computes the Gray code, or the transducer that computes the Hamming weight of the width-w non-adjacent form digit expansion. The latter two turn out to be examples of asymptotic independence. PMID:27087727
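  The bounded-covariance phenomenon for the Gray code transducer can be checked empirically. The sketch below is our own illustration, not the paper's algebraic characterization: it simulates the one-memory-bit machine that emits each input bit XORed with its predecessor, then estimates the variance of the output sum and its covariance with the input sum over random inputs. The covariance stays O(1) as the input length grows, consistent with asymptotic independence.

```python
import random

def gray_transducer(bits):
    # finite-state machine remembering the previous input bit: emits
    # prev XOR current, i.e. the binary-reflected Gray code of the input
    prev, out = 0, []
    for b in bits:
        out.append(prev ^ b)
        prev = b
    return out

random.seed(0)
n, trials = 64, 5000
s_in, s_out = [], []
for _ in range(trials):
    bits = [random.getrandbits(1) for _ in range(n)]
    s_in.append(sum(bits))
    s_out.append(sum(gray_transducer(bits)))

m_in = sum(s_in) / trials
m_out = sum(s_out) / trials
var_out = sum((b - m_out) ** 2 for b in s_out) / trials
cov = sum((a - m_in) * (b - m_out) for a, b in zip(s_in, s_out)) / trials
# var_out is close to n/4 = 16, while cov remains O(1), far below n/4
```

  Here the output sum has linearly growing variance, so it is the bounded covariance, not the variance, that signals asymptotic independence.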

  1. The material variance of the Dead Sea Scrolls: On texts and artefacts

    Directory of Open Access Journals (Sweden)

    Eibert Tigchelaar

    2016-05-01

    Full Text Available What does a sacred text look like? Are religious books materially different from other books? Does materiality matter? This article deals with three different aspects of material variance attested amongst the Dead Sea Scrolls, ancient Jewish religious text fragments found in the Judean Desert. I suggest that the substitution of the ancient Hebrew script by the everyday Aramaic script, also for the Torah and other religious texts, was intentional and programmatic: it enabled the broader diffusion of scriptures in Hellenistic and Roman Judea. The preponderant use of parchment rather than papyrus for religious texts may be a marker of identity. The many small scrolls which contained only small parts of specific religious books (Genesis, Psalms) may have been produced as religious artefacts expressing identity in the period when Judaism developed into a religion of the book. Keywords: Dead Sea Scrolls; Judaism; Manuscripts

  2. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Directory of Open Access Journals (Sweden)

    Ashton M Verdery

    Full Text Available This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS. Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
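  The direction of the bias described here can be illustrated with a simpler dependent-sampling analogue. The sketch below is our own toy example, not the authors' RDS estimators: it computes the exact variance of a sample mean under geometric chain autocorrelation and compares it with the naive iid formula sigma²/n, which understates the true sampling variance whenever the autocorrelation is positive.

```python
def mean_variance_autocorr(sigma2, n, rho):
    # exact variance of the mean of n draws from a stationary process
    # whose lag-k autocorrelation is rho**k (e.g. a two-state Markov
    # chain with stay-probability (1 + rho) / 2)
    corr = sum((1 - k / n) * rho ** k for k in range(1, n))
    return sigma2 / n * (1 + 2 * corr)

naive = 0.25 / 400                              # iid formula for Bernoulli(1/2)
true = mean_variance_autocorr(0.25, 400, 0.6)   # positively correlated chain
# the naive iid estimator understates the sampling variance roughly 4-fold here
```

  The ratio true/naive is a design effect; chain-referral samples with homophily behave like the rho > 0 case.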

  3. Modelling changes in the unconditional variance of long stock return series

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    2014-01-01

    In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long daily return series. For this purpose we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta...... (2012, 2013). The latter component is modelled such that the unconditional time-varying component evolves slowly over time. Statistical inference is used for specifying the parameterization of the time-varying component by applying a sequence of Lagrange multiplier tests. The model building procedure...... that the apparent long memory property in volatility may be interpreted as changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecasting accuracy of the new model over the GJR-GARCH model at all horizons for eight...

  4. Knowledge extraction algorithm for variances handling of CP using integrated hybrid genetic double multi-group cooperative PSO and DPSO.

    Science.gov (United States)

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2012-04-01

    Although the clinical pathway (CP) predefines a predictable, standardized care process for a particular diagnosis or procedure, many variances may still unavoidably occur. Some key index parameters have a strong relationship with the variance-handling measures of a CP. In the real world, these problems are highly nonlinear in nature, so it is hard to develop a comprehensive mathematical model. In this paper, a rule extraction approach based on combining a hybrid genetic double multi-group cooperative particle swarm optimization algorithm (PSO) and a discrete PSO algorithm (named HGDMCPSO/DPSO) is developed to discover the previously unknown and potentially complicated nonlinear relationships between key parameters and the variance-handling measures of a CP. The extracted rules can then provide abnormal-variance warnings for medical professionals. Three numerical experiments, on the UCI Iris and Wisconsin breast cancer data sets and on CP variance data from osteosarcoma preoperative chemotherapy, are used to validate the proposed method. Compared with previous research, the proposed rule extraction algorithm obtains high prediction accuracy with less computing time and more stability, and its rules are easily comprehended by users; it is thus an effective knowledge extraction tool for CP variance handling.

  5. Variance and covariance components for liability of piglet survival during different periods

    DEFF Research Database (Denmark)

    Su, G; Sorensen, D; Lund, M S

    2008-01-01

    Variance and covariance components for piglet survival in different periods were estimated from individual records of 133 004 Danish Landrace piglets and 89 928 Danish Yorkshire piglets, using a liability threshold model including both direct and maternal additive genetic effects. At the individual...... piglet level, the estimates of direct heritability in Landrace were 0.035, 0.057 and 0.027, and in Yorkshire the estimates were 0.012, 0.030 and 0.025 for liability of survival at farrowing (SVB), from birth to day 5 (SV5) and from day 6 to weaning (SVW), respectively. The estimates of maternal...... between SVB and SV5 and between SV5 and SVW in Landrace. Direct and maternal genetic correlations between piglet birth weight (BW) and SV5 were moderately high, but the correlations between BW and SVB and between BW and SVW were low and most of them were not significantly different from zero...

  6. Similarities and variances in perception of professionalism among Saudi and Egyptian Medical Students

    Science.gov (United States)

    Sattar, Kamran; Roff, Sue; Meo, Sultan Ayoub

    2016-01-01

    Background & Objective: Professionalism has a number of culturally specific elements; it is therefore imperative to identify areas of congruence and variation in the behaviors through which professionalism is understood in different countries. This study aimed to explore and compare the recommendation of sanctions by medical students of the College of Medicine, King Saud University (KSU), Riyadh, Saudi Arabia and students from three medical colleges in Egypt. Methods: The responses were recorded using an anonymous, self-administered survey, the “Dundee Polyprofessionalism Inventory I: Academic Integrity”. In the study, 750 medical students of the College of Medicine, KSU, Riyadh were invited and the questionnaire was sent electronically. They rated the importance of professionalism lapses by choosing from a hierarchical menu of sanctions for first-time lapses with no justifying circumstances. These responses were compared with published data from 219 students from three medical schools in Egypt. Results: We found variance for 23 (76.66%) behaviors, such as “physically assaulting a university employee or student” and “plagiarizing work from a fellow student or publications/internet”. We also found similarities for 7 (23.33%) behaviors, including “lack of punctuality for classes” and “drinking alcohol over lunch and interviewing a patient in the afternoon”, when comparing the median recommended sanctions from medical students in Saudi Arabia and Egypt. Conclusion: There is more variance than congruence in perceptions of professionalism between the two cohorts. The students at KSU were also found to recommend the sanction of “ignore” for a behavior, a response that was absent from the Egyptian cohort. PMID:28083032

  7. COMPARISON OF VARIANCE ESTIMATORS FOR THE RATIO ESTIMATOR BASED ON SMALL SAMPLE

    Institute of Scientific and Technical Information of China (English)

    秦怀振; 李莉莉

    2001-01-01

    This paper sheds light on an open problem put forward by Cochran [1]. The two commonly used variance estimators v1(R̂) and v2(R̂) of the ratio estimator for the population ratio R, computed from a small sample selected by simple random sampling, are compared following the idea of the estimated-loss approach (see [2]). Under the superpopulation model in which the ratio estimator ȲR of the population mean Ȳ is the best linear unbiased estimator, the necessary and sufficient conditions for v1(R̂) to be superior to v2(R̂), and for v2(R̂) to be superior to v1(R̂), are obtained when the sampling fraction f is ignored. For a substantial f, several rigorous sufficient conditions for the superiority of v2(R̂) are derived.
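  For readers unfamiliar with the two estimators, the sketch below gives their textbook forms under simple random sampling. The labels v1 and v2 follow common usage and are our assumption; the paper's exact definitions may differ. Both scale the mean squared residual from the ratio fit, v1 by the known population mean of x and v2 by its sample mean.

```python
def ratio_var_estimators(x, y, x_pop_mean, N):
    # textbook variance estimators for the ratio estimator r_hat = ybar/xbar
    # under simple random sampling of n units from a population of N
    n = len(x)
    f = n / N                                  # sampling fraction
    xbar = sum(x) / n
    ybar = sum(y) / n
    r_hat = ybar / xbar
    s2 = sum((yi - r_hat * xi) ** 2 for xi, yi in zip(x, y)) / (n - 1)
    v1 = (1 - f) * s2 / (n * x_pop_mean ** 2)  # uses the known population mean
    v2 = (1 - f) * s2 / (n * xbar ** 2)        # uses the sample mean
    return r_hat, v1, v2

r, v1, v2 = ratio_var_estimators([2, 4, 6], [1, 3, 5], 4.0, 30)
```

  The two estimators coincide when the sample mean of x happens to equal the population mean, as in this toy data.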

  8. Variance Analysis of Clinical Rehabilitation and Competitive Sports Injury Rehabilitation

    Institute of Scientific and Technical Information of China (English)

    张鹏

    2011-01-01

    The paper introduces the concept and scope of clinical rehabilitation medicine and compares it with competitive sports injury rehabilitation in terms of rehabilitation targets, work content, working methods and procedures, and personnel staffing, in order to provide a reference for promoting the development of competitive sports injury rehabilitation.

  9. The Parabolic variance (PVAR), a wavelet variance based on least-square fit

    CERN Document Server

    Vernotte, F; Bourgeois, P -Y; Rubiola, E

    2015-01-01

    The Allan variance (AVAR) is one option among the wavelet variances. Although a milestone in the analysis of frequency fluctuations and in the long-term stability of clocks, and certainly the most widely used option, AVAR is not suitable when fast noise processes show up, chiefly because of its poor rejection of white phase noise. The modified Allan variance (MVAR) features high resolution in the presence of white PM noise, but it is poorer for slow phenomena because its wavelet spans a 50% longer time. This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the linear regression (LR) of phase data. The PVAR relates to the Omega frequency counter, which is the topic of a companion article [the reference to the article, or to the ArXiv manuscript, will be provided later]. The PVAR wavelet spans 2τ, the same as the AVAR wavelet. After setting the theoretical framework, we analyze the degrees of freedom and the detection of weak noise processes in...
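  As a concrete reference point, the sketch below implements the standard overlapping Allan variance from phase samples; PVAR replaces the simple phase averages inside this second difference with least-squares fits of the phase data, which is not reproduced here.

```python
def allan_variance(x, m, tau0=1.0):
    # overlapping Allan variance from phase samples x (in seconds) taken
    # every tau0 seconds, at averaging time tau = m * tau0: the mean
    # squared second difference of phase over span tau, divided by 2 tau^2
    tau = m * tau0
    d = [x[k + 2 * m] - 2 * x[k + m] + x[k] for k in range(len(x) - 2 * m)]
    return sum(v * v for v in d) / (2 * tau ** 2 * len(d))
```

  A pure frequency offset gives a linear phase ramp and hence zero AVAR, while rapidly alternating phase values (white-PM-like) give a large value, illustrating the poor white-phase-noise rejection mentioned above.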

  10. More than Drought: Precipitation Variance, Excessive Wetness, Pathogens and the Future of the Western Edge of the Eastern Deciduous Forest.

    Science.gov (United States)

    Hubbart, Jason A; Guyette, Richard; Muzika, Rose-Marie

    2016-10-01

    For many regions of the Earth, anthropogenic climate change is expected to result in increasingly divergent climate extremes. However, little is known about how increasing climate variance may affect ecosystem productivity. Forest ecosystems may be particularly susceptible to this problem considering the complex organizational structure of specialized species niche adaptations. Forest decline is often attributable to multiple stressors including prolonged heat, wildfire and insect outbreaks. These disturbances, often categorized as megadisturbances, can push temperate forests beyond sustainability thresholds. Absent from much of the contemporary forest health literature, however, is the discussion of excessive precipitation that may affect other disturbances synergistically or that might represent a principal stressor. Here, specific points of evidence are provided including historic climatology, variance predictions from global change modeling, Midwestern paleo climate data, local climate influences on net ecosystem exchange and productivity, and pathogen influences on oak mortality. Data sources reveal potential trends, deserving further investigation, indicating that the western edge of the Eastern Deciduous forest may be impacted by ongoing increased precipitation, precipitation variance and excessive wetness. Data presented, in conjunction with recent regional forest health concerns, suggest that climate variance including drought and excessive wetness should be equally considered for forest ecosystem resilience against increasingly dynamic climate. This communication serves as an alert to the need for studies on potential impacts of increasing climate variance and excessive wetness in forest ecosystem health and productivity in the Midwest US and similar forest ecosystems globally.

  11. An Analysis of the Economic Conversion Rate of Tourism Resources and Its Inter-Provincial Variance in China

    Institute of Scientific and Technical Information of China (English)

    白洋; 杨晓霞; 樊昊

    2015-01-01

    The economic conversion rate of tourism resources is an important indicator of the level of tourism resource development and utilization. Building on quantitative estimates of tourism resource abundance and of comprehensive tourism economic development, this study establishes a model of the economic conversion rate of tourism resources and uses 2012 cross-sectional data to measure and analyze the rate for each of China's provincial administrative regions. The results show that: (1) the overall conversion rate in China is low, with marked inter-provincial differences; (2) the provincial regions can be divided into high, medium and low grades, containing 2, 10 and 19 regions respectively; (3) the Northeast and Northwest contain only low-grade regions; North China and the Southwest contain medium- and low-grade regions; and East China and the South-Central region contain regions of all three grades.

  12. Awareness Survey and Variance Analysis of the Risk Informed System between Nurses and Patients

    Institute of Scientific and Technical Information of China (English)

    陈婷; 权帅; 杨银玉

    2014-01-01

    Objective To investigate awareness of the nursing risk informed system and compare the differences between nurses and patients, providing a basis for improving the system and nurses' capacity to implement it. Methods A self-designed questionnaire about informed nursing risks was distributed to 245 nurses and 200 patients. Results A total of 445 questionnaires were distributed and 445 were returned, an effective rate of 100%. All nurses supported the risk informed system; 65.00% of nurses believed the system had a positive impact on the nurse-patient relationship, and 97.14% thought risks should be communicated by the charge nurse. Among patients, 98.0% thought that nursing care carries risks that healthcare professionals should disclose, while only 41.5% read the notification before signing it. There were significant differences between nurses and patients regarding the place, method, object and subject of notification, and in whether patients read the notification, asked questions on their own initiative and cooperated (P<0.05). Conclusions Nurses and patients differ significantly in their awareness of the risk informed system. Hospital administrators should give full weight to training nurses' knowledge and skills and develop a standardized risk informed system for nurses, and nurses should take patients' requirements into account when choosing the form of notification, so as to establish an effective barrier against nursing risk.

  13. Impact of ionospheric scintillation on GNSS receiver tracking performance over Latin America: Introducing the concept of tracking jitter variance maps

    Science.gov (United States)

    Sreeja, V.; Aquino, M.; Elmas, Z. G.

    2011-10-01

    Scintillations are rapid fluctuations in the phase and amplitude of transionospheric radio signals caused by small-scale ionospheric plasma density irregularities. In the case of Global Navigation Satellite System (GNSS) receivers, scintillations can cause cycle slips, degrade positioning accuracy and, when severe enough, even lead to complete loss of signal lock. This study presents for the first time an assessment of GNSS receiver signal tracking performance under scintillating conditions through the analysis of receiver phase lock loop (PLL) jitter variance maps. These maps can potentially assist users faced with such conditions: one envisaged application is a tool providing users with information about current (or expected, if some form of prediction can be developed in follow-on research) tracking conditions under scintillation; another is to use the technique described by Aquino et al. (2009) to mitigate the effects of ionospheric scintillation. In this paper these maps were constructed for scintillation events observed in the field during 9-11 March 2011 over Presidente Prudente (22.1°S, 51.4°W, dip latitude ˜12.3°S) in Brazil, a location close to the Equatorial Ionisation Anomaly (EIA) crest in Latin America. Results show that the jitter variances estimated for all simultaneously observed satellite-to-receiver links during the premidnight hours on 9 and 11 March 2011 increase with enhanced scintillation levels, indicating an increased likelihood of cycle slips, loss of signal lock, and degraded accuracy in the observations.

  14. Using variances in hydrocarbon concentration and carbon stable isotope to determine the important influence of irrigated water on petroleum accumulation in surface soil.

    Science.gov (United States)

    Zhang, Juan; Wang, Renqing; Yang, Juncheng; Hou, Hong; Du, Xiaoming; Dai, Jiulan

    2013-05-01

    Hunpu is a wastewater-irrigated area southwest of Shenyang. To evaluate petroleum contamination and identify its sources at the area, the aliphatic hydrocarbons and compound-specific carbon stable isotopes of n-alkanes in the soil, irrigation water, and atmospheric deposition were analyzed. The analyses of hydrocarbon concentrations and geochemical characteristics reveal that the water is moderately contaminated by degraded heavy oil. According to the isotope analysis, inputs of modern C3 plants and degraded petroleum are present in the water, air, and soil. The similarities and dissimilarities among the water, air, and soil samples were determined by concentration, isotope, and multivariate statistical analyses. Hydrocarbons from various sources, as well as the water/atmospheric deposition samples, are more effectively differentiated through principal component analysis of carbon stable isotope ratios (δ(13)C) relative to hydrocarbon concentrations. Redundancy analysis indicates that 57.1 % of the variance in the δ(13)C of the soil can be explained by the δ(13)C of both the water and air, and 35.5 % of the variance in the hydrocarbon concentrations of the soil can be explained by hydrocarbon concentrations of both the water and the air. The δ(13)C in the atmospheric deposition accounts for 28.2 % of the δ(13)C variance in the soil, which is considerably higher than the variance in hydrocarbon concentrations of the soil explained by hydrocarbon concentrations of the atmospheric deposition (7.7 %). In contrast to δ(13)C analysis, the analysis of hydrocarbon concentrations underestimates the effect of petroleum contamination in the irrigated water and air on the surface soil. Overall, the irrigated water exerts a larger effect on the surface soil than does the atmospheric deposition.
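  The "variance explained" figures quoted above come from a constrained ordination (redundancy analysis), but the underlying idea is the familiar least-squares R². A minimal single-predictor sketch, our own illustration rather than the authors' multivariate analysis:

```python
def variance_explained(x, y):
    # R^2 from a one-predictor least-squares fit: the share of Var(y)
    # accounted for by x, analogous to the explained-variance fractions
    # reported in an ordination
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)
```

  A perfectly linear response gives 1.0; redundancy analysis generalizes this ratio to multivariate responses and multiple explanatory tables.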

  15. GALAXYCOUNT: a JAVA calculator of galaxy counts and variances in multiband wide-field surveys to 28 AB mag

    Science.gov (United States)

    Ellis, S. C.; Bland-Hawthorn, J.

    2007-05-01

    We provide a consistent framework for estimating galaxy counts and variances in wide-field images for a range of photometric bands. The variances include both Poissonian noise and variations due to large-scale structure. We demonstrate that our statistical theory is consistent with the counts in the deepest multiband surveys available. The statistical estimates depend on several observational parameters (e.g. seeing, signal-to-noise ratio), and include a sophisticated treatment of detection completeness. The JAVA calculator is freely available and offers the user the option to adopt our consistent framework or a different scheme. We also provide a summary table of statistical measures in the different bands for a range of different fields of view. Reliable estimation of the background counts has profound consequences in many areas of observational astronomy. We provide two such examples. One is from a recent study of the Sculptor galaxy NGC300 where stellar photometry has been used to demonstrate that the outer disc extends to 10 effective radii, far beyond what was thought possible for a normal low-luminosity spiral. We confirm this finding by a re-analysis of the background counts. Secondly, we determine the luminosity function of the galaxy cluster Abell 2734, both through spectroscopically determined cluster membership, and through statistical subtraction of the background galaxies using the calculator and offset fields. We demonstrate very good agreement, suggesting that expensive spectroscopic follow-up, or off-source observations, may often be bypassed via determination of the galaxy background with GALAXYCOUNT.
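  The variance model being combined can be sketched as follows. This is a schematic under the common assumption that cell-count variance is Poisson shot noise plus a large-scale-structure term proportional to the squared mean count; the actual GALAXYCOUNT treatment of completeness and band dependence is richer.

```python
def count_variance(nbar, sigma_lss):
    # total variance of galaxy counts in a field: Poisson shot noise plus
    # a clustering term, where sigma_lss is the fractional rms count
    # fluctuation from large-scale structure over the field of view
    return nbar + (nbar * sigma_lss) ** 2

def fractional_uncertainty(nbar, sigma_lss):
    return count_variance(nbar, sigma_lss) ** 0.5 / nbar
```

  For large mean counts the clustering term dominates, so the fractional uncertainty floors at sigma_lss rather than shrinking like 1/sqrt(N).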

  16. The impact of grid and spectral nudging on the variance of the near-surface wind speed

    DEFF Research Database (Denmark)

    Vincent, Claire Louise; Hahmann, Andrea N.

    2015-01-01

    Grid and spectral nudging are effective ways of preventing drift from large scale weather patterns in regional climate models. However, the effect of nudging on the wind-speed variance is unclear. In this study, the impact of grid and spectral nudging on near-surface and upper boundary layer wind variance in the Weather Research and Forecasting model is analyzed. Simulations are run on nested domains with horizontal grid spacing 15 and 5 km over the Baltic Sea region. For the 15 km domain, 36-hr simulations initialized each day are compared with 11-day simulations with either grid or spectral......

  17. About the probability distribution of a quantity with given mean and variance

    CERN Document Server

    Olivares, Stefano

    2012-01-01

    Supplement 1 to GUM (GUM-S1) recommends the use of maximum entropy principle (MaxEnt) in determining the probability distribution of a quantity having specified properties, e.g., specified central moments. When we only know the mean value and the variance of a variable, GUM-S1 prescribes a Gaussian probability distribution for that variable. When further information is available, in the form of a finite interval in which the variable is known to lie, we indicate how the distribution for the variable in this case can be obtained. A Gaussian distribution should only be used in this case when the standard deviation is small compared to the range of variation (the length of the interval). In general, when the interval is finite, the parameters of the distribution should be evaluated numerically, as suggested by I. Lira, Metrologia, 46 L27 (2009). Here we note that the knowledge of the range of variation is equivalent to a bias of the distribution toward a flat distribution in that range, and the principle of mini...

  18. Accounting for Cosmic Variance in Studies of Gravitationally-Lensed High-Redshift Galaxies in the Hubble Frontier Field Clusters

    CERN Document Server

    Robertson, Brant E; Dunlop, James S; McLure, Ross J; Stark, Daniel P; McLeod, Derek

    2014-01-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z~7 to >~65% at z~10. Previous studies of high redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  19. Accounting for Cosmic Variance in Studies of Gravitationally Lensed High-redshift Galaxies in the Hubble Frontier Field Clusters

    Science.gov (United States)

    Robertson, Brant E.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; Stark, Dan P.; McLeod, Derek

    2014-12-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z ~ 7 to >~ 65% at z ~ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.
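  The quoted cosmic variance fractions combine in quadrature with the Poisson noise of the small lensed samples. A hedged sketch with illustrative numbers, not the Letter's computation:

```python
def total_fractional_error(n_gal, sigma_cv):
    # fractional error on a number count: Poisson term 1/sqrt(N) combined
    # in quadrature with the cosmic variance fraction sigma_cv
    return (1.0 / n_gal + sigma_cv ** 2) ** 0.5

# e.g. 10 detected galaxies behind the cluster with 35% cosmic variance
err = total_fractional_error(10, 0.35)
```

  Even with many detections the total error cannot fall below sigma_cv, which is why the rising cosmic variance at z ~ 10 dominates the budget.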

  20. Estimation of (co)variance components and genetic parameters of greasy fleece weights in Muzaffarnagari sheep.

    Science.gov (United States)

    Mandal, A; Neser, F W C; Roy, R; Rout, P K; Notter, D R

    2009-02-01

    Variance components and genetic parameters for greasy fleece weights of Muzaffarnagari sheep maintained at the Central Institute for Research on Goats, Makhdoom, Mathura, India, over a period of 29 years (1976 to 2004) were estimated by restricted maximum likelihood (REML), fitting six animal models including various combinations of maternal effects. Data on body weights at 6 (W6) and 12 months (W12) of age were also included in the study. Records of 2807 lambs descended from 160 rams and 1202 ewes were used for the study. Direct heritability estimates for fleece weight at 6 (FW6) and 12 months of age (FW12), and total fleece weights up to 1 year of age (TFW) were 0.14, 0.16 and 0.25, respectively. Maternal genetic and permanent environmental effects did not significantly influence any of the traits under study. Genetic correlations among fleece weights and body weights were obtained from multivariate analyses. Direct genetic correlations of FW6 with W6 and W12 were relatively large, ranging from 0.61 to 0.67, but only moderate genetic correlations existed between FW12 and W6 (0.39) and between FW12 and W12 (0.49). The genetic correlation between FW6 and FW12 was very high (0.95), but the corresponding phenotypic correlation was much lower (0.28). Heritability estimates for all traits were at least 0.15, indicating that there is potential for their improvement by selection. The moderate to high positive genetic correlations between fleece weights and body weights at 6 and 12 months of age suggest that some of the genetic factors that influence animal growth also influence wool growth. Thus selection to improve the body weights or fleece weights at 6 months of age will also result in genetic improvement of fleece weights at subsequent stages of growth.
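  The reported heritabilities are ratios of estimated variance components. A minimal sketch of the relation; the component names and numeric values below are illustrative, not the paper's REML estimates:

```python
def direct_heritability(v_a, v_m, v_pe, v_e):
    # narrow-sense direct heritability: direct additive genetic variance
    # over total phenotypic variance (direct additive + maternal genetic +
    # maternal permanent environment + residual)
    return v_a / (v_a + v_m + v_pe + v_e)

h2 = direct_heritability(0.14, 0.02, 0.04, 0.80)  # ~0.14 of the total
```

  Dropping or adding maternal components in the denominator is exactly what fitting the six alternative animal models explores.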

  1. Unpacking Firm Effects: Modeling Political Alliances in Variance Decomposition of Firm Performance in Turbulent Environments

    Directory of Open Access Journals (Sweden)

    Rodrigo Bandeira de Mello

    2005-01-01

    Full Text Available In this paper, firm heterogeneity in turbulent environments is addressed. It is argued that previous studies have not taken into account the effects of a turbulent environment, like the Brazilian context, in which firms must face a weak and erratic government. In such an environment, the large portion of variance usually attributed to firm effects may be explained, not by the usual assumptions of mainstream scholars, but by a more ‘political’ view of firm differences, namely, the ability to manage valuable political alliances. To account for these differences, a multivariate performance measure was constructed and a new factor, ‘politics effects’, was introduced into the usual model. Company donations to campaign funds in elections were used as a proxy for this factor. A sample of 607 observations of 177 firms in 15 sectors was used. Results suggest that politics effects were not significant (using COV and hierarchical ANOVA). However, unlike in previous studies, transient industry effects appear to be more important than stable effects. Findings also indicate that a better model specification for turbulent environments is needed, and highlight the importance of the cost of capital.

  2. Classification of High Spatial Resolution Image Using Multi Circular Local Binary Pattern and Variance

    Directory of Open Access Journals (Sweden)

    D. Chakraborty

    2013-11-01

    Full Text Available A high spatial resolution satellite image comprises textured and non-textured regions. Hence, classifying a high spatial resolution satellite image by either a pixel-based or a texture-based technique alone does not yield good results. In this study, the Multi Circular Local Binary Pattern (MCLBP) operator and variance (VAR) based algorithms are used together to transform the image for measuring texture. The transformed image is segmented into textured and non-textured regions using a threshold. Subsequently, the original image is split into textured and non-textured regions using this segmented image mask. The extracted textured region is then classified with the ISODATA algorithm using the MCLBP and VAR values of individual pixels, while the extracted non-textured region is classified with ISODATA without these features, as significant textural variation is not found among its classes. Finally, the independently generated classified outputs of the non-textured and textured regions are merged to obtain the final classified image. IKONOS 1 m PAN images were classified using the proposed algorithm, and the classification accuracy was found to be more than 84%.
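The texture measure described above can be illustrated with a single-radius stand-in for the multi-circular operator: compute an 8-neighbour local binary pattern code and local variance per pixel, then threshold the variance to separate textured from non-textured regions. This is a minimal sketch; the variance threshold and the test image are illustrative assumptions, not values from the paper.

```python
def lbp_and_var(img, r, c):
    """Classic 8-neighbour LBP code and local variance for pixel (r, c);
    a single-radius stand-in for the multi-circular operator."""
    centre = img[r][c]
    nbrs = [img[r - 1][c - 1], img[r - 1][c], img[r - 1][c + 1],
            img[r][c + 1], img[r + 1][c + 1], img[r + 1][c],
            img[r + 1][c - 1], img[r][c - 1]]
    code = sum((1 << i) for i, v in enumerate(nbrs) if v >= centre)
    mean = sum(nbrs) / 8.0
    var = sum((v - mean) ** 2 for v in nbrs) / 8.0
    return code, var

def texture_mask(img, var_threshold):
    """1 = textured, 0 = non-textured, by thresholding local variance."""
    h, w = len(img), len(img[0])
    return [[1 if lbp_and_var(img, r, c)[1] > var_threshold else 0
             for c in range(1, w - 1)] for r in range(1, h - 1)]

flat = [[10] * 6 for _ in range(3)]                               # non-textured rows
busy = [[(r * 37 + c * 91) % 255 for c in range(6)] for r in range(3)]  # textured rows
img = flat + busy
mask = texture_mask(img, var_threshold=100.0)
print(mask[0])  # interior row inside the flat region → all zeros
```

In the real algorithm this mask would drive two separate ISODATA classifications before merging; here it only shows how the variance channel splits the image.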

  3. Temperature variance profiles of turbulent thermal convection at high Rayleigh numbers

    Science.gov (United States)

    He, Xiaozhou; Bodenschatz, Eberhard; Ahlers, Guenter

    2016-11-01

    We present measurements of the Nusselt number Nu, and of the temperature variance σ² as a function of vertical position z, in turbulent Rayleigh-Bénard convection of two cylindrical samples with aspect ratios (diameter D/height L) Γ = 0.50 and 0.33. Both samples had D = 1.12 m but different L. We used compressed SF6 gas at pressures up to 19 bars as the fluid. The measurements covered the Rayleigh-number range 10^13 < Ra < 5 × 10^15 at a Prandtl number Pr = 0.80. Near the side wall we found that σ² is independent of Ra when plotted as a function of z/λ, where λ ≡ L/(2 Nu) is a thermal boundary-layer thickness. The profiles σ²(z/λ) for the two Γ values overlapped and followed a logarithmic function for 20 ≲ z/λ ≲ 120. With the observed "-1"-scaling of the temperature power spectra and on the basis of the Perry-Townsend similarity hypothesis, we derived a fitting function σ² = p1 ln(z/λ) + p2 + p3 (z/λ)^(-1/2), which describes the σ² data up to z/λ = 1500. Supported by the Max Planck Society, the Volkswagenstiftung, the DFG Sonderforschungsbereich SFB963, and NSF Grant DMR11-58514.

  4. Estimates of (co)variance components and genetic parameters for growth traits of Avikalin sheep.

    Science.gov (United States)

    Prince, Leslie Leo L; Gowane, Gopal R; Chopra, Ashish; Arora, Amrit L

    2010-08-01

    (Co)variance components and genetic parameters for various growth traits of Avikalin sheep maintained at the Central Sheep and Wool Research Institute, Avikanagar, Rajasthan, India, were estimated by restricted maximum likelihood, fitting six animal models with various combinations of direct and maternal effects. Records of 3,840 animals descended from 257 sires and 1,194 dams, collected over a period of 32 years (1977-2008), were used. Direct heritability estimates (from the best model as per the likelihood ratio test) for weight at birth, weaning, 6 and 12 months of age, and average daily gain from birth to weaning, weaning to 6 months, and 6 to 12 months were 0.28 +/- 0.03, 0.20 +/- 0.03, 0.28 +/- 0.07, 0.15 +/- 0.04, 0.21 +/- 0.03, 0.16, and 0.03 +/- 0.03, respectively. Maternal heritability declined as the animal grew older and was not evident at adult ages or for post-weaning daily gain. The maternal permanent environmental effect (c(2)) declined significantly with the age of the animal; a small c(2) effect on post-weaning weights was probably a carryover of pre-weaning maternal influence. A significant, large negative genetic correlation was observed between direct and maternal genetic effects for all traits, indicating antagonistic pleiotropy, which needs special care when formulating breeding plans. A fair rate of genetic progress seems possible in the flock by selection for all traits, but the direct-maternal genetic correlation needs to be taken into consideration.

  5. Understanding the P×S Aspect of Within-Person Variation: A Variance Partitioning Approach.

    Science.gov (United States)

    Lakey, Brian

    2015-01-01

    This article reviews a variance partitioning approach to within-person variation based on Generalizability Theory and the Social Relations Model. The approach conceptualizes an important part of within-person variation as Person × Situation (P×S) interactions: differences among persons in their profiles of responses across the same situations. The approach provided the first quantitative method for capturing within-person variation and demonstrated very large P×S effects for a wide range of constructs. These include anxiety, five-factor personality traits, perceived social support, leadership, and task performance. Although P×S effects are commonly very large, conceptual and analytic obstacles have thwarted consistent progress. For example, how does one develop a psychological, versus purely statistical, understanding of P×S effects? How does one forecast future behavior when the criterion is a P×S effect? How can understanding P×S effects contribute to psychological theory? This review describes potential solutions to these and other problems developed in the course of conducting research on the P×S aspect of social support. Additional problems that need resolution are identified.
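The variance partitioning the abstract describes can be illustrated with a fully crossed Person × Situation design. The sketch below estimates variance components from mean squares in the Generalizability Theory style; the scores are made up, and with one observation per cell the P×S component is confounded with error.

```python
def variance_components(X):
    """Person x Situation variance components for a fully crossed design
    with one score per cell (so the PxS component absorbs error)."""
    np_, ns = len(X), len(X[0])
    grand = sum(sum(row) for row in X) / (np_ * ns)
    p_means = [sum(row) / ns for row in X]
    s_means = [sum(X[p][s] for p in range(np_)) / np_ for s in range(ns)]
    ms_p = ns * sum((m - grand) ** 2 for m in p_means) / (np_ - 1)
    ms_s = np_ * sum((m - grand) ** 2 for m in s_means) / (ns - 1)
    ss_res = sum((X[p][s] - p_means[p] - s_means[s] + grand) ** 2
                 for p in range(np_) for s in range(ns))
    ms_res = ss_res / ((np_ - 1) * (ns - 1))
    return {"person": max(0.0, (ms_p - ms_res) / ns),
            "situation": max(0.0, (ms_s - ms_res) / np_),
            "pxs": ms_res}

# Hypothetical data: persons have identical means but opposite profiles
# across the same three situations, so only the PxS component is nonzero.
X = [[5, 1, 3], [1, 5, 3], [3, 3, 3]]
comp = variance_components(X)
print(comp)  # person and situation components are 0.0; pxs is 4.0
```

The example is the signature pattern of a large P×S effect: no main-effect variance, yet persons clearly differ in their situation profiles.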

  6. Understanding the PxS Aspect of Within-Person Variation: A Variance Partitioning Approach

    Directory of Open Access Journals (Sweden)

    Brian eLakey

    2016-01-01

    Full Text Available This article reviews a variance partitioning approach to within-person variation based on Generalizability (G) Theory and the Social Relations Model (SRM). The approach conceptualizes an important part of within-person variation as Person x Situation (PxS) interactions: differences among persons in their profiles of responses across the same situations. The approach provided the first quantitative method for capturing within-person variation and demonstrated very large PxS effects for a wide range of constructs. These include anxiety, five-factor personality traits, perceived social support, leadership, and task performance. Although PxS effects are commonly very large, conceptual and analytic obstacles have thwarted consistent progress. For example, how does one develop a psychological, versus purely statistical, understanding of PxS effects? How does one forecast future behavior when the criterion is a PxS effect? How can understanding PxS effects contribute to psychological theory? This review describes potential solutions to these and other problems developed in the course of conducting research on the PxS aspect of social support. Additional problems that need resolution are identified.

  7. Variance reduction techniques for a quantitative understanding of the ΔI = 1/2 rule

    CERN Document Server

    Endress, Eric

    2012-01-01

    The role of the charm quark in the dynamics underlying the ΔI = 1/2 rule for kaon decays can be understood by studying the dependence of kaon decay amplitudes on the charm quark mass, using an effective ΔS = 1 weak Hamiltonian in which the charm is kept as an active degree of freedom. Overlap fermions are employed in order to avoid renormalization problems, as well as to allow access to the deep chiral regime. Quenched results in the GIM limit have shown that a significant part of the enhancement is purely due to low-energy QCD effects; variance reduction techniques based on low-mode averaging were instrumental in determining the relevant weak effective low-energy couplings in this case. Moving away from the GIM limit requires the computation of diagrams containing closed quark loops. We report on our progress in employing a combination of low-mode averaging and stochastic volume sources in order to control these contributions. Results showing a significant improvement in the statistical signal are presented.

  8. Theoretical mean-variance relationship of IP network traffic based on ON/OFF model

    Institute of Scientific and Technical Information of China (English)

    JIN Yi; ZHOU Gang; JIANG DongChen; YUAN Shuai; WANG LiLi; CAO JianTing

    2009-01-01

    The mean-variance relationship (MVR), now generally agreed to follow a power law, is an important function; it is currently used in traffic matrix estimation as a basic statistical assumption. Because all existing papers obtain the MVR only through empirical means, they cannot provide theoretical support for the power-law MVR or for the definition of its power exponent. Furthermore, because of the lack of a theoretical model, traffic matrix estimation methods based on the MVR have not been theoretically supported either. By observing both our laboratory and campus networks for more than one year, we found that such an empirical MVR is not sufficient to describe actual network traffic. In this paper, we derive a theoretical MVR from the ON/OFF model. We then prove that the current empirical power-law MVR is generally reasonable, in that it is an approximate form of the theoretical MVR under a specific precondition, which theoretically supports those traffic matrix estimation algorithms that use the MVR. By verifying our MVR against actual observations and the public DECPKT traces, we show that the theoretical MVR is valid and more capable of describing actual network traffic than the power-law MVR.
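The power-law MVR discussed here has the form Var ≈ φ · Mean^b. A minimal sketch of the empirical approach the authors contrast with their theoretical derivation is to fit φ and b by least squares in log-log space; the synthetic per-link statistics, the exponent 1.5, and the noise model below are illustrative assumptions, not data from the paper.

```python
import math
import random

random.seed(42)

def fit_power_law_mvr(means, variances):
    """Least-squares fit of log(var) = log(phi) + b*log(mean),
    returning (phi, b) for the power-law MVR var = phi * mean**b."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    phi = math.exp(my - b * mx)
    return phi, b

# Synthetic per-link traffic whose variance grows as mean**1.5 (assumed)
means = [10.0 * (i + 1) for i in range(20)]
variances = [2.0 * m ** 1.5 * random.uniform(0.9, 1.1) for m in means]

phi, b = fit_power_law_mvr(means, variances)
print(round(b, 2))  # recovered exponent, close to the assumed 1.5
```

This is exactly the kind of purely empirical fit that, per the abstract, lacks theoretical grounding without a model such as ON/OFF.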

  9. On variance of exponents for isolated surface singularities with modality ≤ 2 In Memory of Professor Philip Wagreich

    Institute of Scientific and Technical Information of China (English)

    YAU Stephen S.T.; ZUO HuaiQing

    2014-01-01

    Using the theory of the mixed Hodge structure one can define a notion of exponents of a singularity. In 2000, Hertling proposed a conjecture about the variance of the exponents of a singularity. Here, we prove that the Hertling conjecture is true for isolated surface singularities with modality ≤ 2.

  10. Impact of Cosmic Variance on the Galaxy-Halo Connection for Lyman-α Emitters

    CERN Document Server

    Mejia-Restrepo, Julian E

    2016-01-01

    In this paper we study the impact of cosmic variance and observational uncertainties in constraining the mass and occupation fraction, f_occ, of dark matter halos hosting Lyman-α emitting galaxies (LAEs) at high redshift. To this end, we construct mock catalogs from an N-body simulation to match the typical size of observed fields at z = 3.1 (~1 deg²). In our model a dark matter halo with mass in the range M_min

  11. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  12. Enhancement of photoacoustic tomography in the tissue with speed-of-sound variance using ultrasound computed tomography

    Institute of Scientific and Technical Information of China (English)

    程任翔; 陶超; 刘晓峻

    2015-01-01

    The speed-of-sound variance will decrease the imaging quality of photoacoustic tomography in acoustically inhomogeneous tissue. In this study, ultrasound computed tomography is combined with photoacoustic tomography to enhance photoacoustic imaging in this situation. The speed-of-sound distribution is recovered by ultrasound computed tomography, and an improved delay-and-sum method is then used to reconstruct the image from the photoacoustic signals. The simulation results validate that the proposed method obtains a better photoacoustic image than the conventional method when the speed-of-sound variance is increased. In addition, the influences of the speed-of-sound variance and the fan angle on the image quality are quantitatively explored to optimize the imaging scheme. The proposed method performs well even when the speed-of-sound variance reaches 14.2%. Furthermore, an optimized fan angle is identified that maintains good image quality at a low hardware cost. This study has potential value in extending the biomedical applications of photoacoustic tomography.
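Why speed-of-sound variance degrades delay-and-sum reconstruction can be seen from the times of flight the beamformer must apply. The sketch below (layer thicknesses and speeds are illustrative assumptions, not values from the paper) computes the delay error incurred by assuming a uniform speed of sound across a two-layer medium:

```python
def travel_time(path_segments):
    """Time of flight along a straight ray crossing layers with
    different speeds of sound; each segment is (length_m, speed_m_s)."""
    return sum(length / speed for length, speed in path_segments)

# Ray from source to detector: 2 cm of soft tissue at 1540 m/s plus
# 1 cm of fat-like tissue at 1450 m/s (assumed, illustrative values).
t_true = travel_time([(0.02, 1540.0), (0.01, 1450.0)])
t_uniform = travel_time([(0.03, 1540.0)])  # naive uniform-speed delay

delay_error_us = (t_true - t_uniform) * 1e6
print(round(delay_error_us, 3))  # misalignment in microseconds
```

Even this sub-microsecond error corresponds to a fraction of a millimetre of misregistration at typical ultrasound wavelengths, which is why the improved delay-and-sum uses the speed map recovered by ultrasound computed tomography.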

  13. Variance optimal stopping for geometric Levy processes

    DEFF Research Database (Denmark)

    Gad, Kamille Sofie Tågholt; Pedersen, Jesper Lund

    2015-01-01

    The main result of this paper is the solution to the optimal stopping problem of maximizing the variance of a geometric Lévy process. We call this problem the variance problem. We show that, for some geometric Lévy processes, we achieve higher variances by allowing randomized stopping. Furthermore...

  14. Does the variance of incubation temperatures always constitute a significant selective force for origin of reptilian viviparity?

    Institute of Scientific and Technical Information of China (English)

    Hong LI; Zheng WANG; Ce CHEN; Xiang JI

    2012-01-01

    To test the hypothesis that the variance of incubation temperature may have constituted a significant selective force for reptilian viviparity, we incubated eggs of the slender forest skink Scincella modesta in five thermally different natural nests and at two constant temperatures (18 ℃ and 21 ℃). Our manipulation of incubation temperature had significant effects on incubation length and several hatchling traits (snout-vent length, tail length, fore-limb length, and sprint speed), but not on hatching success and other hatchling traits examined (body mass, head size, and hind-limb length). Incubation length was nonlinearly sensitive to temperature, but it was not correlated with the thermal variance when holding the thermal mean constant. The 18 ℃ treatment not only produced smaller sized hatchlings but also resulted in decreased sprint speed. Eggs in the nest with the greatest proportion of temperatures higher than 28 ℃ also produced smaller sized hatchlings. None of the hatchling traits examined was affected by the thermal variance. Thermal fluctuations did result in longer incubation times, but females would benefit little from maintaining stable body temperatures or selecting thermally stable nests in terms of the reduced incubation length. Our data show that the mean rather than the variance of temperatures has a key role in influencing incubation length and hatchling phenotypes, and thus do not support the hypothesis tested.

  15. Exploring the isotopic niche: isotopic variance, physiological incorporation, and the temporal dynamics of foraging

    Directory of Open Access Journals (Sweden)

    Justin Douglas Yeakel

    2016-01-01

    Full Text Available Consumer foraging behaviors are dynamic, changing in response to prey availability, seasonality, competition, and even the consumer's physiological state. The isotopic composition of a consumer is a product of these factors as well as the isotopic 'landscape' of its prey, i.e. the isotopic mixing space. Stable isotope mixing models are used to back-calculate the most likely proportional contribution of a set of prey to a consumer's diet based on their respective isotopic distributions; however, they are disconnected from ecological process. Here we build a mechanistic framework that links the ecological and physiological processes of an individual consumer to the isotopic distribution that describes its diet, and ultimately to the isotopic composition of its own tissues, defined as its 'isotopic niche'. By coupling these processes, we systematically investigate under what conditions the isotopic niche of a consumer changes as a function of both the geometric properties of its mixing space and foraging strategies that may be static or dynamic over time. Results of our derivations reveal general insight into the conditions impacting isotopic niche width as a function of consumer specialization on prey, as well as the consumer's ability to transition between diets over time. We show analytically that moderate specialization on isotopically unique prey can serve to maximize a consumer's isotopic niche width, while temporally dynamic diets will tend to result in peak isotopic variance during dietary transitions. We demonstrate the relevance of our theoretical findings by examining a marine system composed of nine invertebrate species commonly consumed by sea otters. In general, our analytical framework highlights the complex interplay of mixing space geometry and consumer dietary behavior in driving expansion and contraction of the isotopic niche. Because this approach is established on ecological mechanism, it is well-suited for enhancing the

  16. BDNF contributes to the genetic variance of milk fat yield in German Holstein cattle

    Directory of Open Access Journals (Sweden)

    Lea G. Zielke

    2011-04-01

    Full Text Available The gene encoding the brain-derived neurotrophic factor (BDNF) has been repeatedly associated with human obesity. As such, it could also contribute to the regulation of energy partitioning and the amount of secreted milk fat during lactation, which plays an important role in milk production in dairy cattle. Therefore, we performed an association study using estimated breeding values of bulls and yield deviations of German Holstein dairy cattle to test the effect of BDNF on milk fat yield. A highly significant effect (corrected p-value = 3.362 × 10^-4) was identified for an SNP 168 kb upstream of the BDNF transcription start. The association tests provided evidence for an additive allele effect of 5.13 kg of fat per lactation on the estimated breeding value for milk fat yield in bulls and 6.80 kg of fat on the own production performance in cows, explaining 1.72% and 0.60% of the phenotypic variance in the analysed populations, respectively. The analyses of bulls and cows consistently showed three haplotype groups that differed significantly from each other, suggesting at least two different mutations in the BDNF region affecting milk fat yield. The fat-yield-increasing alleles also had low but significant positive effects on protein and total milk yield, which suggests a general role of the BDNF region in energy partitioning rather than a specific regulation of fat synthesis. The results obtained in dairy cattle suggest similar effects of BDNF on milk composition in other species, including man.

  17. A data variance technique for automated despiking of magnetotelluric data with a remote reference

    Energy Technology Data Exchange (ETDEWEB)

    Kappler, K.

    2011-02-15

    The magnetotelluric method employs co-located surface measurements of electric and magnetic fields to infer the local electrical structure of the earth. The frequency-dependent 'apparent resistivity' curves can be inaccurate at long periods if input data are contaminated, even when robust remote reference techniques are employed. Data despiking prior to processing can result in significantly more reliable estimates of long period apparent resistivities. This paper outlines a two-step method of automatic identification and replacement for spike-like contamination of magnetotelluric data, based on the simultaneity of natural electric and magnetic field variations at distant sites. This simultaneity is exploited both to identify windows in time when the array data are compromised, and to generate synthetic data that replace observed transient noise spikes. In the first step, windows in data time series containing spikes are identified via intersite comparison of channel 'activity', such as the variance of differenced data within each window. In the second step, plausible data for replacement of flagged windows is calculated by Wiener filtering coincident data in clean channels. The Wiener filters, which express the time-domain relationship between various array channels, are computed using an uncontaminated segment of array training data. Examples are shown where the algorithm is applied to artificially contaminated data, and to real field data. In both cases all spikes are successfully identified. In the case of implanted artificial noise, the synthetic replacement time series are very similar to the original recording. In all cases, apparent resistivity and phase curves obtained by processing the despiked data are much improved over curves obtained from raw data.
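The first step of the method, flagging windows by the variance of differenced data compared across sites, can be sketched as follows. The window length and threshold ratio are hypothetical choices, and the Wiener-filter replacement step is omitted:

```python
import math
import statistics

def window_activity(series, win):
    """'Activity' per window: variance of first-differenced data."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return [statistics.pvariance(diffs[i:i + win])
            for i in range(0, len(diffs) - win + 1, win)]

def flag_spike_windows(channels, win, ratio=10.0):
    """Flag windows where any channel's activity exceeds `ratio` times
    the median activity across channels (hypothetical threshold)."""
    acts = [window_activity(ch, win) for ch in channels]
    flagged = []
    for w in range(len(acts[0])):
        med = statistics.median(a[w] for a in acts)
        if any(a[w] > ratio * med for a in acts):
            flagged.append(w)
    return flagged

# Two clean channels recording the same natural variation, plus one
# channel with an implanted artificial spike.
n = 200
clean = [math.sin(0.3 * i) for i in range(n)]
spiky = clean[:]
spiky[125] += 50.0  # artificial spike, landing in window 12 for win=10
print(flag_spike_windows([clean, clean[:], spiky], win=10))  # → [12]
```

The intersite comparison is what makes this work: natural field variations are simultaneous across sites, so a window where one channel's activity stands far above the array median is a spike candidate rather than signal.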

  18. Influence of monte carlo variance with fluence smoothing in VMAT treatment planning with Monaco TPS

    Directory of Open Access Journals (Sweden)

    B Sarkar

    2016-01-01

    Full Text Available Introduction: The study aimed to investigate the interplay between Monte Carlo variance (MCV) and the fluence smoothing factor (FSF) in volumetric modulated arc therapy treatment planning, using a sample set of complex cases and an X-ray Voxel Monte Carlo-based treatment planning system equipped with tools to tune fluence smoothness as well as MCV. Materials and Methods: The dosimetric (dose to tumor volume and organs at risk) and physical (treatment time, number of segments, and so on) characteristics of a set of 45 treatment plans, covering all combinations of 1%, 3%, and 5% MCV and FSF values of 1, 3, and 5, were evaluated for five carcinoma esophagus cases. Result: Increasing the FSF reduced the treatment time. Variation of MCV and FSF gave maximum planning target volume (PTV), heart, and lung dose variations of 3.6%, 12.8%, and 4.3%, respectively. The heart dose variation was the highest among all organs at risk. The highest variation in spinal cord dose was 0.6 Gy. Conclusion: Variation of MCV and FSF influences the organ-at-risk (OAR) doses significantly, but not PTV coverage or dose homogeneity. Variation in FSF causes differences in the dosimetric and physical parameters of the treatment plans, whereas variation of MCV does not. An MCV of 3% or less does not improve plan quality significantly (physically or clinically) compared with an MCV greater than 3%; using an MCV between 3% and 5% gives results similar to 1% with less calculation time. The minimally detected differences in plan quality suggest that the optimum FSF can be set between 3 and 5.

  19. Growth rates and variances of unexploited wolf populations in dynamic equilibria

    Science.gov (United States)

    Mech, L. David; Fieberg, John

    2015-01-01

    Several states have begun harvesting gray wolves (Canis lupus), and these states and various European countries are closely monitoring their wolf populations. To provide appropriate perspective for determining unusual or extreme fluctuations in their managed wolf populations, we analyzed natural, long-term, wolf-population-density trajectories totaling 130 years of data from 3 areas: Isle Royale National Park in Lake Superior, Michigan, USA; the east-central Superior National Forest in northeastern Minnesota, USA; and Denali National Park, Alaska, USA. Ratios between minimum and maximum annual sizes for 2 mainland populations (n = 28 and 46 yr) varied from 2.5–2.8, whereas for Isle Royale (n = 56 yr), the ratio was 6.3. The interquartile range (25th percentile, 75th percentile) for annual growth rates, Nt+1/Nt, was (0.88, 1.14), (0.92, 1.11), and (0.86, 1.12) for Denali, Superior National Forest, and Isle Royale respectively. We fit a density-independent model and a Ricker model to each time series, and in both cases we considered the potential for observation error. Mean growth rates from the density-independent model were close to 0 for all 3 populations, with 95% credible intervals including 0. We view the estimated model parameters, including those describing annual variability or process variance, as providing useful summaries of the trajectories of these populations. The estimates of these natural wolf population parameters can serve as benchmarks for comparison with those of recovering wolf populations. Because our study populations were all from circumscribed areas, fluctuations in them represent fluctuations in densities (i.e., changes in numbers are not confounded by changes in occupied area as would be the case with populations expanding their range, as are wolf populations in many states).
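The benchmark statistics reported above, the ratio between minimum and maximum annual counts and the interquartile range of annual growth rates N_{t+1}/N_t, can be computed from a count series as follows. The simulated counts are hypothetical, not the study data:

```python
import random
import statistics

random.seed(1)

def growth_rates(counts):
    """Annual growth rates N_{t+1} / N_t."""
    return [b / a for a, b in zip(counts, counts[1:])]

def summary(counts):
    rates = growth_rates(counts)
    q = statistics.quantiles(rates, n=4)  # q[0] = 25th, q[2] = 75th pct
    return {
        "min_max_ratio": max(counts) / min(counts),
        "iqr": (round(q[0], 2), round(q[2], 2)),
    }

# Hypothetical annual wolf counts for a circumscribed study area,
# fluctuating around dynamic equilibrium with a floor of 10 animals.
counts = [50]
for _ in range(30):
    counts.append(max(10, round(counts[-1] * random.uniform(0.8, 1.25))))

print(summary(counts))
```

Statistics of this kind, computed for unexploited populations, are what the authors propose as benchmarks for judging whether fluctuations in managed wolf populations are unusual.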

  20. Heterogeneity of variance components for preweaning growth in Romane sheep due to the number of lambs reared

    Directory of Open Access Journals (Sweden)

    Poivey Jean-Paul

    2011-09-01

    Full Text Available Abstract Background The pre-weaning growth rate of lambs, an important component of meat market production, is affected by maternal and direct genetic effects. The French genetic evaluation model takes into account the number of lambs suckled by applying a multiplicative factor (1 for a lamb reared as a single, 0.7 for twin-reared lambs) to the maternal genetic effect, in addition to including the birth*rearing type combination as a fixed effect, which acts on the mean. However, little evidence has been provided to justify the use of this multiplicative model. The two main objectives of the present study were to determine, by comparing models of analysis, 1) whether pre-weaning growth is the same trait in single- and twin-reared lambs and 2) whether the multiplicative coefficient represents a good approach for taking this possible difference into account. Methods Data on the pre-weaning growth rate, defined as the average daily gain from birth to 45 days of age, of 29,612 Romane lambs born between 1987 and 2009 at the experimental farm of La Sapinière (INRA, France) were used to compare eight models that account in various ways for the number of lambs reared per dam. Models were compared using the Akaike information criterion. Results The model that best fitted the data assumed that 1) direct (maternal) effects correspond to the same trait regardless of the number of lambs reared, 2) the permanent environmental effects and variances associated with the dam depend on the number of lambs reared, and 3) the residual variance depends on the number of lambs reared. Even though this model fitted the data better than a model that included a multiplicative coefficient, little difference was found between EBV from the different models (the correlation between EBV varied from 0.979 to 0.999). Conclusions Based on experimental data, the current genetic evaluation model can be improved to better take into account the number of lambs reared. Thus, it would be of

  1. Estimation of Turbulent Fluxes Using the Flux-Variance Method over an Alpine Meadow Surface in the Eastern Tibetan Plateau

    Institute of Scientific and Technical Information of China (English)

    WANG Shaoying; ZHANG Yu; L(U) Shihua; LIU Heping; SHANG Lunyu

    2013-01-01

    The flux-variance similarity relation and the vertical transfer of scalars exhibit dissimilarity over different types of surfaces, resulting in different parameterization approaches of relative transport efficiency among scalars when estimating turbulent fluxes by the flux-variance method. We investigated these issues using eddy-covariance measurements over an open, homogeneous and flat grassland in the eastern Tibetan Plateau in summer, under intermediate hydrological conditions during the rainy season. In unstable conditions, temperature, water vapor, and CO2 followed the flux-variance similarity relation, but not in precisely the same way, owing to the different (active or passive) roles of these scalars. The similarity constants of temperature, water vapor, and CO2 were found to be 1.12, 1.19, and 1.17, respectively. Heat was transported more efficiently than water vapor and CO2. Based on the estimated sensible heat flux, five parameterization methods of the relative transport efficiency of heat to water vapor and CO2 were examined for estimating latent heat and CO2 fluxes. Local determination of the flux-variance similarity relation is recommended for this purpose: it better represents the averaged relative transport efficiency and is technically easier to apply than other, more complex approaches.
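In the free-convective limit, the flux-variance method estimates sensible heat flux from the standard deviation of temperature alone via σ_T/T* = C1 (-z/L)^(-1/3), which rearranges to H = ρ c_p (σ_T/C1)^{3/2} (κ g z / T̄)^{1/2}. The sketch below uses the temperature similarity constant C1 = 1.12 reported in the abstract; the air density, measurement height, and input values are illustrative assumptions.

```python
import math

def sensible_heat_flux(sigma_T, T_mean, z, C1=1.12, rho=1.1, cp=1004.0,
                       kappa=0.4, g=9.81):
    """Flux-variance estimate of sensible heat flux H (W m^-2) in the
    free-convective limit. C1 is the temperature similarity constant
    from the abstract; rho, cp, and z here are assumed values."""
    return rho * cp * (sigma_T / C1) ** 1.5 * math.sqrt(kappa * g * z / T_mean)

# Example: sigma_T = 0.5 K measured at z = 2 m, mean air temperature 288 K
H = sensible_heat_flux(sigma_T=0.5, T_mean=288.0, z=2.0)
print(round(H, 1))  # W m^-2
```

Latent heat and CO2 fluxes then follow by scaling this H with a parameterized relative transport efficiency, which is where the five approaches compared in the paper differ.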

  2. SNP-Based Heritability Estimates of Common and Specific Variance in Self- and Informant-Reported Neuroticism Scales

    NARCIS (Netherlands)

    Realo, Anu; van der Most, Peter J; Allik, Jüri; Esko, Tõnu; Jeronimus, Bertus F; Kööts-Ausmees, Liisi; Mõttus, René; Tropf, Felix C; Snieder, Harold; Ormel, Johan

    2016-01-01

    OBJECTIVE: Our study aims to estimate the proportion of the phenotypic variance of Neuroticism and its facet scales that can be attributed to common SNPs in two adult populations from Estonia (EGCUT; N = 3,292) and the Netherlands (Lifelines; N = 13,383). METHOD: Genomic-Relatedness-Matrix Restricte

  3. Direct and indirect measures of spider fear predict unique variance in children’s fear-related behaviour

    NARCIS (Netherlands)

    Klein, A.M.; Becker, Eni; Rinck, M.

    2011-01-01

    This study investigated whether direct and indirect measures predict unique variance components of fearful behaviour in children. One hundred eighty-nine children aged between 9 and 12 performed a pictorial version of the emotional Stroop task (EST), filled out the Spider Anxiety and Disgust Screeni

  4. Cognitive and Linguistic Sources of Variance in 2-Year-Olds' Speech-Sound Discrimination: A Preliminary Investigation

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2014-01-01

    Purpose: This preliminary investigation explored potential cognitive and linguistic sources of variance in 2-year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method: Twenty typically…

  5. Methods to optimize livestock breeding programs with genotype by environment interaction and genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.

    2007-01-01

    Genotype by environment interaction (G × E) and genetic heterogeneity of environmental variance are both related to genetic variation in environmental sensitivity. Both phenomena can have consequences for livestock breeding programs. This thesis focuses on developing methods to optimize livestock br

  6. Commentary--A United Front: Using the Range of Psychological Variance in Cutting-Edge Practice and Emerging Research

    Science.gov (United States)

    Jackson, Simon Anthony; Kleitman, Sabina

    2015-01-01

    Psychological and behavioral variance can be explained by differences in the environment, and between and within individuals. Almost 60 years ago, Cronbach (1957) called for converging investigations into all three sources as important for the development of accurate science and useful applications in the real world. Yet rifts among researchers…

  7. Replication of a gene-environment interaction via multimodel inference: additive-genetic variance in adolescents' general cognitive ability increases with family-of-origin socioeconomic status.

    Science.gov (United States)

    Kirkpatrick, Robert M; McGue, Matt; Iacono, William G

    2015-03-01

    The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size-corrected AIC and based our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES, an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects and provide suggestions for future research.

  8. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns...... illustrate the functioning and properties of our modelling strategy in practice. The results show that the long memory type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance....

  9. Multiperiod mean-variance efficient portfolios with endogenous liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2011-01-01

    We study the optimal policies and mean-variance frontiers (MVF) of a multiperiod mean-variance optimization of assets and liabilities (AL). This makes the analysis more challenging than for a setting based on purely exogenous liabilities, in which the optimization is only performed on the assets while keeping liabilities fixed. We show that, under general conditions for the joint AL dynamics, the optimal policies and the MVF can be decomposed into an orthogonal set of basis returns using exte...

  10. Analysis of variance by items in linguistic experiments, with a defense of a paper accused of fabricating statistics

    Institute of Scientific and Technical Information of China (English)

    杨旭

    2012-01-01

    When a linguistic experiment samples both subjects and items, the sounder practice is to run two analyses of variance (ANOVA) on the same effect: one by subjects and one by items. This approach, however, is uncommon in foreign-language empirical research in China, and some studies that adopted it have even been criticized for "fabricating statistical results." The present paper first provides a brief introduction to ANOVA by items, then discusses its range of application and gives concrete operational steps in SPSS. Finally, it offers a methodological defense of one strongly criticized empirical paper in which each effect was tested by two F-tests: those two F-tests come from the by-subjects and by-items ANOVAs and are entirely appropriate, and the criticism reflects unfamiliarity with by-item ANOVA.
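Not part of the original record: a minimal Python sketch of the by-subjects (F1) and by-items (F2) analyses the abstract describes, using synthetic data and hypothetical column names (the paper itself works in SPSS):

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic long-format data: 20 subjects x 10 items x 2 conditions.
subjects = np.repeat(np.arange(20), 20)
items = np.tile(np.repeat(np.arange(10), 2), 20)
cond = np.tile([0, 1], 200)
# Condition 1 responses are shifted upward (a true effect).
rt = rng.normal(500, 50, 400) + 30 * cond
df = pd.DataFrame({"subject": subjects, "item": items,
                   "cond": cond, "rt": rt})

def anova_by(df, unit):
    """One-way ANOVA on per-unit condition means:
    F1 if unit='subject', F2 if unit='item'."""
    means = df.groupby([unit, "cond"])["rt"].mean().reset_index()
    groups = [g["rt"].values for _, g in means.groupby("cond")]
    return stats.f_oneway(*groups)

f1 = anova_by(df, "subject")   # by-subjects analysis
f2 = anova_by(df, "item")      # by-items analysis
print(f"F1 = {f1.statistic:.2f}, p = {f1.pvalue:.4g}")
print(f"F2 = {f2.statistic:.2f}, p = {f2.pvalue:.4g}")
```

An effect is conventionally reported as reliable only when both F1 and F2 reach significance, which is exactly the dual-test pattern the defended paper was criticized for.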

  11. Impact of Cosmic Variance on the Galaxy-Halo Connection for Lyα Emitters

    Science.gov (United States)

    Mejía-Restrepo, Julián E.; Forero-Romero, Jaime E.

    2016-09-01

    In this paper we study the impact of cosmic variance and observational uncertainties in constraining the mass and occupation fraction, f_occ, of dark matter (DM) halos hosting Lyα-emitting galaxies (LAEs) at high redshift. To this end, we construct mock catalogs from an N-body simulation to match the typical size of observed fields at z = 3.1 (~1 deg^2). In our model a DM halo with mass in the range M_min < M_h < M_max can host at most one detectable LAE. We proceed to explore the parameter space determined by M_min, M_max, and f_occ with a Markov chain Monte Carlo algorithm, using the angular correlation function and the LAE number density as observational constraints. We find that the preferred minimum and maximum masses in our model span a wide range, 10^10.0 h^-1 M_⊙ ≤ M_min ≤ 10^11.1 h^-1 M_⊙ and 10^11.0 h^-1 M_⊙ ≤ M_max ≤ 10^13.0 h^-1 M_⊙, followed by a wide range in the occupation fraction, 0.02 ≤ f_occ ≤ 0.30. As a consequence, the median mass, M_50, of all the consistent models has a large uncertainty, M_50 = 3.16^{+9.34}_{-2.37} × 10^10 h^-1 M_⊙. However, we find that the same individual models have a relatively tight 1σ scatter around the median mass, ΔM_1σ = 0.55^{+0.11}_{-0.31} dex. We are also able to show that f_occ is uniquely determined by M_min, regardless of M_max. We argue that upcoming large surveys covering at least 25 deg^2 should be able to put tighter constraints on M_min and f_occ through the width of the LAE number density distribution constructed over several fields of ~1 deg^2.

  12. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.

  13. Effect of Variances and Manufacturing Tolerances on the Design Strength and Life of Mechanically Fastened Composite Joints

    Science.gov (United States)

    1978-12-01

    AFFDL-TR-78-179 (Wright-Patterson AFB): Effect of Variances and Manufacturing Tolerances on the Design Strength and Life of Mechanically Fastened Composite Joints. Cited works include "…Degradation for Advanced Composites", Lockheed-California, F33615-77-C-3084, quarterlies 1977 to present, and Phillips, D. C. and Scott, J. M., "The Shear…"

  14. A comprehensive comparison of normalization methods for loading control and variance stabilization of reverse-phase protein array data.

    Science.gov (United States)

    Liu, Wenbin; Ju, Zhenlin; Lu, Yiling; Mills, Gordon B; Akbani, Rehan

    2014-01-01

    Loading control (LC) and variance stabilization of reverse-phase protein array (RPPA) data have been challenging, mainly due to the small number of proteins in an experiment and the lack of reliable inherent control markers. In this study, we compare eight different normalization methods for LC and variance stabilization. The invariant marker set concept was first applied to the normalization of high-throughput gene expression data: a set of "invariant" markers is selected to create a virtual reference sample, and all samples are then normalized to that virtual reference. We propose a variant of this method in the context of RPPA data normalization and compare it with seven other normalization methods previously reported in the literature. The invariant marker set method performs well with respect to LC, variance stabilization, and association with the immunohistochemistry/fluorescence in situ hybridization data for three key markers in breast tumor samples, while the other methods show inferior performance. The proposed method is a promising approach for improving the quality of RPPA data.
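A simplified, hypothetical illustration of the invariant-marker idea sketched in the abstract (select the most stable markers, build a virtual reference, correct each sample's offset); the paper's actual procedure may differ in its selection and scaling details:

```python
import numpy as np

def invariant_marker_normalize(x, n_invariant=50):
    """Toy invariant-marker normalization (illustrative only).
    x: (markers x samples) matrix of log-scale intensities.
    Selects the markers with the smallest across-sample variance,
    builds a virtual reference from their per-marker means, and
    shifts each sample so its invariant markers match the reference."""
    var = x.var(axis=1)
    inv = np.argsort(var)[:n_invariant]          # most stable markers
    reference = x[inv].mean(axis=1)              # virtual reference sample
    offsets = np.median(x[inv] - reference[:, None], axis=0)
    return x - offsets[None, :]

rng = np.random.default_rng(1)
true = rng.normal(0, 1, size=(200, 1))           # shared marker profile
loading = rng.normal(0, 0.5, size=(1, 8))        # per-sample loading offsets
data = true + loading + rng.normal(0, 0.05, size=(200, 8))
norm = invariant_marker_normalize(data)
print(norm.mean(axis=0).round(2))                # loading spread removed
```

After normalization the per-sample means agree closely, which is the loading-control effect the abstract evaluates.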

  15. Phase variance optical coherence microscopy for label-free imaging of the developing vasculature in zebrafish embryos

    Science.gov (United States)

    Chen, Yu; Trinh, Le A.; Fingler, Jeff; Fraser, Scott E.

    2016-12-01

    A phase variance optical coherence microscope (pvOCM) has been created to image blood flow in the microvasculature of zebrafish embryos without the use of exogenous labels. The pvOCM imaging system has axial and lateral resolutions of 2.8 μm in tissue and an imaging depth of more than 100 μm. Images of zebrafish embryos at 2 to 5 days postfertilization identified detailed anatomical structure based on OCM intensity contrast. Phase variance contrast offered visualization of blood flow in the arteries, veins, and capillaries. The pvOCM images of the vasculature were confirmed by direct comparison with fluorescence microscopy images of transgenic embryos in which the vascular endothelium is labeled with green fluorescent protein. The ability of pvOCM to capture regional blood flow activity reveals functional information of great utility for the study of vascular development.

  16. Assessing the uncertainty of glacier mass-balance simulations in the European Arctic based on variance decomposition

    Science.gov (United States)

    Sauter, T.; Obleitner, F.

    2015-12-01

    State-of-the-art numerical snowpack models essentially rely on observational data for initialization, forcing, parametrization, and validation. Such data are available in increasing amounts, but the propagation of the related uncertainties into simulation results has received rather limited attention so far. Depending on model complexity, even small errors can have a profound effect on simulations, which dilutes our confidence in the results. This paper aims to quantify the overall and fractional contributions of some archetypal measurement uncertainties to snowpack simulations in Arctic environments. The sensitivity pattern is studied at two sites representing the accumulation and ablation areas of the Kongsvegen glacier (Svalbard), using the snowpack scheme Crocus. The contribution of measurement errors to model output variance, either alone or through interaction, is decomposed using global sensitivity analysis. This allows us to investigate the temporal evolution of the fractional contribution of different factors to key model output metrics, which provides a more detailed understanding of the model's sensitivity pattern. The analysis demonstrates that the specified uncertainties in the precipitation and long-wave radiation forcings had a strong influence on the calculated surface-height changes and surface-energy balance components. The model output sensitivity patterns also revealed some characteristic seasonal imprints. For example, uncertainties in long-wave radiation affect the calculated surface-energy balance throughout the year at both study sites, while precipitation exerted the most influence during winter and at the upper site. Such findings are valuable for identifying critical parameters and improving their measurement; correspondingly, updated simulations may shed new light on the confidence of results from snow or glacier mass- and energy-balance models. This is relevant for many applications, for example in the fields of avalanche and

  17. The ALHAMBRA survey: An empirical estimation of the cosmic variance for merger fraction studies based on close pairs

    Science.gov (United States)

    López-Sanjuan, C.; Cenarro, A. J.; Hernández-Monteagudo, C.; Varela, J.; Molino, A.; Arnalte-Mur, P.; Ascaso, B.; Castander, F. J.; Fernández-Soto, A.; Huertas-Company, M.; Márquez, I.; Martínez, V. J.; Masegosa, J.; Moles, M.; Pović, M.; Aguerri, J. A. L.; Alfaro, E.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; Del Olmo, A.; González Delgado, R. M.; Husillos, C.; Infante, L.; Perea, J.; Prada, F.; Quintana, J. M.

    2014-04-01

    Aims: Our goal is to estimate empirically, for the first time, the cosmic variance that affects merger fraction studies based on close pairs. Methods: We compute the merger fraction from photometric redshift close pairs with 10 h^-1 kpc ≤ r_p ≤ 50 h^-1 kpc and Δv ≤ 500 km s^-1 and measure it in the 48 sub-fields of the ALHAMBRA survey. We study the distribution of the measured merger fractions, which follow a log-normal function, and estimate the cosmic variance σ_v as the intrinsic dispersion of the observed distribution. We develop a maximum likelihood estimator to measure a reliable σ_v and avoid the dispersion due to the observational errors (including the Poisson shot noise term). Results: The cosmic variance σ_v of the merger fraction depends mainly on (i) the number density of the populations under study, for both the principal (n_1) and the companion (n_2) galaxy in the close pair, and (ii) the probed cosmic volume V_c. We do not find a significant dependence on either the search radius used to define close companions, the redshift, or the physical selection (luminosity or stellar mass) of the samples. Conclusions: We have estimated from observations the cosmic variance that affects the measurement of the merger fraction by close pairs. We provide a parametrisation of the cosmic variance with n_1, n_2, and V_c: σ_v ∝ n_1^-0.54 V_c^-0.48 (n_2/n_1)^-0.37. Thanks to this prescription, future merger fraction studies based on close pairs can properly account for the cosmic variance in their results. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (MPIA) at Heidelberg and the Instituto de Astrofísica de Andalucía (IAA-CSIC). Appendix is available in electronic form at http://www.aanda.org
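As an illustration (the function and its name are mine, not the paper's), the published scaling σ_v ∝ n_1^-0.54 V_c^-0.48 (n_2/n_1)^-0.37 can be used to compare two survey configurations; the unknown proportionality constant cancels in a ratio:

```python
def cosmic_variance_ratio(n1, n2, vc, n1_ref, n2_ref, vc_ref):
    """Ratio of cosmic variance between two survey configurations,
    using the ALHAMBRA scaling sigma_v ∝ n1^-0.54 * Vc^-0.48 * (n2/n1)^-0.37.
    The proportionality constant cancels, so only the ratio is meaningful."""
    def s(n1, n2, vc):
        return n1 ** -0.54 * vc ** -0.48 * (n2 / n1) ** -0.37
    return s(n1, n2, vc) / s(n1_ref, n2_ref, vc_ref)

# Quadrupling the probed volume at fixed number densities
# reduces sigma_v by a factor of 4**-0.48 (about 0.51).
print(cosmic_variance_ratio(1e-3, 1e-3, 4.0, 1e-3, 1e-3, 1.0))
```

Units cancel as well, so n_1, n_2, and V_c only need to be expressed consistently between the two configurations.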

  18. Stochastic Funding of a Defined Contribution Pension Plan with Proportional Administrative Costs and Taxation under Mean-Variance Optimization Approach

    Directory of Open Access Journals (Sweden)

    Charles I Nkeki

    2014-11-01

    This paper studies a mean-variance portfolio selection problem with stochastic salary, proportional administrative costs, and taxation in the accumulation phase of a defined contribution (DC) pension scheme. The fund process is subject to taxation, while the contribution of the pension plan member (PPM) is tax exempt. It is assumed that the flow of contributions of a PPM is invested in a market characterized by a cash account and a stock. The optimal portfolio processes and expected wealth for the PPM are established. The efficient and parabolic frontiers of the PPM's portfolios in mean-variance space are obtained. It is found that the capital market line can be attained when the initial fund and the contribution rate are zero, and that the optimal portfolio process involves an inter-temporal hedging term that offsets shocks to the stochastic salary of the PPM.

  19. Robust variance-constrained control for a class of continuous time-delay systems with parameter uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Yang Kailiang [Department of Automation, Shanghai Jiaotong University, 800 Dong Chuan Road, Shanghai 200240 (China); Lu Junguo [Department of Automation, Shanghai Jiaotong University, 800 Dong Chuan Road, Shanghai 200240 (China)], E-mail: jglu@sjtu.edu.cn

    2009-03-15

    In this paper, we consider the robust variance-constrained control problem for uncertain linear continuous time-delay systems. The purpose of this multi-objective control problem is to design a static state feedback controller, independent of the parameter uncertainties, such that the resulting closed-loop system is asymptotically stable and the steady-state variance of each state simultaneously does not exceed its individual pre-specified value. Using the linear matrix inequality (LMI) approach, existence conditions for such controllers are derived, and a parameterized representation of the desired controllers is presented in terms of the feasible solutions to a certain LMI system. An illustrative numerical example demonstrates the effectiveness of the proposed results.

  20. The effect of heterogeneous variance on efficiency and power of cluster randomized trials with a balanced 2 × 2 factorial design.

    Science.gov (United States)

    Lemme, Francesca; van Breukelen, Gerard J P; Candel, Math J J M; Berger, Martijn P F

    2015-10-01

    Sample size calculation for cluster randomized trials (CRTs) with a 2 × 2 factorial design is complicated by the combination of nesting (of individuals within clusters) with crossing (of two treatments). Typically, clusters and individuals are allocated across treatment conditions in a balanced fashion, which is optimal under homogeneity of variance. However, the variance is likely to be heterogeneous if there is a treatment effect. An unbalanced allocation is then more efficient, but impractical because the optimal allocation depends on the unknown variances. Focusing on CRTs with a 2 × 2 design, this paper addresses two questions: How much efficiency is lost by having a balanced design when the outcome variance is heterogeneous? How large must the sample size be for a balanced allocation to have sufficient power under heterogeneity of variance? We consider different scenarios of heterogeneous variance. Within each scenario, we determine the relative efficiency of a balanced design as a function of the level (cluster, individual, or both) and the amount of heterogeneity of variance. We then provide a simple correction of the sample size for the loss of power due to heterogeneity of variance when a balanced allocation is used. The theory is illustrated with an example of a published 2 × 2 CRT.