WorldWideScience

Sample records for analysis of variance

  1. A mean-variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  2. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.

  3. Formative Use of Intuitive Analysis of Variance

    Science.gov (United States)

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…

  4. Estimation of analysis and forecast error variances

    Directory of Open Access Journals (Sweden)

    Malaquias Peña

    2014-11-01

    Accurate estimates of error variances in numerical analyses and forecasts (i.e. the difference between analysis or forecast fields and nature on the resolved scales) are critical for the evaluation of forecasting systems, the tuning of data assimilation (DA) systems and the proper initialisation of ensemble forecasts. The problem is made difficult by errors in observations and the difficulty in their estimation, by the fact that estimates of analysis errors derived via DA schemes are influenced by the same assumptions as those used to create the analysis fields themselves, and by the presumed but unknown correlation between analysis and forecast errors. In this paper, an approach is introduced for the unbiased estimation of analysis and forecast errors. The method is independent of any assumption or tuning parameter used in DA schemes. It combines information from differences between forecast and analysis fields (‘perceived forecast errors’) with prior knowledge regarding the time evolution of (1) forecast error variance and (2) correlation between errors in analyses and forecasts. The quality of the error estimates, given the validity of the prior relationships, depends on the sample size of independent measurements of perceived errors. In a simulated forecast environment, the method is demonstrated to reproduce the true analysis and forecast error within predicted error bounds. The method is then applied to forecasts from four leading numerical weather prediction centres to assess the performance of their corresponding DA and modelling systems. Error variance estimates are qualitatively consistent with earlier studies regarding the performance of the forecast systems compared. The estimated correlation between forecast and analysis errors is found to be a useful diagnostic of the performance of observing and DA systems. In case of significant model-related errors, a methodology to decompose initial value and model-related forecast errors is also
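    The basic identity behind the "perceived forecast error" idea above, namely that the variance of the forecast-minus-analysis difference mixes the two error variances and their correlation, can be checked numerically. The sketch below uses made-up error statistics, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Simulate correlated analysis and forecast errors with known parameters
# (sigma_a, sigma_f, and rho are illustrative values only).
sigma_a, sigma_f, rho = 1.0, 2.0, 0.5
cov = [[sigma_a**2, rho * sigma_a * sigma_f],
       [rho * sigma_a * sigma_f, sigma_f**2]]
a_err, f_err = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# "Perceived" forecast error: forecast field minus analysis field
perceived = f_err - a_err

# Identity: Var(f - a) = sigma_f^2 + sigma_a^2 - 2*rho*sigma_f*sigma_a
expected = sigma_f**2 + sigma_a**2 - 2 * rho * sigma_f * sigma_a
print(perceived.var(), expected)  # the two should agree closely
```

    The paper's contribution is recovering the individual terms of this identity from perceived errors plus prior relationships; the sketch only verifies the identity itself.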

  5. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size) and, finally, estimation of power using a noncentral F distribution. Various numerical examples are provided to help understand and apply the method. Problems related to post hoc power estimation are discussed.
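    The three-step procedure described above (critical F value, noncentrality parameter, noncentral F tail) can be sketched for a simple fixed-effects design. The group counts and effect size below are hypothetical, and the noncentrality formula `nc = f**2 * N` follows the common Cohen's-f convention rather than anything stated in the abstract:

```python
import scipy.stats as st

# Illustrative one-way design: k groups, n observations per group.
k, n = 4, 20
df1, df2 = k - 1, k * n - k
alpha = 0.05

# Noncentrality parameter from a standardized effect size (Cohen's f).
effect_f = 0.25
nc = effect_f**2 * k * n

f_crit = st.f.ppf(1 - alpha, df1, df2)   # step 1: critical F value
power = st.ncf.sf(f_crit, df1, df2, nc)  # step 3: P(F' > f_crit) under H1
print(round(power, 3))
```

    The same recipe extends to the MANOVA statistics once their F approximations and noncentrality parameters are in hand.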

  6. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

    Science.gov (United States)

    Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...

  7. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).

  8. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances

    Science.gov (United States)

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2014-01-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene–gene and gene–environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity. PMID:23921533
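    As a rough sketch of the building block used in such a meta-analysis, Levene's test can be applied to a single study's genotype groups with `scipy.stats.levene`. The data here are simulated, and the group sizes and variance effect are illustrative only:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(1)

# Simulated quantitative trait for three genotype groups (AA, Aa, aa);
# the third group is given a larger variance to mimic a variance-QTL.
aa = rng.normal(0.0, 1.0, 2000)
ab = rng.normal(0.0, 1.0, 2000)
bb = rng.normal(0.0, 2.0, 2000)

# center='median' is the robust Brown-Forsythe variant of Levene's test
stat, p = levene(aa, ab, bb, center='median')
print(stat, p)
```

    In the meta-analytic setting, each cohort would contribute its own group-wise summary rather than individual-level data.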

  9. Analysis of Variance in the Modern Design of Experiments

    Science.gov (United States)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.

  10. Case study using analysis of variance to determine groups’ variations

    Directory of Open Access Journals (Sweden)

    Ardelean Flavius A.

    2017-01-01

    This paper aims to present the analysis of a part manufactured in three shifts, which has a specific characteristic dimension, using the DFSS (Design for Six Sigma) ANOVA (Analysis of Variance) method. In every shift, the significant characteristic ("SC") dimension should be produced within the given tolerance. The question that arises is: does the shift have any influence on the realization of the "SC" dimension? By using the one-way ANOVA method, one can observe the variation between the means of the three shifts. Afterwards, specific action can be undertaken to adjust, if necessary, the differences between the shifts.
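    A minimal sketch of the one-way ANOVA question posed above, with simulated "SC" measurements for three shifts (all numbers hypothetical):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)

# Hypothetical "SC" dimension measurements from three production shifts;
# shift 3 is simulated with a small mean offset.
shift1 = rng.normal(10.00, 0.02, 50)
shift2 = rng.normal(10.00, 0.02, 50)
shift3 = rng.normal(10.03, 0.02, 50)

f_stat, p = f_oneway(shift1, shift2, shift3)
print(f_stat, p)  # a small p-value suggests shift affects the dimension
```

    A significant result would then be followed by pairwise comparisons to locate which shift differs.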

  11. A guide to SPSS for analysis of variance

    CERN Document Server

    Levine, Gustav

    2013-01-01

    This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce

  12. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  13. Batch variation between branchial cell cultures: An analysis of variance

    DEFF Research Database (Denmark)

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

    We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed to be simple duplicates for testing the effect of two induced factors (apical or basolateral addition of radioactive precursors, and different apical media) on the incorporation of 14C-acetate and 32P-phosphate into tissue lipids. Unfortunately, they did not altogether give the same result. By accepting this fact and introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when...

  14. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  15. Model Based Analysis of the Variance Estimators for the Combined ...

    African Journals Online (AJOL)

    In this paper we study the variance estimators for the combined ratio estimator under an appropriate asymptotic framework. An alternative bias-robust variance estimator, different from that suggested by Valliant (1987), is derived. Several variance estimators are compared in an empirical study using a real population.

  16. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  17. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy.

    Science.gov (United States)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-09-01

    The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Balanced data according to the one-factor random effect model were assumed. Analysis-of-variance (anova)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The anova-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
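    The ANOVA-based variance-component estimation described here can be sketched for the balanced one-factor random-effects model; the patient count, fraction count, and error SDs below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup errors (mm): m patients, n fractions each,
# generated from a one-factor random-effects model.
m, n = 30, 10
sigma_sys, sigma_rand = 2.0, 1.0            # true systematic / random SDs
patient_shift = rng.normal(0, sigma_sys, size=(m, 1))
data = patient_shift + rng.normal(0, sigma_rand, size=(m, n))

# ANOVA mean squares for the balanced one-way random-effects model
grand = data.mean()
row_means = data.mean(axis=1)
ms_between = n * ((row_means - grand) ** 2).sum() / (m - 1)
ms_within = ((data - row_means[:, None]) ** 2).sum() / (m * (n - 1))

var_random = ms_within                       # estimate of sigma_rand^2
var_systematic = (ms_between - ms_within) / n  # estimate of sigma_sys^2
print(var_systematic ** 0.5, var_random ** 0.5)
```

    The expected mean square between patients is sigma_rand^2 + n*sigma_sys^2, which is why subtracting the within mean square and dividing by n recovers the systematic component.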

  18. Analysis of experiments in square lattice with emphasis on variance components. i. Individual analysis

    OpenAIRE

    Silva, Heyder Diniz; Regazzi, Adair José; Cruz, Cosme Damião; Viana, José Marcelo Soriano

    1999-01-01

    This paper focused on four alternatives for the analysis of experiments in square lattice as far as the estimation of variance components and some genetic parameters are concerned: 1) intra-block analysis with adjusted treatments and blocks within unadjusted replications; 2) lattice analysis as complete randomized blocks; 3) intra-block analysis with unadjusted treatments and blocks within adjusted replications; 4) lattice analysis as complete randomized blocks, utilizing the adjusted means of treat...

  19. A Large-Scale Analysis of Variance in Written Language.

    Science.gov (United States)

    Johns, Brendan T; Jamieson, Randall K

    2018-01-22

    The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words based on the statistical redundancies contained within natural language (e.g., Griffiths, Steyvers, & Tenenbaum; Jones & Mewhort; Landauer & Dumais; Mikolov, Sutskever, Chen, Corrado, & Dean). The models treat knowledge as an interaction of processing mechanisms and the structure of language experience. But language experience is often treated agnostically. We report a distributional semantic analysis that shows written language in fiction books varies appreciably between books from different genres, books from the same genre, and even books written by the same author. Given that current theories assume that word knowledge reflects an interaction between processing mechanisms and the language environment, the analysis shows the need for the field to engage in a more deliberate consideration and curation of the corpora used in computational studies of natural language processing. Copyright © 2018 Cognitive Science Society, Inc.

  20. The Variance Normalization Method of Ridge Regression Analysis.

    Science.gov (United States)

    Bulcock, J. W.; And Others

    The testing of contemporary sociological theory often calls for the application of structural-equation models to data which are inherently collinear. It is shown that simple ridge regression, which is commonly used for controlling the instability of ordinary least squares regression estimates in ill-conditioned data sets, is not a legitimate…
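    For context, the simple ridge regression estimator referred to above is ordinary least squares with a constant k added to the diagonal of X'X; a minimal numpy sketch on deliberately collinear data (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Ill-conditioned (collinear) design: x2 is nearly a copy of x1.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

def ridge(X, y, k):
    """Simple ridge estimator: (X'X + kI)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)    # ordinary least squares: unstable here
beta_ridge = ridge(X, y, 1.0)  # ridge constant k=1 stabilizes the fit
print(beta_ols, beta_ridge)
```

    With collinear predictors, the individual OLS coefficients swing wildly while their sum stays well determined; ridge trades a small bias for much lower variance in the individual estimates.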

  1. Variance heterogeneity analysis for detection of potentially interacting genetic loci: method and its limitations

    Directory of Open Access Journals (Sweden)

    van Duijn Cornelia

    2010-10-01

    Abstract. Background: In the presence of an interaction between a genotype and some factor in the determination of a trait's value, the trait's variance is expected to be increased in the group of subjects carrying this genotype. Thus, a test of heterogeneity of variances can be used to screen for potentially interacting single-nucleotide polymorphisms (SNPs). In this work, we evaluated the statistical properties of variance heterogeneity analysis with respect to the detection of potentially interacting SNPs in the case where the interaction variable is unknown. Results: Through simulations, we investigated type I error for Bartlett's test, Bartlett's test with prior rank transformation of a trait to normality, and Levene's test for different genetic models. Additionally, we derived an analytical expression for power estimation. We showed that Bartlett's test has acceptable type I error in the case of a trait following a normal distribution, whereas Levene's test kept the nominal type I error under all scenarios investigated. For the power of the variance homogeneity test, we showed (as opposed to the power of the direct test, which uses information about the known interacting factor) that, given the same interaction effect, the power can vary widely depending on the non-estimable direct effect of the unobserved interacting variable. Thus, for a given interaction effect, only very wide limits on the power of the variance homogeneity test can be estimated. We also applied Levene's approach to test genome-wide homogeneity of variances of C-reactive protein in the Rotterdam Study population (n = 5959). In this analysis, we replicate previous results of Pare and colleagues (2010) for the SNP rs12753193 (n = 21,799). Conclusions: Screening for differences in variances among genotypes of a SNP is a promising approach, as a number of biologically interesting models may lead to heterogeneity of variances.
    However, it should be kept in mind that the absence of variance heterogeneity for
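    The headline comparison above (Bartlett's test loses its nominal type I error under non-normality while Levene's test keeps it) can be reproduced in miniature by simulation. The heavy-tailed t(3) trait and replication counts below are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.stats import bartlett, levene

rng = np.random.default_rng(5)
reps, n = 500, 100
reject_b = reject_l = 0

# Heavy-tailed trait (Student t with 3 df) with EQUAL variances in both
# groups: every rejection at alpha = 0.05 is a type I error.
for _ in range(reps):
    g0 = rng.standard_t(3, n)
    g1 = rng.standard_t(3, n)
    reject_b += bartlett(g0, g1).pvalue < 0.05
    reject_l += levene(g0, g1, center='median').pvalue < 0.05

# Bartlett's rejection rate inflates well above 0.05; Levene's stays near it
print(reject_b / reps, reject_l / reps)
```

    Under normal data both tests would sit near the nominal 5% rate; the heavy tails are what break Bartlett's test.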

  2. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide whether any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.)

  3. Gender Variance on Campus: A Critical Analysis of Transgender Voices

    Science.gov (United States)

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn,…

  4. Inheritance of dermatoglyphic traits in twins: univariate and bivariate variance decomposition analysis.

    Science.gov (United States)

    Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene

    2012-01-01

    Dermatoglyphic traits in a sample of twins were analyzed to estimate the resemblance between MZ and DZ twins and to evaluate the mode of inheritance using maximum likelihood-based variance decomposition analysis. The additive genetic variance component was significant in both sexes for four traits: PII, AB_RC, RC_HB, and ATD_L. AB_RC and RC_HB had significant sex differences in means, whereas PII and ATD_L did not. The bivariate variance decomposition analysis revealed that PII and RC_HB have a significant correlation in both the genetic and residual components. A significant correlation in the additive genetic variance between AB_RC and ATD_L was also observed. The same analysis on the female subsample for the three traits RBL, RBR, and AB_DIS showed that the additive genetic component of RBR was significant and the sibling component of AB_DIS was not, while the others could not be constrained to zero. All three components (additive, sibling, and residual) were significantly correlated between each pair of traits in the bivariate variance decomposition analysis.

  5. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    Science.gov (United States)

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
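    As a baseline for the quantities discussed above: under normality the variance of the sample variance has the closed form Var(s²) = 2σ⁴/(n−1), which a quick simulation confirms. The abstract's point is that for non-normal synaptic amplitude distributions this normal-theory value is a poor approximation and higher-moment (h-statistic) estimators are needed; the sketch below shows the normal baseline only:

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 50, 20000
sigma2 = 4.0  # true variance (SD = 2.0)

# Empirical variance of the sample variance across many normal samples
s2 = np.array([rng.normal(0, 2.0, n).var(ddof=1) for _ in range(reps)])
empirical = s2.var()

# Theoretical value under normality: Var(s^2) = 2*sigma^4 / (n - 1)
theoretical = 2 * sigma2**2 / (n - 1)
print(empirical, theoretical)
```

    For a skewed amplitude distribution the same simulation would show the normal-theory value misestimating the spread, which is exactly where moment-based estimators of the variance of the variance come in.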

  6. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected...

  7. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    Science.gov (United States)

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.

  8. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation'. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution, violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, 'Studentizing' or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean.
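    The suggested summary plot of ranked standard deviation versus ranked laboratory mean reduces to a Spearman rank correlation; here is a sketch with fabricated per-laboratory summaries following a constant-CV model (all numbers hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)

# Hypothetical QC summary: per-laboratory mean and SD where SD grows
# with the mean (roughly constant coefficient of variation, CV = 10%).
lab_means = rng.uniform(50, 150, 25)
lab_sds = 0.10 * lab_means * rng.uniform(0.8, 1.2, 25)

rho, p = spearmanr(lab_means, lab_sds)
print(rho, p)  # strong positive rank correlation: variance tracks the mean
```

    A rank correlation near zero would instead support a constant-SD model, with pooling of SD estimates across levels.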

  9. Variance of a potential of mean force obtained using the weighted histogram analysis method.

    Science.gov (United States)

    Cukier, Robert I

    2013-11-27

    A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.

  10. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

Students are regularly asked to solve single-factor analysis of variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
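The summary-statistics approach described in this record follows directly from the standard one-way ANOVA identities: the between-group sum of squares comes from the group means, and the within-group sum of squares from the group standard deviations. A minimal stdlib-Python sketch, with hypothetical group data (not from the paper):

```python
# One-way ANOVA computed directly from summary statistics
# (group sizes n_i, means m_i, standard deviations s_i).
# A sketch of the standard formulas; the example data are hypothetical.

def anova_from_summary(ns, means, sds):
    """Return (F, df_between, df_within) for a single-factor ANOVA."""
    k = len(ns)
    N = sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / N
    # Between-group SS: weighted squared deviations of group means.
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
    # Within-group SS: rebuilt from the sample standard deviations.
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    df_between, df_within = k - 1, N - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

F, df1, df2 = anova_from_summary([10, 10, 10], [5.0, 6.0, 7.0], [1.0, 1.2, 0.9])
```

No raw data are needed, which is exactly the situation the record describes.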

  11. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers must strictly adhere to rules from extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.

  12. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

    Science.gov (United States)

    Finch, W. Holmes

    2016-01-01

    Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

  13. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    International Nuclear Information System (INIS)

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2012-01-01

This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can be characterized using multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory. - Highlights: ► The variability of VOC measurements was partitioned using nested analysis. ► Indoor VOCs were primarily controlled by seasonal and residence effects. ► Outdoor VOC levels were homogeneous within neighborhoods. ► Measurement uncertainty was high for many outdoor VOCs. ► Variance component analysis is useful for designing effective sampling programs. - Indoor VOC concentrations were primarily controlled by seasonal and residence effects; and outdoor concentrations were homogeneous within neighborhoods. Variance component analysis is a useful tool for designing effective sampling programs.
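In the simplest balanced one-way case, the nested apportionment described above reduces to the classical method-of-moments (ANOVA) estimator of between-group and within-group variance components. A minimal sketch with hypothetical measurements (not the study's VOC data):

```python
# Method-of-moments variance component estimates for a balanced one-way
# random-effects layout (e.g. residences as groups, repeated measurements
# within each). A sketch only; the data are hypothetical.

def variance_components(groups):
    """groups: list of equal-length lists of measurements.
    Returns (between-group variance, within-group variance)."""
    g, n = len(groups), len(groups[0])
    grand = sum(sum(grp) for grp in groups) / (g * n)
    means = [sum(grp) / n for grp in groups]
    # Mean squares between and within groups.
    msb = n * sum((m - grand) ** 2 for m in means) / (g - 1)
    msw = sum((x - m) ** 2 for grp, m in zip(groups, means) for x in grp) / (g * (n - 1))
    var_within = msw
    var_between = max((msb - msw) / n, 0.0)  # truncate negative estimates at 0
    return var_between, var_within

vb, vw = variance_components([[4.1, 3.9, 4.0], [6.0, 6.2, 5.8], [5.1, 4.9, 5.0]])
```

Each component's share of the total then gives the percentage apportionment quoted in the abstract.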

  14. SAS/IML Macros for a Multivariate Analysis of Variance Based on Spatial Signs

    Directory of Open Access Journals (Sweden)

    Jaakko Nevalainen

    2006-05-01

Recently, new nonparametric multivariate extensions of the univariate sign methods have been proposed. Randles (2000) introduced an affine invariant multivariate sign test for the multivariate location problem. Later on, Hettmansperger and Randles (2002) considered an affine equivariant multivariate median corresponding to this test. The new methods have promising efficiency and robustness properties. In this paper, we review these developments and compare them with the classical multivariate analysis of variance model. A new SAS/IML tool for performing a spatial sign based multivariate analysis of variance is introduced.

  15. Heterogeneity of large macromolecular complexes revealed by 3-D cryo-EM variance analysis

    Science.gov (United States)

    Zhang, Wei; Kimmel, Marek; Spahn, Christian M.T.; Penczek, Pawel A.

    2008-01-01

Macromolecular structure determination by cryo-electron microscopy (EM) and single particle analysis is based on the assumption that imaged molecules have identical structure. With the increased size of processed datasets it becomes apparent that many complexes coexist in a mixture of conformational states or contain flexible regions. As the cryo-EM data is collected in the form of projections of imaged molecules, the information about variability of reconstructed density maps is not directly available. To address this problem, we describe a new implementation of the bootstrap resampling technique that yields estimates of voxel-by-voxel variance of a structure reconstructed from the set of its projections. We introduce a novel highly efficient reconstruction algorithm that is based on direct Fourier inversion and which incorporates correction for the transfer function of the microscope, thus extending the resolution limits of variance estimation. We also describe a validation method to determine the number of resampled volumes required to achieve a stable estimate of the variance. The proposed bootstrap method was applied to a dataset of the 70S ribosome complexed with tRNA and the elongation factor G. The variance map revealed regions of high variability: the L1 protein, the EF-G, the 30S head and the ratchet-like subunit rearrangement. The proposed method of variance estimation opens new possibilities for single particle analysis, by extending applicability of the technique to heterogeneous datasets of macromolecules, and to complexes with significant conformational variability. PMID:19081053
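The bootstrap resampling idea behind the voxel-by-voxel variance map can be illustrated in one dimension: resample the input data with replacement, recompute the "reconstruction" (here just a mean, not the paper's Fourier-inversion algorithm), and take the variance across resamples. All data here are hypothetical:

```python
import random

# Bootstrap estimate of the variance of a reconstructed quantity,
# reduced to a 1-D toy where the "reconstruction" is the sample mean.
# A sketch of the resampling idea only.

def bootstrap_variance(data, statistic, n_resamples=1000, seed=0):
    rng = random.Random(seed)
    stats = []
    for _ in range(n_resamples):
        # Resample the measurements with replacement and recompute.
        resample = [rng.choice(data) for _ in data]
        stats.append(statistic(resample))
    mean = sum(stats) / len(stats)
    return sum((s - mean) ** 2 for s in stats) / (len(stats) - 1)

data = [2.0, 2.1, 1.9, 2.3, 1.8, 2.2, 2.0, 2.1]
var_of_mean = bootstrap_variance(data, lambda xs: sum(xs) / len(xs))
```

For a mean, the bootstrap estimate should land near the classical s²/n; in the cryo-EM setting the same loop runs over resampled projection sets and the statistic is the reconstructed density at each voxel.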

  16. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

    Science.gov (United States)

    Schwartz, Todd A.

    2013-01-01

    I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

  17. Use of hypotheses for analysis of variance models: challenging the current practice

    NARCIS (Netherlands)

    van Wesel, F.; Boeije, H.R.; Hoijtink, H.

    2013-01-01

    In social science research, hypotheses about group means are commonly tested using analysis of variance. While deemed to be formulated as specifically as possible to test social science theory, they are often defined in general terms. In this article we use two studies to explore the current

  20. The microcomputer scientific software series 3: general linear model--analysis of variance.

    Science.gov (United States)

    Harold M. Rauscher

    1985-01-01

    A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...

  1. Structure analysis of interstellar clouds - I. Improving the Delta-variance method

    NARCIS (Netherlands)

    Ossenkopf, V.; Krips, M.; Stutzki, J.

    Context. The Delta-variance analysis, introduced as a wavelet-based measure for the statistical scaling of structures in astronomical maps, has proven to be an efficient and accurate method of characterising the power spectrum of interstellar turbulence. It has been applied to observed molecular

  2. Structure analysis of interstellar clouds - II. Applying the Delta-variance method to interstellar turbulence

    NARCIS (Netherlands)

    Ossenkopf, V.; Krips, M.; Stutzki, J.

    Context. The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. It has been applied both to simulations of interstellar turbulence and to observed molecular cloud maps. In Paper I we proposed essential

  3. WASP (Write a Scientific Paper) using Excel 9: Analysis of variance.

    Science.gov (United States)

    Grech, Victor

    2018-03-03

    Analysis of variance (ANOVA) may be required by researchers as an inferential statistical test when more than two means require comparison. This paper explains how to perform ANOVA in Microsoft Excel. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Toward an objective evaluation of teacher performance: The use of variance partitioning analysis, VPA.

    Directory of Open Access Journals (Sweden)

    Eduardo R. Alicias

    2005-05-01

Evaluation of teacher performance is usually done with the use of ratings made by students, peers, and principals or supervisors, and at times, self-ratings made by the teachers themselves. The trouble with this practice is that it is obviously subjective, and vulnerable to what Glass and Martinez call the "politics of teacher evaluation," as well as to professional incapacities of the raters. The value-added analysis (VAA) model is one attempt to make evaluation objective and evidence-based. However, the VAA model, especially that of the Tennessee Value Added Assessment System (TVAAS) developed by William Sanders, appears flawed essentially because it posits the untenable assumption that the gain score of students (value added) is attributable only to the teacher(s), ignoring other significant explanators of student achievement like IQ and socio-economic status. Further, the use of the gain score (value added) as a dependent variable appears hobbled with the validity threat called "statistical regression," as well as the problem of isolating the conflated effects of two or more teachers. The proposed variance partitioning analysis (VPA) model seeks to partition the total variance of the dependent variable (post-test student achievement) into various portions representing: first, the effects attributable to the set of teacher factors; second, effects attributable to the set of control variables, the most important of which are the IQ of the student, his pretest score on that particular dependent variable, and some measures of his socio-economic status; and third, the unexplained effects/variance. It is not difficult to see that when the second and third quanta of variance are partitioned out of the total variance of the dependent variable, what remains is that attributable to the teacher. Two measures of teacher effect are hereby proposed: the proportional teacher effect and the direct teacher effect.
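The sequential partitioning that a VPA-style model calls for can be sketched with ordinary regression: remove the control variable's share of variance first, then the teacher factor's added share (via Frisch-Waugh residualization), leaving the unexplained remainder. The variable names and data below are hypothetical illustrations, not the author's implementation:

```python
# Partition Var(y) (post-test achievement) into: share explained by a
# control variable (e.g. pretest score), additional share explained by a
# teacher factor, and an unexplained remainder. Sequential regression
# sketch with hypothetical data.

def simple_resid(y, x):
    """Residuals of y after simple linear regression on x (with intercept)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

def partition_variance(y, control, teacher):
    n = len(y)
    my = sum(y) / n
    ss_total = sum((yi - my) ** 2 for yi in y)
    e_y = simple_resid(y, control)        # y with the control removed
    ss_control = ss_total - sum(e ** 2 for e in e_y)
    e_t = simple_resid(teacher, control)  # teacher factor with the control removed
    e_final = simple_resid(e_y, e_t)      # then remove the teacher effect
    ss_teacher = sum(e ** 2 for e in e_y) - sum(e ** 2 for e in e_final)
    ss_unexplained = ss_total - ss_control - ss_teacher
    return ss_control / ss_total, ss_teacher / ss_total, ss_unexplained / ss_total

control = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
teacher = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
y = [c + t for c, t in zip(control, teacher)]
s_c, s_t, s_u = partition_variance(y, control, teacher)
```

The three shares sum to one; what remains after the control and unexplained portions is the quantity the paper attributes to the teacher.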

  5. Study on Analysis of Variance on the indigenous wild and cultivated rice species of Manipur Valley

    Science.gov (United States)

    Medhabati, K.; Rohinikumar, M.; Rajiv Das, K.; Henary, Ch.; Dikash, Th.

    2012-10-01

The analysis of variance revealed considerable variation among the cultivars and the wild species for yield and other quantitative characters in both years of investigation. The highly significant differences among the cultivars in the year-wise and pooled analyses of variance for all 12 characters reveal that there is enough genetic variability for all the characters studied. The existence of genetic variability is of paramount importance for starting a judicious plant breeding programme. Since introduced high-yielding rice cultivars usually do not perform well, improvement of indigenous cultivars is a clear choice for increasing rice production. The genetic variability of 37 rice germplasms in 12 agronomic characters estimated in the present study can be used in breeding programmes.

  6. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
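For reference, the non-overlapping Allan variance at averaging factor m is half the mean squared difference of successive m-sample averages. A minimal sketch with hypothetical data (the record itself concerns which spectra such variances can distinguish, not how to compute them):

```python
# Non-overlapping Allan variance of a series y at averaging factor m:
# half the mean squared difference of successive m-sample block averages.
# A sketch; the input series is hypothetical.

def allan_variance(y, m=1):
    # Average the series in non-overlapping blocks of length m.
    n_blocks = len(y) // m
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_blocks - 1)]
    return 0.5 * sum(diffs) / len(diffs)

avar = allan_variance([0.0, 1.0, 0.0, 1.0, 0.0, 1.0], m=1)
```

Sweeping m and plotting the result against the averaging time is what reveals (or, per this record, fails to uniquely reveal) the underlying noise spectrum.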

  7. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the…
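The variance-based (Sobol') first-order index these approximations target can be estimated by plain Monte Carlo using the pick-freeze identity S1 = Cov(f(X1, X2), f(X1, X2')) / Var(f). The sketch below uses a toy additive model, not the MAPK cascade or any of the paper's approximation schemes:

```python
import random

# Monte Carlo pick-freeze estimate of a first-order Sobol' sensitivity
# index: freeze x1, resample x2, and correlate the two model outputs.
# A sketch on a toy additive model with uniform inputs on [0, 1].

def first_order_index(f, n=20000, seed=1):
    rng = random.Random(seed)
    ya, yab = [], []
    for _ in range(n):
        x1, x2, x2p = rng.random(), rng.random(), rng.random()
        ya.append(f(x1, x2))
        yab.append(f(x1, x2p))   # x1 frozen, x2 independently resampled
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    cov = sum((a - mean) * (b - mean) for a, b in zip(ya, yab)) / n
    return cov / var

# For f = x1 + 0.1*x2, the exact index is Var(x1)/Var(f) = 1/1.01.
s1 = first_order_index(lambda x1, x2: x1 + 0.1 * x2)
```

This is the expensive baseline the paper's DA/PA/GHI/OHA techniques try to avoid: the cost grows with both the sample size and the number of inputs.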

  8. The analysis of variance in anaesthetic research: statistics, biography and history.

    Science.gov (United States)

    Pandit, J J

    2010-12-01

    Multiple t-tests (or their non-parametric equivalents) are often used erroneously to compare the means of three or more groups in anaesthetic research. Methods for correcting the p value regarded as significant can be applied to take account of multiple testing, but these are somewhat arbitrary and do not avoid several unwieldy calculations. The appropriate method for most such comparisons is the 'analysis of variance' that not only economises on the number of statistical procedures, but also indicates if underlying factors or sub-groups have contributed to any significant results. This article outlines the history, rationale and method of this analysis.
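The inflation that motivates ANOVA over repeated t-tests is easy to quantify: for k independent comparisons each run at alpha = 0.05, the familywise chance of at least one false positive is 1 - (1 - alpha)^k. A one-line illustration:

```python
# Familywise error rate for k independent tests at level alpha:
# the probability of at least one false positive grows quickly with k.

alpha = 0.05
fwer = {k: 1 - (1 - alpha) ** k for k in (1, 3, 6, 10)}
for k, p in fwer.items():
    print(k, round(p, 3))
```

With ten pairwise comparisons the nominal 5% level has already inflated past 40%, which is exactly the problem a single omnibus ANOVA avoids.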

  9. [Analysis of variance of repeated data measured by water maze with SPSS].

    Science.gov (United States)

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method for clinical and basic medicine researchers who adopt repeated-measures designs. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there are relations among the repeatedly measured data. The SPSS statistical package is able to fulfil this process.

  10. Improved analysis of all-sky meteor radar measurements of gravity wave variances and momentum fluxes

    Directory of Open Access Journals (Sweden)

    V. F. Andrioli

    2013-05-01

The advantages of using a composite day analysis for all-sky interferometric meteor radars when measuring mean winds and tides are widely known. On the other hand, problems arise if this technique is applied to Hocking's (2005) gravity wave analysis for all-sky meteor radars. In this paper we describe how a simple change in the procedure makes it possible to use a composite day in Hocking's analysis. Also, we explain how a modified composite day can be constructed to test its ability to measure gravity wave momentum fluxes. Test results for specified mean, tidal, and gravity wave fields, including tidal amplitudes and gravity wave momentum fluxes varying strongly with altitude and/or time, suggest that the modified composite day allows characterization of monthly mean profiles of the gravity wave momentum fluxes, with good accuracy at least at the altitudes where the meteor counts are large (from 89 to 92.5 km). In the present work we also show that the variances measured with Hocking's method are often contaminated by the tidal fields and suggest a method of empirical correction derived from a simple simulation model. The results presented here greatly increase our confidence because they show that our technique is able to remove the tide-induced false variances from Hocking's analysis.

  11. Methods and applications of linear models regression and the analysis of variance

    CERN Document Server

    Hocking, Ronald R

    2013-01-01

Praise for the Second Edition: "An essential desktop reference book . . . it should definitely be on your bookshelf." -Technometrics. A thoroughly updated book, Methods and Applications of Linear Models: Regression and the Analysis of Variance, Third Edition features innovative approaches to understanding and working with models and theory of linear regression. The Third Edition provides readers with the necessary theoretical concepts, which are presented using intuitive ideas rather than complicated proofs, to describe the inference that is appropriate for the methods being discussed. The book

  12. Variance heterogeneity analysis for detection of potentially interacting genetic loci: Method and its limitations

    NARCIS (Netherlands)

    M.V. Struchalin (Maksim); A. Dehghan (Abbas); J.C.M. Witteman (Jacqueline); C.M. van Duijn (Cornelia); Y.S. Aulchenko (Yurii)

    2010-01-01

Background: In the presence of an interaction between a genotype and a certain factor in the determination of a trait's value, it is expected that the trait's variance is increased in the group of subjects having this genotype. Thus, a test of heterogeneity of variances can be used as a test to screen for

  13. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the posteroanterior wrist radiographs of 95 patients who were diagnosed with a scaphoid fracture. All fractures with displacement less than 1 mm treated conservatively were included, and ulnar variance was measured on standard posteroanterior wrist radiographs in all patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (±0.85) mm (CI -2.25 to 0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (±1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance between patients with ulnar variance less than -1 mm and those with ulnar variance greater than -1 mm: patients with ulnar variance less than -1 mm had a greater risk of developing scaphoid nonunion, OR 4.58 (CI 1.51 to 13.89), p<.007. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.
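An odds ratio of this kind can be recomputed from a 2x2 table with the standard log odds ratio normal approximation for its confidence interval. The counts below are hypothetical, chosen only to give an OR near 4.58; they are not the study's data:

```python
import math

# Odds ratio and 95% CI from a 2x2 table (exposed/unexposed vs
# nonunion/union), via the log-OR normal approximation.
# The counts are hypothetical illustrations.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b,c,d: exposed-event, exposed-no-event, unexposed-event, unexposed-no-event."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 15, 8, 55)
```

The wide interval around a strong point estimate, as in the abstract, is typical of small cell counts: the CI width is driven by the smallest cells.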

  14. A comparison of two follow-up analyses after multiple analysis of variance, analysis of variance, and descriptive discriminant analysis: A case study of the program effects on education-abroad programs

    Science.gov (United States)

    Alvin H. Yu; Garry. Chick

    2010-01-01

    This study compared the utility of two different post-hoc tests after detecting significant differences within factors on multiple dependent variables using multivariate analysis of variance (MANOVA). We compared the univariate F test (the Scheffé method) to descriptive discriminant analysis (DDA) using an educational-tour survey of university study-...

  15. Analysis of chaotic and noise processes in a fluctuating blood flow using the Allan variance technique.

    Science.gov (United States)

    Basarab, M A; Basarab, D A; Konnova, N S; Matsievskiy, D D; Matveev, V A

    2016-01-01

The aim of this work was to develop a novel technique for digital processing of Doppler ultrasound blood flow sensor data from noisy blood flow velocity waveforms. To evaluate fluctuating blood flow parameters, various nonlinear dynamics methods and algorithms are often used. Here, for the identification of chaotic and noise components in a fluctuating coronary blood flow, the Allan variance technique was used for the first time. Analysis of different types of noise (white, Brownian, flicker) was carried out and their strong correlation with the fractality of the time series (the Hurst exponent) was revealed. Based on specialized software realizing the developed technique, numerical experiments with real clinical data were carried out. Recommendations for the identification of noisy patterns of coronary blood flow in normal and pathological states were developed. The methodology makes possible a more detailed quantitative and qualitative analysis of noisy fluctuating blood flow data.

  16. A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists

    Directory of Open Access Journals (Sweden)

    Russell T. Warne

    2014-11-01

Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not properly understood, as demonstrated by the fact that few of the MANOVAs published in the scientific literature were accompanied by the correct post hoc procedure, descriptive discriminant analysis (DDA). The purpose of this article is to explain the theory behind and meaning of MANOVA and DDA. I also provide an example of a simple MANOVA with real mental health data from 4,384 adolescents to show how to interpret MANOVA results.

  17. VarMixt: efficient variance modelling for the differential analysis of replicated gene expression data.

    Science.gov (United States)

    Delmar, Paul; Robin, Stéphane; Daudin, Jean Jacques

    2005-02-15

Identifying differentially regulated genes in experiments comparing two experimental conditions is often a key step in the microarray data analysis process. Many different approaches and methodological developments have been put forward, yet the question remains open. VarMixt is a powerful and efficient novel methodology for this task. It is based on a flexible and realistic variance modelling strategy. It compares favourably with other popular techniques (standard t-test, SAM and Cyber-T). The relevance of the approach is demonstrated with real-world and simulated datasets. The analysis strategy was successfully applied to both a 'two-colour' cDNA microarray and an Affymetrix GeneChip. Strong control of false positive and false negative rates is proven in large simulation studies. The R package is freely available at http://www.inapg.inra.fr/ens_rech/mathinfo/recherche/mathematique/outil.html (contact: delmar@inapg.inra.fr).

  18. A general maximum likelihood analysis of variance components in generalized linear models.

    Science.gov (United States)

    Aitkin, M

    1999-03-01

    This paper describes an EM algorithm for nonparametric maximum likelihood (ML) estimation in generalized linear models with variance component structure. The algorithm provides an alternative analysis to approximate MQL and PQL analyses (McGilchrist and Aisbett, 1991, Biometrical Journal 33, 131-141; Breslow and Clayton, 1993; Journal of the American Statistical Association 88, 9-25; McGilchrist, 1994, Journal of the Royal Statistical Society, Series B 56, 61-69; Goldstein, 1995, Multilevel Statistical Models) and to GEE analyses (Liang and Zeger, 1986, Biometrika 73, 13-22). The algorithm, first given by Hinde and Wood (1987, in Longitudinal Data Analysis, 110-126), is a generalization of that for random effect models for overdispersion in generalized linear models, described in Aitkin (1996, Statistics and Computing 6, 251-262). The algorithm is initially derived as a form of Gaussian quadrature assuming a normal mixing distribution, but with only slight variation it can be used for a completely unknown mixing distribution, giving a straightforward method for the fully nonparametric ML estimation of this distribution. This is of value because the ML estimates of the GLM parameters can be sensitive to the specification of a parametric form for the mixing distribution. The nonparametric analysis can be extended straightforwardly to general random parameter models, with full NPML estimation of the joint distribution of the random parameters. This can produce substantial computational saving compared with full numerical integration over a specified parametric distribution for the random parameters. A simple method is described for obtaining correct standard errors for parameter estimates when using the EM algorithm. Several examples are discussed involving simple variance component and longitudinal models, and small-area estimation.

  19. Measuring self-rated productivity: factor structure and variance component analysis of the Health and Work Questionnaire.

    Science.gov (United States)

    von Thiele Schwarz, Ulrica; Sjöberg, Anders; Hasson, Henna; Tafvelin, Susanne

    2014-12-01

To test the factor structure and variance components of the productivity subscales of the Health and Work Questionnaire (HWQ). A total of 272 individuals from one company completed the HWQ, which includes three dimensions (efficiency, quality, and quantity) that each respondent rated from three perspectives: their own, their supervisor's, and their coworkers'. A confirmatory factor analysis was performed, and common and unique variance components were evaluated. A common factor explained 81% of the variance (reliability 0.95). All dimensions and rater perspectives contributed unique variance. The final model provided a perfect fit to the data. Efficiency, quality, and quantity and the three rater perspectives are valid parts of the self-rated productivity measurement model, but with a large common factor. Thus, the HWQ can be analyzed either as one factor or by extracting the unique variance for each subdimension.

  20. A Budget Analysis of the Variances of Temperature and Moisture in Precipitating Shallow Cumulus Convection

    Science.gov (United States)

    Schemann, Vera; Seifert, Axel

    2017-06-01

    Large-eddy simulations of an evolving cloud field are used to investigate the contribution of microphysical processes to the evolution of the variance of total water and liquid water potential temperature in the boundary layer. While the first hours of such simulations show a transient behaviour and have to be analyzed with caution, the final portion of the simulation provides a quasi-equilibrium situation. This allows investigation of the budgets of the variances of total water and liquid water potential temperature and quantification of the contribution of several source and sink terms. Accretion is found to act as a strong sink for the variances, while the contributions from the processes of evaporation and autoconversion are small. A simple parametrization for the sink term connected to accretion is suggested and tested with a different set of simulations.

  1. Self-validated Variance-based Methods for Sensitivity Analysis of Model Outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2009-04-20

Global sensitivity analysis (GSA) has the advantage over local sensitivity analysis in that GSA does not require strong model assumptions such as linearity or monotonicity. As a result, GSA methods such as those based on variance decomposition are well-suited to multi-physics models, which are often plagued by large nonlinearities. However, as with many other sampling-based methods, inadequate sample size can badly pollute the result accuracies. A natural remedy is to adaptively increase the sample size until sufficient accuracy is obtained. This paper proposes an iterative methodology comprising mechanisms for guiding sample size selection and self-assessing result accuracy. The elegant features of the proposed methodology are the adaptive refinement strategies for stratified designs. We first apply this iterative methodology to the design of a self-validated first-order sensitivity analysis algorithm. We also extend this methodology to design a self-validated second-order sensitivity analysis algorithm based on refining replicated orthogonal array designs. Several numerical experiments are given to demonstrate the effectiveness of these methods.

  2. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    Noack, K.

    1982-01-01

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.

  3. Analysis of the effectiveness of the variance and Downside Risk measures for formation of investment portfolios

    Directory of Open Access Journals (Sweden)

    Mariúcha Nóbrega Bezerra

    2016-09-01

    Full Text Available This paper analyzes the efficacy of the variance and of downside-risk measures for the formation of investment portfolios in the Brazilian stock market. Using the methodologies of Ang (1975), Markowitz et al. (1993), Ballestero (2005), Estrada (2008) and Cumova and Nawrocki (2011), we sought the best method for solving the problem of the asymmetric and endogenous matrix and, inspired by the work of Markowitz (1952) and Lohre, Neumann and Winterfeldt (2010), examined which risk metric is most suitable for a more efficient allocation of resources in the Brazilian stock market. The sample was composed of the stocks of the IBrX 50, from 2000 to 2013. The results indicated that when semivariance is used as the measure of asymmetric risk, and the investor can use more refined models for solving the problem of the asymmetric semivariance-cosemivariance matrix, the model of Cumova and Nawrocki (2011) is more effective. Furthermore, on the Brazilian data, VaR proved more effective than the variance and the other downside-risk measures with respect to minimizing the risk of loss. Thus, under the assumption that the investor has asymmetric preferences regarding risk, forming stock portfolios in the Brazilian market is more efficient when using criteria that minimize downside risk rather than the traditional mean-variance approach.
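
    The downside-risk alternative to the variance on which several of the cited methodologies build is the target semivariance, which penalises only below-target returns. A minimal sketch (the return series is invented for illustration):

```python
import statistics

def semivariance(returns, target=0.0):
    """Downside semivariance: mean squared shortfall below a target return.
    Only below-target observations contribute, unlike the ordinary variance."""
    downside = [min(r - target, 0.0) ** 2 for r in returns]
    return sum(downside) / len(downside)

returns = [0.05, -0.02, 0.03, -0.04, 0.01]
var = statistics.pvariance(returns)   # symmetric: gains and losses both count
semi = semivariance(returns)          # penalises only the two losing periods
```

Because gains never contribute, the semivariance of this series is strictly smaller than its variance, which is why portfolios optimised on it can differ from mean-variance portfolios.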

  4. Performance of selected imputation techniques for missing variances in meta-analysis

    Science.gov (United States)

    Idris, N. R. N.; Abdullah, M. H.; Tolos, S. M.

    2013-04-01

    A common method of handling the problem of missing variances in meta-analysis of continuous responses is imputation. However, the performance of imputation techniques may be influenced by the type of model utilised. In this article, we examine through a simulation study the effects of the technique used to impute the missing SDs and of the type of model on the overall meta-analysis estimates. The results suggest that imputation should be adopted to estimate the overall effect size, irrespective of the model used. However, the accuracy of the estimates of the corresponding standard error (SE) is influenced by the imputation technique. For estimates based on the fixed-effects model, mean imputation provides better estimates than multiple imputation, while estimates based on the random-effects model respond more robustly to the type of imputation technique. The results also showed that although imputation is good at reducing the bias in point estimates, it is more likely to produce coverage probabilities higher than the nominal value.
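
    Mean imputation of missing SDs, one of the techniques examined, can be sketched for fixed-effect inverse-variance pooling as follows. The effect sizes, SDs, and sample sizes below are hypothetical, and the SE of each study is taken as SD/sqrt(n):

```python
import statistics

def pool_fixed_effect(effects, sds, ns):
    """Fixed-effect inverse-variance pooling; missing SDs (None) are
    mean-imputed from the observed SDs, one simple strategy for the
    missing-variance problem."""
    observed = [s for s in sds if s is not None]
    fill = statistics.fmean(observed)
    weights = []
    for s, n in zip(sds, ns):
        s = fill if s is None else s
        weights.append(n / s ** 2)        # weight = 1 / SE^2 = n / SD^2
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / total
    se = (1.0 / total) ** 0.5
    return est, se

effects = [0.30, 0.10, 0.25]
sds = [1.0, None, 1.2]                    # the second study reported no SD
ns = [50, 40, 60]
est, se = pool_fixed_effect(effects, sds, ns)
```

Dropping the study with the missing SD instead of imputing it would both shift the point estimate and inflate the pooled SE, which is the trade-off the abstract's simulations quantify.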

  5. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

    Energy Technology Data Exchange (ETDEWEB)

    Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

    2012-01-01

    Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.

  6. CAIXA: a catalogue of AGN in the XMM-Newton archive. III. Excess variance analysis

    NARCIS (Netherlands)

    Ponti, G.; Papadakis, I.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.

    2012-01-01

    Context. We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study

  7. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data

    DEFF Research Database (Denmark)

    Greve, Douglas N; Svarer, Claus; Fisher, Patrick M

    2014-01-01

    estimates. Volume-based smoothing resulted in large bias and intersubject variance because it smears signal across tissue types. In some cases, PVC with volume smoothing paradoxically caused the estimated BPND to be less than when no PVC was used at all. When applied in the absence of PVC, cortical surface...

  8. Inheritance of dermatoglyphic asymmetry and diversity traits in twins based on factor: variance decomposition analysis.

    Science.gov (United States)

    Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene

    2013-06-01

    Dermatoglyphic asymmetry and diversity traits from a large number of twins (MZ and DZ) were analyzed on the basis of principal factors to evaluate genetic effects and common familial environmental influences on twin data, using maximum-likelihood-based variance decomposition analysis. The sample consists of monozygotic (MZ) twins of both sexes (102 male pairs and 138 female pairs) and 120 pairs of dizygotic (DZ) female twins. All asymmetry (DA and FA) and diversity dermatoglyphic traits were clearly separated into factors. These results corroborate earlier studies in different ethnic populations, which indicates that a common biological validity of the underlying component structures of dermatoglyphic characters probably exists. Our heritability results in twins clearly showed that DA_F2 is inherited mostly in a dominant mode (28.0%) while FA_F1 is additive (60.7%), with no significant sex difference observed for these factors. Inheritance is also very prominent in diversity Factor 1, which exactly corroborates our previous findings. The present results are similar to earlier results on finger ridge count diversity in twin data, which suggested that finger ridge count diversity is under genetic control.
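
    The abstract's heritability estimates come from maximum-likelihood variance decomposition; a much cruder classical shortcut, Falconer's formula, illustrates the underlying MZ/DZ logic. The twin correlations below are invented for the example:

```python
def falconer_h2(r_mz, r_dz):
    """Classical Falconer estimate: heritability from the excess of the
    monozygotic over the dizygotic twin correlation, h^2 = 2 * (r_MZ - r_DZ).
    (A simplification of the maximum-likelihood decomposition in the study.)"""
    return 2.0 * (r_mz - r_dz)

h2 = falconer_h2(0.75, 0.45)   # -> ~0.6, i.e. about 60% heritable
```

Because MZ twins share all and DZ twins on average half of their segregating genes, doubling the correlation gap isolates the additive genetic share of the variance under the classical twin-model assumptions.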

  9. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    Science.gov (United States)

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics of geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station-coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
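
    The classical (non-overlapping) Allan variance at averaging factor m can be sketched in a few lines; the weighted and multidimensional modifications reviewed here add per-point weights and vector norms on top of this core. The white-noise series used to exercise it is simulated:

```python
import random

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of a regularly sampled series y at
    averaging factor m: half the mean squared difference of successive
    m-sample block averages."""
    # Average the series over adjacent blocks of length m
    usable = len(y) - len(y) % m
    blocks = [sum(y[i:i + m]) / m for i in range(0, usable, m)]
    diffs = [(blocks[k + 1] - blocks[k]) ** 2 for k in range(len(blocks) - 1)]
    return sum(diffs) / (2 * len(diffs))

# For white noise, AVAR at m=1 equals the ordinary variance in expectation,
# and it falls as 1/m with increasing averaging time
rng = random.Random(1)
y = [rng.gauss(0.0, 1.0) for _ in range(10000)]
avar1 = allan_variance(y, 1)
avar2 = allan_variance(y, 2)
```

The 1/m decay for white noise (versus flat or rising AVAR for flicker and random-walk noise) is exactly the diagnostic that makes AVAR useful for classifying noise in astrogeodetic series.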

  10. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
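
    When the homogeneity-of-variances assumption fails, a standard remedy for pairwise contrasts is Welch's statistic with Satterthwaite degrees of freedom. A sketch with made-up data:

```python
import statistics

def welch_t(x, y):
    """Welch's t statistic and Satterthwaite degrees of freedom: a standard
    remedy when the ANOVA homogeneity-of-variances assumption is untenable."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    se2 = vx / nx + vy / ny
    t = (statistics.fmean(x) - statistics.fmean(y)) / se2 ** 0.5
    # Satterthwaite approximation: effective df shrinks with unequal spreads
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

x = [5.1, 4.9, 5.3, 5.0, 5.2]   # low-variance group
y = [4.0, 4.8, 3.2, 4.4, 3.6]   # high-variance group
t, df = welch_t(x, y)           # df falls well below the pooled nx + ny - 2
```

The reduced df is what drives the sample-size calculations in the abstract: precision planning under heterogeneity cannot rely on the pooled-variance formulas.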

  11. An analysis of variance of the pubertal and midgrowth spurts for length and width.

    Science.gov (United States)

    Sheehy, A; Gasser, T; Molinari, L; Largo, R H

    1999-01-01

    Using data from the first Zurich Longitudinal Growth Study, characteristics of the growth of six variables--bihumeral width, biiliac width, standing height, sitting height, leg height and arm length--are studied. The main interest is in differences between boys and girls and across variables, and in particular in whether there are sex differences that are specific to some variables. For each child and variable, individual velocity and acceleration curves are estimated using a kernel smoother. From these curves, parameters characterizing the midgrowth spurt (MS) and the pubertal spurt (PS) are estimated: timings, durations and intensities. The level of childhood velocity is used to characterize early growth. These parameters are analysed using a repeated measures analysis of variance (ANOVA) to assess the statistical significance of differences between boys and girls and across variables. This necessitates some kind of standardization, and two types of standardization are used here. The MS shows negligible or small differences between boys and girls, and the same is true for velocity in childhood. Differences across variables during the MS are much more pronounced: with respect to intensity, bihumeral width has an MS about six times more intense than height. The PS is later for boys (as is well known), and there are significant differences across variables: bihumeral width and sitting height are late while legs are early. With the exception of biiliac width, the duration of the PS (which has been subdivided into three phases: early, middle and late) is slightly longer for boys for all variables: boys have a longer starting phase, the middle phase is about equal in length for both boys and girls, and girls have a slightly longer late phase. Leg height and height experience a PS of short duration while bihumeral and biiliac width experience a long one, and these differences are highly statistically significant. For all variables, with the exception of biiliac width

  12. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

    Directory of Open Access Journals (Sweden)

    Renard Jean-Paul

    2005-06-01

    Full Text Available Abstract Background Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility to evaluate multiple sources of variation in an experiment. Results AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, and (3) to propose two models for a gene's variance estimation and to perform a correction for multiple comparisons. Conclusion AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.

  13. Longitudinal Analysis of Residual Feed Intake in Mink using Random Regression with Heterogeneous Residual Variance

    DEFF Research Database (Denmark)

    Shirali, Mahmoud; Nielsen, Vivi Hunnicke; Møller, Steen Henrik

    Heritability of residual feed intake (RFI) increased from low to high over the growing period in male and female mink. The lowest heritability for RFI (male: 0.04 ± 0.01 standard deviation (SD); female: 0.05 ± 0.01 SD) was in early growth, and the highest heritability (male: 0.33 ± 0.02 SD; female: 0.34 ± 0.02 SD) was achieved at the late growth stages. The genetic correlation between different growth stages for RFI showed a high association (0.91 to 0.98) between early and late growing periods. However, phenotypic correlations were lower, from 0.29 to 0.50. The residual variances were substantially higher

  14. The benefit of regional diversification of cogeneration investments in Europe: A mean-variance portfolio analysis

    International Nuclear Information System (INIS)

    Westner, Guenther; Madlener, Reinhard

    2010-01-01

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. - Research highlights: →Preconditions for CHP investments differ significantly between the EU member states. →Regional diversification of CHP investments can reduce the total portfolio risk. →Risk reduction depends on the chosen CHP technology.
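
    The core MVP calculation behind such return-risk profiles can be illustrated with the two-asset minimum-variance portfolio. The variances and covariance below are hypothetical stand-ins, not the paper's CHP estimates:

```python
def min_variance_weight(var1, var2, cov):
    """Two-asset minimum-variance portfolio: the weight w on asset 1 that
    minimises w^2*var1 + (1-w)^2*var2 + 2*w*(1-w)*cov."""
    return (var2 - cov) / (var1 + var2 - 2.0 * cov)

# Hypothetical return variances for two technologies in different countries,
# with a low covariance between them
var1, var2, cov = 0.04, 0.09, 0.01
w = min_variance_weight(var1, var2, cov)
port_var = w**2 * var1 + (1 - w)**2 * var2 + 2 * w * (1 - w) * cov
```

Whenever the covariance is below both individual variances, the optimal mix has lower variance than either asset alone, which is the diversification effect the paper finds for engine-CHP but not for CCGT-CHP portfolios.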

  15. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
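
    The variance-components idea used in the study can be illustrated with a one-way random-effects (method-of-moments) decomposition on a balanced toy dataset, splitting total variance into a between-group (e.g. firm) and a within-group component:

```python
import statistics

def variance_components(groups):
    """One-way random-effects decomposition (method of moments): splits total
    variance into between-group and within-group components for a balanced
    design, the same idea used to attribute growth-rate variance to levels."""
    k = len(groups)
    n = len(groups[0])                  # balanced design assumed
    grand = statistics.fmean([y for g in groups for y in g])
    msb = n * sum((statistics.fmean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(statistics.variance(g) for g in groups) / k
    between = max((msb - msw) / n, 0.0)
    return between, msw

# Three hypothetical "firms", three observations each: group means dominate
groups = [[10.0, 11.0, 9.0], [20.0, 21.0, 19.0], [30.0, 29.0, 31.0]]
between, within = variance_components(groups)
share = between / (between + within)    # firm-level share of total variance
```

The reported "more than 40% of total variance" for firm effects is exactly this kind of share, computed over a nested country/industry/firm/year structure rather than a single grouping factor.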

  16. Detecting and accounting for multiple sources of positional variance in peak list registration analysis and spin system grouping.

    Science.gov (United States)

    Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B

    2017-08-01

    Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e. spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low and high quality experimental solution NMR and solid-state NMR peak lists to assess performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration and uniform match tolerances approach is only able to recover from 50 to 80% of the spin systems due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations. To facilitate evaluation of the

  17. Comparative analysis of mathematical models of the ship from the standpoint of controllability of the variances

    Directory of Open Access Journals (Sweden)

    Pashentsev S. V.

    2017-12-01

    Full Text Available The paper addresses the choice of a mathematical model of the ship to be used in subsequent studies of steering the vessel by the deviations of two spaced points of the ship's diametric plane from certain lines called aiming lines. Two structurally different mathematical models of a tanker are considered, differing in the type and set of differential equations that describe them; both have been identified parametrically, i.e. the coefficients of the model equations have been estimated. To assess the adequacy of the models, they were tested on the standard "Zigzag" maneuver, with a comparative analysis of the results against full-scale trials. This maneuver was chosen because the ship's characteristic motions during it are close to those that occur when the vessel is steered by deviation. Further research was carried out by steering by deviations relative to a set of aiming lines. A control-quality index of quadratic form was introduced to assess the control effectiveness of each model. For this case, a mathematical model of the "speed - drift angle - angular speed of rotation" type, proposed by Japanese engineers, is recommended for studying complex deviation-based control of the Project 214 tanker, since it gave conservative estimates of control quality. This will allow subsequent work on the subject to use this mathematical model to obtain results and make decisions that entail fewer managerial risks.

  18. The consequence of ignoring a nested factor on measures of effect size in analysis of variance.

    Science.gov (United States)

    Wampold, B E; Serlin, R C

    2000-12-01

    Although the consequences of ignoring a nested factor on decisions to reject the null hypothesis of no treatment effects have been discussed in the literature, typically researchers in applied psychology and education ignore treatment providers (often a nested factor) when comparing the efficacy of treatments. The incorrect analysis, however, not only invalidates tests of hypotheses, but it also overestimates the treatment effect. Formulas were derived and a Monte Carlo study was conducted to estimate the degree to which the F statistic and treatment effect size measures are inflated by ignoring the effects due to providers of treatments. These untoward effects are illustrated with examples from psychotherapeutic treatments.

  19. Analysis of covariance with pre-treatment measurements in randomized trials under the cases that covariances and post-treatment variances differ between groups.

    Science.gov (United States)

    Funatogawa, Takashi; Funatogawa, Ikuko; Shyr, Yu

    2011-05-01

    When primary endpoints of randomized trials are continuous variables, the analysis of covariance (ANCOVA) with pre-treatment measurements as a covariate is often used to compare two treatment groups. In the ANCOVA, equal slopes (coefficients of pre-treatment measurements) and equal residual variances are commonly assumed. However, random allocation guarantees only equal variances of pre-treatment measurements. Unequal covariances and variances of post-treatment measurements indicate unequal slopes and, usually, unequal residual variances. For non-normal data with unequal covariances and variances of post-treatment measurements, it is known that the ANCOVA with equal slopes and equal variances using an ordinary least-squares method provides an asymptotically normal estimator for the treatment effect. However, the asymptotic variance of the estimator differs from the variance estimated from a standard formula, and its property is unclear. Furthermore, the asymptotic properties of the ANCOVA with equal slopes and unequal variances using a generalized least-squares method are unclear. In this paper, we consider non-normal data with unequal covariances and variances of post-treatment measurements, and examine the asymptotic properties of the ANCOVA with equal slopes using the variance estimated from a standard formula. Analytically, we show that the actual type I error rate, thus the coverage, of the ANCOVA with equal variances is asymptotically at a nominal level under equal sample sizes. That of the ANCOVA with unequal variances using a generalized least-squares method is asymptotically at a nominal level, even under unequal sample sizes. In conclusion, the ANCOVA with equal slopes can be asymptotically justified under random allocation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    Science.gov (United States)

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
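
    An MRPP for univariate data can be sketched as follows: the statistic is the group-size-weighted mean within-group pairwise distance, and significance comes from permuting group labels (here by Monte-Carlo resampling rather than the exact or moment-based approximations; the data are invented):

```python
import itertools
import random

def mrpp_delta(groups):
    """MRPP statistic: group-size-weighted mean of the average within-group
    pairwise distances (smaller delta = tighter, better-separated groups)."""
    total = sum(len(g) for g in groups)
    delta = 0.0
    for g in groups:
        dists = [abs(a - b) for a, b in itertools.combinations(g, 2)]
        delta += (len(g) / total) * (sum(dists) / len(dists))
    return delta

def mrpp_pvalue(groups, n_perm=2000, seed=0):
    """Monte-Carlo permutation p-value: the share of random relabellings
    whose delta is at least as small as the observed one."""
    rng = random.Random(seed)
    pooled = [y for g in groups for y in g]
    sizes = [len(g) for g in groups]
    observed = mrpp_delta(groups)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm, start = [], 0
        for s in sizes:
            perm.append(pooled[start:start + s])
            start += s
        if mrpp_delta(perm) <= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)

# Two clearly separated groups: the observed delta should be extreme
groups = [[1.0, 1.2, 0.9, 1.1], [3.0, 3.3, 2.8, 3.1]]
delta, p = mrpp_pvalue(groups)
```

Note that no normality or equal-variance assumption is needed: the null distribution is generated directly by relabelling, which is the appeal of the MRPP over parametric MANOVA-style tests.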

  1. Efficiency control in large-scale genotyping using analysis of variance

    NARCIS (Netherlands)

    Spijker, GT; Bruinenberg, M; Meerman, GJT

    The efficiency of the genotyping process is determined by many simultaneous factors. In actual genotyping, a production run is often preceded by small-scale experiments to find optimal conditions. We propose to use statistical analysis of production run data as well, to gain insight into factors

  2. Statistical Approaches in Analysis of Variance: from Random Arrangements to Latin Square Experimental Design

    OpenAIRE

    Radu E. SESTRAŞ; Lorentz JÄNTSCHI; Sorana D. BOLBOACĂ

    2009-01-01

    Background: The choices of experimental design as well as of statistical analysis are of huge importance in field experiments. These need to be made correctly in order to obtain the best possible precision of the results. The random arrangements, randomized blocks and Latin square designs were reviewed and analyzed from the statistical perspective of error analysis. Material and Method: Random arrangements, randomized block and Latin square experimental designs were used as field experiments. ...

  3. Analysis of the three-dimensional anatomical variance of the distal radius using 3D shape models.

    Science.gov (United States)

    Baumbach, Sebastian F; Binder, Jakob; Synek, Alexander; Mück, Fabian G; Chevalier, Yan; Euler, Ekkehard; Langs, Georg; Fischer, Lukas

    2017-03-09

    Various medical fields rely on detailed anatomical knowledge of the distal radius. Current studies are limited to two-dimensional analysis and biased by varying measurement locations. The aims were to 1) generate 3D shape models of the distal radius and investigate variations in the 3D shape, 2) generate and assess morphometrics in standardized cut planes, and 3) test the model's classification accuracy. The local radiographic database was screened for CT-scans of intact radii. 1) The data sets were segmented and 3D surface models generated. Statistical 3D shape models were computed (overall, gender and side separate) and the 3D shape variation assessed by evaluating the number of modes. 2) Anatomical landmarks were assigned and used to define three standardized cross-sectional cut planes perpendicular to the main axis. Cut planes were generated for the mean shape models and each individual radius. For each cut plane, the following morphometric parameters were calculated and compared: maximum width and depth, perimeter and area. 3) The overall shape model was utilized to evaluate the predictive value (leave one out cross validation) for gender and side identification within the study population. Eighty-six radii (45 left, 44% female, 40 ± 18 years) were included. 1) Overall, side and gender specific statistical 3D models were successfully generated. The first mode explained 37% of the overall variance. Left radii had a higher shape variance (number of modes: 20 female / 23 male) compared to right radii (number of modes: 6 female / 6 male). 2) Standardized cut planes could be defined using anatomical landmarks. All morphometric parameters decreased from distal to proximal. Male radii were larger than female radii with no significant side difference. 3) The overall shape model had a combined median classification probability for side and gender of 80%. Statistical 3D shape models of the distal radius can be generated using clinical CT-data sets. These models

  4. Data analysis and approximate models model choice, location-scale, analysis of variance, nonparametric regression and image analysis

    CERN Document Server

    Davies, Patrick Laurie

    2014-01-01

    Contents: Introduction (Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis). Approximation, Randomness, Chaos, Determinism (Approximation; A Concept of Approximation; Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness). Discrete Models (The Empirical Density; Metrics and Discrepancies; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data). Outliers (Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data). The Location...

  5. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares

    Energy Technology Data Exchange (ETDEWEB)

    Boccard, Julien, E-mail: julien.boccard@unige.ch; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the reliability of the ANOVA decomposition; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect; and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. - Highlights: • A new method is proposed for the analysis of Omics data generated using design of

  6. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    Science.gov (United States)

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. VARIANCE ANALYSIS OF WOOL WOVEN FABRICS TENSILE STRENGTH USING ANCOVA MODEL

    Directory of Open Access Journals (Sweden)

    VÎLCU Adrian

    2014-05-01

    Full Text Available The paper presents a study of the variation in tensile strength of four woven fabrics made from wool-type yarns, as a function of fibre composition, warp and weft yarn tensile strength and technological density, using an ANCOVA regression model. In instances where surveyed groups are known to respond differently, analysis of covariance (ANCOVA) can be employed to address those differences. ANCOVA shows the correlation between a dependent variable and the covariate independent variables and removes from the dependent variable the variability that can be accounted for by the covariates. The independent and dependent variable structures for multiple regression, factorial ANOVA and ANCOVA tests are similar. ANCOVA is differentiated from the other two in that it is used when the researcher wants to neutralize the effect of a continuous independent variable in the experiment. The researcher may simply not be interested in the effect of a given independent variable when performing a study. Another situation where ANCOVA should be applied is when an independent variable has a strong correlation with the dependent variable but does not interact with other independent variables in predicting the dependent variable's value. ANCOVA is then used to neutralize the effect of the more powerful, non-interacting variable. Without this intervention measure, the effects of the interacting independent variables can be clouded.
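As an illustration of the ANCOVA mechanism this record describes (covariate adjustment before testing group effects), a minimal numpy sketch on simulated data; the groups, covariate and effect sizes are hypothetical and not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical data: tensile strength (y) for 3 fabric groups,
# with yarn tensile strength (x) as the continuous covariate
g = np.repeat([0, 1, 2], 20)                         # group labels
x = rng.normal(50.0, 5.0, 60)                        # covariate
y = 2.0 * x + np.array([0.0, 3.0, 6.0])[g] + rng.normal(0.0, 2.0, 60)

def rss(X, y):
    """Residual sum of squares of a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

G = np.eye(3)[g]                                     # group dummy columns
rss_full = rss(np.column_stack([G, x]), y)           # groups + covariate
rss_red = rss(np.column_stack([np.ones(60), x]), y)  # covariate only
df1, df2 = 2, 60 - 4                                 # extra params, residual df
F = ((rss_red - rss_full) / df1) / (rss_full / df2)
print(F > 1.0)                                       # group effect remains after adjustment
```

The F-ratio compares the fit with and without group effects, after the covariate has absorbed its share of the variability, which is exactly the "neutralizing" role of the covariate described above.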

  8. [Spectrum Variance Analysis of Tree Leaves Under the Condition of Different Leaf water Content].

    Science.gov (United States)

    Wu, Jian; Chen, Tai-sheng; Pan, Li-xin

    2015-07-01

    Leaf water content is an important factor affecting tree spectral characteristics. Exploring how the leaf spectra of a given tree species change with leaf water content, and how the leaf spectra of different tree species differ at the same leaf water content, is therefore key to identifying vegetation by hyperspectral remote sensing; it also provides theoretical support for research on spectral changes caused by differences in leaf water content. A spectrometer was used to observe the leaves of six tree species, and reflectance and first-order differential spectra were obtained at different leaf water contents. The spectral characteristics of each species' leaves were then analysed across the range of leaf water contents, and the spectra of different species' leaves were compared at the same leaf water content, to explore possible bands for identifying leaf water content by hyperspectral remote sensing. Results show that the spectra of each species' leaves change considerably with leaf water content, but the patterns of change differ among species. At the same leaf water content, the leaf spectra of different tree species differ markedly in some wavelength ranges, which provides some possibility for high-precision identification of tree species.

  9. Budget variance analysis of a departmentwide implementation of a PACS at a major academic medical center.

    Science.gov (United States)

    Reddy, Arra Suresh; Loh, Shaun; Kane, Robert A

    2006-01-01

    In this study, the costs and cost savings associated with departmentwide implementation of a picture archiving and communication system (PACS) as compared to the projected budget at the time of inception were evaluated. An average of $214,460 was saved each year with a total savings of $1,072,300 from 1999 to 2003, which is significantly less than the $2,943,750 projected savings. This discrepancy can be attributed to four different factors: (1) overexpenditures, (2) insufficient cost savings, (3) unanticipated costs, and (4) project management issues. Although the implementation of PACS leads to cost savings, actual savings will be much lower than expected unless extraordinary care is taken when devising the budget.

  10. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

    2008-06-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler.

  11. Evaluation of Ensiled Brewer's Grain in the Diet of Piglets by One Way Multiple Analysis of Variance, MANOVA

    Directory of Open Access Journals (Sweden)

    Amang A Mbang, J.

    2007-01-01

    Full Text Available The basic purpose of feeding trials is to find the optimum level of feed ingredients that gives the highest economic return to farmers. This can be achieved through estimation and comparison of the means of different rations. Our example is a study of the incorporation of different levels of ensiled brewer's grains in the diet of 24 hybrid weaned piglets (Landrace x Duroc x Berkshire x Large White). They were randomly divided into four groups with three replicates of two piglets per pen. They were fed 0, 10, 20 and 30% ensiled brewer's grains on a dry matter basis during the post-weaning period, followed by 0, 30, 40 and 50% during the growing period and 0, 50, 60 and 70% during the finishing period. We have one explanatory variable, initial weight, and four post-treatment outcome variables recorded per piglet: final weight, dry matter consumption, weight gain and index of consumption. A design for comparing several multivariate treatment means is adopted. We obtain the MANOVA (Multivariate Analysis of Variance) table for each phase, test for treatment differences using Wilks' lambda distribution, and identify the treatment effects using a MANOVA confidence interval method. This model has the advantage of analysing the responses of all variables jointly through the matrix of sums of squares and, more precisely, of separating the effects of the different percentages of ensiled brewer's grains.
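As a sketch of the Wilks' lambda test used in this record, here is a minimal numpy computation of the within- and between-group SSCP matrices on simulated two-variable data (not the piglet data; group count, sizes and effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 4, 6                                        # 4 diets, 6 animals each (hypothetical)
shift = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0], [3.0, 1.5]])
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)) + shift[i] for i in range(k)])
groups = np.repeat(np.arange(k), n)

grand = X.mean(axis=0)
E = np.zeros((2, 2))                               # within-group (error) SSCP
H = np.zeros((2, 2))                               # between-group (hypothesis) SSCP
for i in range(k):
    Xi = X[groups == i]
    mi = Xi.mean(axis=0)
    E += (Xi - mi).T @ (Xi - mi)
    H += n * np.outer(mi - grand, mi - grand)

wilks = np.linalg.det(E) / np.linalg.det(E + H)    # near 0 => strong treatment effect
print(0.0 < wilks < 1.0)
```

Small values of lambda indicate that the between-group matrix H dominates the error matrix E, i.e. the multivariate treatment means differ.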

  12. Contrasting regional architectures of schizophrenia and other complex diseases using fast variance components analysis

    DEFF Research Database (Denmark)

    Loh, Po-Ru; Bhatia, Gaurav; Gusev, Alexander

    2015-01-01

    Heritability analyses of genome-wide association study (GWAS) cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here we analyze the genetic architectures of schizophrenia in 49,806 samples from the PGC...... and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1-Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions...... and in higher-frequency SNPs for both schizophrenia and GERA diseases. In bivariate analyses, we observe significant genetic correlations (ranging from 0.18 to 0.85) for several pairs of GERA diseases; genetic correlations were on average 1.3 times stronger than the correlations of overall disease liabilities...

  13. A variance analysis of the capacity displaced by wind energy in Europe

    DEFF Research Database (Denmark)

    Giebel, Gregor

    2007-01-01

    Wind energy generation distributed all over Europe is less variable than generation from a single region. To analyse the benefits of distributed generation, the whole electrical generation system of Europe has been modelled including varying penetrations of wind power. The model is chronologically...... simulating the scheduling of the European power plants to cover the demand at every hour of the year. The wind power generation was modelled using wind speed measurements from 60 meteorological stations, for 1 year. The distributed wind power also displaces fossil-fuelled capacity. However, every assessment...... where the pump storage plants are used more aggressively and the other where all German nuclear plants are shut off. NCEP/NCAR reanalysis data have been used to recreate the same averaged time series from a data set spanning 34 years. Through this it is possible to set the year studied in detail...

  14. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  15. Combining analysis of variance and three‐way factor analysis methods for studying additive and multiplicative effects in sensory panel data

    DEFF Research Database (Denmark)

    Romano, Rosaria; Næs, Tormod; Brockhoff, Per Bruun

    2015-01-01

    Data from descriptive sensory analysis are essentially three‐way data with assessors, samples and attributes as the three ways in the data set. Because of this, there are several ways that the data can be analysed. The paper focuses on the analysis of sensory characteristics of products while...... in the use of the scale with reference to the existing structure of relationships between sensory descriptors. The multivariate assessor model will be tested on a data set from milk. Relations between the proposed model and other multiplicative models like parallel factor analysis and analysis of variance......

  16. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
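The validation step this record describes, fitting a weighted linear regression between within-year genetic variance estimates and birth year, can be sketched on synthetic data. All numbers below (years, trend, prediction error variances) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(2000, 2015)
# hypothetical within-year genetic variance estimates with a 2%/year trend
true_var = 1.02 ** (years - years[0])
pev = rng.uniform(0.001, 0.005, years.size)        # prediction error variances
est = true_var + rng.normal(0.0, np.sqrt(pev))     # noisy estimates

w = 1.0 / pev                                      # weights: inverse PEV
X = np.column_stack([np.ones(years.size), years - years.mean()])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ est) # weighted least squares
print(beta[1] > 0.0)                               # positive trend detected
```

A slope significantly different from zero would flag heterogeneity of genetic variance over time; the paper additionally applies outlier tests and an empirical confidence interval, which are omitted here.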

  17. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean-variance approach

    DEFF Research Database (Denmark)

    Kitzing, Lena

    2014-01-01

    Different support instruments for renewable energy expose investors differently to market risks. This has implications on the attractiveness of investment. We use mean-variance portfolio analysis to identify the risk implications of two support instruments: feed-in tariffs and feed-in premiums....... Using cash flow analysis, Monte Carlo simulations and mean-variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feedin tariffs systematically require lower direct support levels than feed-in premiums while providing the same...

  18. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of model output when the distribution range of one input is reduced and to measure the contribution of different distribution ranges of each input to the variance of model output. In this paper, the revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that compared with the classical variance ratio function, the revised one is more suitable to the evaluation of model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which requires only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones by using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
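The core quantity behind such ratio functions, the output variance after one input's range is reduced, divided by the full output variance, can be estimated from a single Monte Carlo sample, as the record notes. A toy sketch with an invented additive model (not the Ishigami function or the 10-bar structure):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x1, x2, x3):
    return x1 + 2.0 * x2 + 0.5 * x3                # toy additive model

N = 100_000
X = rng.uniform(-1.0, 1.0, (N, 3))
y = model(*X.T)
v_full = y.var()

# output variance when x2 is restricted to the middle half of its range,
# estimated by reusing the same sample (no new model runs needed)
mask = np.abs(X[:, 1]) < 0.5
v_reduced = y[mask].var()
ratio = v_reduced / v_full                         # analytic value is 0.75/1.75 ≈ 0.43
print(0.40 < ratio < 0.46)
```

Because x2 carries the largest coefficient, restricting its range removes the most output variance, which is what a small ratio signals.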

  19. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
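Extended-FAST itself is involved, but the variance-based indices it estimates can be illustrated with a simpler pick-freeze Monte Carlo estimator of first-order Sobol' indices. The model and sample sizes below are invented for illustration and unrelated to the ASM2d application:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
A = rng.uniform(0.0, 1.0, (N, 3))
B = rng.uniform(0.0, 1.0, (N, 3))

def f(x):
    # toy non-additive model: x3 acts only through an interaction with x1
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

yA, yB = f(A), f(B)
var_y = np.concatenate([yA, yB]).var()

S = []
for i in range(3):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                             # freeze input i, resample the rest
    yCi = f(Ci)
    S.append((np.mean(yA * yCi) - yA.mean() * yB.mean()) / var_y)
print(S[1] > S[0] > S[2])                          # x2 dominant, x3 weakest alone
```

The gap between first-order and total-order indices (not computed here) is what quantifies the interaction effects the record emphasises for MBR systems.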

  20. Variances in family carers' quality of life based on selected relationship and caregiving indicators: A quantitative secondary analysis.

    Science.gov (United States)

    Naef, Rahel; Hediger, Hannele; Imhof, Lorenz; Mahrer-Imhof, Romy

    2017-06-01

    To determine subgroups of family carers based on family relational and caregiving variables and to explore group differences in relation to selected carer outcomes. Family caregiving in later life holds a myriad of positive and negative outcomes for family members' well-being. However, factors that constitute family carers' experience and explain variances are less well understood. A secondary data analysis using cross-sectional data from a controlled randomised trial with community-dwelling people 80 years or older and their families. A total of 277 paired data sets of older persons and their family carers were included into the analysis. Data were collected via mailed questionnaires and a geriatric nursing assessment. A two-step cluster analysis was performed to determine subgroups. To discern group differences, appropriate tests for differences with Bonferroni correction were used. Two family carer groups were identified. The low-intensity caregiver group (57% of carers) reported high relationship quality and self-perceived ease of caregiving. In contrast, the high-intensity caregiver group (43% of carers) experienced significantly lower relationship quality, felt less prepared and appraised caregiving as more difficult, time intensive and burdensome. The latter cared for older, frailer and more dependent octogenarians and had significantly lower levels of quality of life and self-perceived health compared to the low-intensity caregiver group. A combination of family relational and caregiving variables differentiates those at risk for adverse outcomes. Family carers of frailer older people tend to experience higher strain, lower relationship quality and ability to work together as a family. Nurses should explicitly assess family carer needs, in particular when older persons are frail. Family carer support interventions should address caregiving preparedness, demand and burden, as well as concerns situated in the relationship. © 2016 John Wiley & Sons Ltd.

  1. 78 FR 14122 - Revocation of Permanent Variances

    Science.gov (United States)

    2013-03-04

    ... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is revoking twenty-four (24) obsolete variances...

  2. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained...

  3. Power generation mixes evaluation applying the mean-variance theory. Analysis of the choices for Japanese energy policy

    International Nuclear Information System (INIS)

    Tabaru, Yasuhiko; Nonaka, Yuzuru; Nonaka, Shunsuke; Endou, Misao

    2013-01-01

    Optimal Japanese power generation mixes in 2030, for both economic efficiency and energy security (less cost variance risk), are evaluated by applying the mean-variance portfolio theory. Technical assumptions, including remaining generation capacity out of the present generation mix, future load duration curve, and Research and Development risks for some renewable energy technologies in 2030, are taken into consideration as either the constraints or parameters for the evaluation. Efficiency frontiers, which consist of the optimal generation mixes for several future scenarios, are identified, taking not only power balance but also capacity balance into account, and are compared with three power generation mixes submitted by the Japanese government as 'the choices for energy and environment'. (author)
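The mean-variance evaluation of generation mixes can be sketched as a grid search over mix shares, scoring each candidate portfolio by expected cost and cost variance. All cost statistics below are invented for illustration and do not reflect the study's scenarios or constraints:

```python
import numpy as np

# invented generation-cost statistics for three technologies ($/MWh)
mu = np.array([70.0, 90.0, 60.0])                  # mean costs
cov = np.array([[400.0, 20.0, 10.0],               # cost covariance matrix
                [20.0, 100.0, 5.0],
                [10.0, 5.0, 50.0]])

best = None
for w1 in np.linspace(0.0, 1.0, 101):              # grid over mix shares summing to 1
    for w2 in np.linspace(0.0, 1.0 - w1, 101):
        w = np.array([w1, w2, 1.0 - w1 - w2])
        m, v = w @ mu, w @ cov @ w                 # portfolio mean cost and variance
        if best is None or v < best[1]:
            best = (m, v, w)

# a diversified mix achieves lower cost variance than any single technology
print(best[1] < cov.diagonal().min())
```

Sweeping a risk-aversion weight between mean and variance instead of minimising variance alone would trace out the efficiency frontier the record refers to.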

  4. The variance of two game tree algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yanjun [Southern Methodist Univ., Dallas, TX (United States)

    1997-06-01

    This paper studies the variance of two game tree algorithms, α-β search and SCOUT, in the stochastic i.i.d. model. The problem of determining the variance of the classic α-β search algorithm in the i.i.d. model has long been open. This paper partially resolves the problem. It is shown, by the martingale method, that the standard deviation of the weaker α-β search without deep cutoffs is of the same order as the expected number of leaves evaluated. A nearly optimal upper bound on the variance of the general α-β search is obtained, and this upper bound yields an optimal bound if the current upper bound on the expected number of leaves evaluated by α-β search can be improved. A thorough treatment of the two-pass SCOUT algorithm is presented. The variance of the SCOUT algorithm is determined.

  5. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  6. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a data set where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, regular common-value shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
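The benefit of shrinking noisy per-variable variance estimators toward a pooled value can be shown in a few lines. This is a simplified fixed-weight sketch of the general idea, not the paper's adaptive, clustering-based MVR procedure; all data and the shrinkage weight are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n = 1000, 4                                     # many variables, tiny sample size
true_var = rng.gamma(2.0, 0.5, p)                  # heterogeneous true variances
X = rng.normal(0.0, np.sqrt(true_var)[:, None], (p, n))

s2 = X.var(axis=1, ddof=1)                         # noisy per-variable variances
pooled = s2.mean()
lam = 0.5                                          # fixed shrinkage weight (tuning choice)
s2_shrunk = lam * pooled + (1.0 - lam) * s2        # moderated variance estimator

mse_raw = np.mean((s2 - true_var) ** 2)
mse_shrunk = np.mean((s2_shrunk - true_var) ** 2)
print(mse_shrunk < mse_raw)                        # borrowing strength pays off
```

With only n = 4 observations per variable, each raw estimator is so noisy that the bias introduced by pooling is more than offset by the variance reduction, which is the Stein-type effect the record invokes.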

  7. State-Space Analysis of Model Error: A Probabilistic Parameter Estimation Framework with Spatial Analysis of Variance

    Science.gov (United States)

    2012-09-30

    atmospheric models and the chaotic growth of initial-condition (IC) error. The aim of our work is to provide new methods that begin to systematically disentangle the model inadequacy signal from the initial condition error signal.

  8. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory(CPT) based on two different methods: Maximizing CPT along the mean–variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  9. Evaluation of the oscillatory interference model of grid cell firing through analysis and measured period variance of some biological oscillators.

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2009-11-01

    Full Text Available Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators is expected to maintain a stable grid for approximately t = 5μ³/(4πσ²) seconds, where μ is the mean period of an oscillator in seconds and σ² is its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex, and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit-level effects which might reduce oscillator variability. Further implications for grid cell models are discussed.
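The stability criterion t = 5μ³/(4πσ²) is easy to evaluate numerically. The oscillator parameters below are hypothetical, chosen only to show the order of magnitude for a theta-band oscillator:

```python
import math

def stability_time(mu, sigma2):
    """t = 5*mu**3 / (4*pi*sigma2): expected grid stability time in seconds."""
    return 5.0 * mu ** 3 / (4.0 * math.pi * sigma2)

# hypothetical 8 Hz theta oscillator: mean period 125 ms, period sd 10 ms
t = stability_time(0.125, 0.010 ** 2)
print(round(t, 3))  # a few seconds, far below the stability seen in grid cells
```

A stability time on the order of seconds for plausible period variability illustrates the record's conclusion that the measured biological oscillators are too noisy for current oscillatory interference implementations.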

  10. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  11. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
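    The first-order indices discussed in this review can be sketched with a brute-force Monte Carlo estimator of S_i = Var(E[Y | X_i]) / Var(Y); the toy model and all names below are my own illustration, not from the article:

```python
import random

def first_order_index(i, model, dim, n_outer=300, n_inner=300, seed=42):
    """Brute-force Monte Carlo estimate of Sobol's first-order index
    S_i = Var(E[Y | X_i]) / Var(Y) for independent U(0,1) inputs."""
    rng = random.Random(seed)

    def var(v):
        m = sum(v) / len(v)
        return sum((y - m) ** 2 for y in v) / len(v)

    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                 # fix X_i at a sampled value
        ys = []
        for _ in range(n_inner):          # average over the other inputs
            x = [rng.random() for _ in range(dim)]
            x[i] = xi
            ys.append(model(x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    return var(cond_means) / var(all_y)

# Toy non-linear model Y = X1**2 + X2; analytically S1 = (4/45)/(31/180) ≈ 0.52.
S1 = first_order_index(0, lambda x: x[0] ** 2 + x[1], dim=2)
```

    For a linear model the same indices simply reproduce the terms of the law of propagation of uncertainties, which is the article's point; the ranking only becomes informative when the model is non-linear near the best estimates.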

  12. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean–variance approach

    International Nuclear Information System (INIS)

    Kitzing, Lena

    2014-01-01

    Different support instruments for renewable energy expose investors differently to market risks. This has implications on the attractiveness of investment. We use mean–variance portfolio analysis to identify the risk implications of two support instruments: feed-in tariffs and feed-in premiums. Using cash flow analysis, Monte Carlo simulations and mean–variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same attractiveness for investment, because they expose investors to less market risk. These risk implications should be considered when designing policy schemes. - Highlights: • Mean–variance portfolio approach to analyse risk implications of policy instruments. • We show that feed-in tariffs require lower support levels than feed-in premiums. • This systematic effect stems from the lower exposure of investors to market risk. • We created a stochastic model for an exemplary offshore wind park in West Denmark. • We quantify risk-return, Sharpe Ratios and differences in required support levels
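    The study's central mechanism, namely that a fixed tariff strips out market-price variance while a premium passes it through at the same mean revenue, can be sketched with a toy simulation; the price distribution and support levels below are invented for illustration and are not the study's Danish data:

```python
import random
import statistics

def simulate_support(scheme, n=50_000, seed=7):
    """Mean and standard deviation of revenue per MWh under two stylized
    support schemes (illustrative numbers only):
      'tariff'  -- fixed feed-in tariff, independent of the market price
      'premium' -- stochastic market price plus a fixed premium
    Market price is drawn from N(40, 10) EUR/MWh."""
    rng = random.Random(seed)
    prices = [rng.gauss(40.0, 10.0) for _ in range(n)]
    if scheme == "tariff":
        revenues = [55.0 for _ in prices]       # fixed tariff level
    else:
        revenues = [p + 15.0 for p in prices]   # market price + premium
    return statistics.mean(revenues), statistics.pstdev(revenues)

m_t, s_t = simulate_support("tariff")
m_p, s_p = simulate_support("premium")
# Same expected revenue, but only the premium exposes investors to price risk.
```

    In a mean-variance framework, the premium scheme must therefore offer a higher mean (a higher support level) to deliver the same risk-adjusted attractiveness, which is the systematic effect the paper quantifies.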

  13. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees....

  14. Linear transformations of variance/covariance matrices

    NARCIS (Netherlands)

    Parois, P.J.A.; Lutz, M.

    2011-01-01

    Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance

  15. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and, in turn, on the estimate of landslide mass volume from the constructed DEM.

  16. An Analysis of the Factors Generating the Variance Between the Budgeted and Actual Operating Results of the Naval Aviation Depot at North Island, California

    National Research Council Canada - National Science Library

    Curran, Thomas; Schimpff, Joshua J

    2008-01-01

    .... The variance analysis between budgeted (projected) and actual financial results was performed on financial data collected on the E-2C aircraft program from Fleet Readiness Center Southwest (FRCSW...

  17. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers ...

  18. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...

  19. Constituents with independence from growth temperature for bacteria using pyrolysis-gas chromatography/differential mobility spectrometry with analysis of variance and principal component analysis.

    Science.gov (United States)

    Prasad, Satendra; Pierce, Karisa M; Schmidt, Hartwig; Rao, Jaya V; Güth, Robert; Synovec, Robert E; Smith, Geoffrey B; Eiceman, Gary A

    2008-06-01

    Four bacteria, Escherichia coli, Pseudomonas aeruginosa, Staphylococcus warneri, and Micrococcus luteus, were grown at temperatures of 23, 30, and 37 degrees C and were characterized by pyrolysis-gas chromatography/differential mobility spectrometry (Py-GC/DMS) providing, with replicates, 120 data sets of retention time, compensation voltage, and ion intensity, each for negative and positive polarity. Principal component analysis (PCA) for 96 of these data sets exhibited clusters by temperature of culture growth and not by genus. Analysis of variance was used to isolate the constituents with dependences on growth temperature. When these were subtracted from the data sets, Fisher ratios with PCA resulted in four clusters according to genus at all temperatures for ions in each polarity. Comparable results were obtained from unsupervised PCA with 24 of the original data sets. The ions with taxonomic features were reconstructed into 3D plots of retention time, compensation voltage, and Fisher ratio and were matched, through GC-mass spectrometry (MS), with chemical standards attributed to the thermal decomposition of proteins and lipid A. Results for negative ions provided simpler data sets than from positive ions, as anticipated from selectivity of gas phase ion-molecule reactions in air at ambient pressure.

  20. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  1. Analysis of ambulatory blood pressure monitor data using a hierarchical model incorporating restricted cubic splines and heterogeneous within-subject variances.

    Science.gov (United States)

    Lambert, P C; Abrams, K R; Jones, D R; Halligan, A W; Shennan, A

    2001-12-30

    Hypertensive disorders of pregnancy are associated with significant maternal and foetal morbidity. Measurement of blood pressure remains the standard way of identifying individuals at risk. There is growing interest in the use of ambulatory blood pressure monitors (ABPM), which can record an individual's blood pressure many times over a 24-hour period. From a clinical perspective interest lies in the shape of the blood pressure profile over a 24-hour period and any differences in the profile between groups. We propose a two-level hierarchical linear model incorporating all ABPM data into a single model. We contrast a classical approach with a Bayesian approach using the results of a study of 206 pregnant women who were asked to wear an ABPM for 24 hours after referral to an obstetric day unit with high blood pressure. As the main interest lies in the shape of the profile, we use restricted cubic splines to model the mean profiles. The use of restricted cubic splines provides a flexible way to model the mean profiles and to make comparisons between groups. From examining the data and the fit of the model it is apparent that there were heterogeneous within-subject variances in that some women tend to have more variable blood pressure than others. Within the Bayesian framework it is relatively easy to incorporate a random effect to model the between-subject variation in the within-subject variances. Although there is substantial heterogeneity in the within-subject variances, allowing for this in the model has surprisingly little impact on the estimates of the mean profiles or their confidence/credible intervals. We thus demonstrate a powerful method for analysis of ABPM data and also demonstrate how heterogeneous within-subject variances can be modelled from a Bayesian perspective. Copyright 2001 John Wiley & Sons, Ltd.

  2. Comparison of a one-at-a-time and variance-based global sensitivity analysis applied to a parsimonious urban hydrological model

    Science.gov (United States)

    Coutu, S.

    2014-12-01

    A sensitivity analysis was conducted on an existing parsimonious model aiming to reproduce flow in engineered urban catchments and sewer networks. The model is parsimonious by design, limited to seven calibration parameters. The objective of this study is to demonstrate how different levels of sensitivity analysis can influence the interpretation of input parameter relevance in urban hydrology, even for lightly structured models. In this perspective, we applied a one-at-a-time (OAT) sensitivity analysis (SA) as well as a variance-based, global and model-independent method: the calculation of Sobol indices. Sobol's first-order and total-effect indices were estimated using a Monte Carlo approach. We present evidence of the irrelevance of calculating Sobol's second-order indices when the uncertainty on index estimation is too high. Sobol's method showed that two parameters drive model performance: the subsurface discharge rate and the root zone drainage coefficient (Clapp exponent). Interestingly, the surface discharge rate responsible for flow in impervious areas has no significant relevance, contrary to what was expected from the one-at-a-time sensitivity analysis alone. This result is not straightforward. It highlights the utility of carrying out variance-based sensitivity analysis in urban hydrology, even when using a parsimonious model, in order to prevent misunderstandings of the system dynamics and consequent management mistakes.

  3. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    Science.gov (United States)

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.

  4. Cultural variances in composition of biological and supernatural concepts of death: a content analysis of children's literature.

    Science.gov (United States)

    Lee, Ji Seong; Kim, Eun Young; Choi, Younyoung; Koo, Ja Hyouk

    2014-01-01

    Children's reasoning about the afterlife emerges naturally as a developmental regularity. Although a biological understanding of death increases in accordance with cognitive development, biological and supernatural explanations of death may coexist in a complementary manner, being deeply imbedded in cultural contexts. This study conducted a content analysis of 40 children's death-themed picture books in Western Europe and East Asia. It can be inferred that causality and non-functionality are highly integrated with the naturalistic and supernatural understanding of death in Western Europe, whereas the literature in East Asia seems to rely on naturalistic aspects of death and focuses on causal explanations.

  5. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  6. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A

    2007-01-01

    been used in bivariate or multivariate analysis to elucidate common genetic factors to two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect

  7. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additi...... or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of Matérn covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees....

  8. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  9. Analysis of Variance, Heritability, Correlation and Selection Character of M1 V3 Generation Cassava (Manihot esculenta Crantz) Mutants

    Directory of Open Access Journals (Sweden)

    Rahmi Henda Yani

    2018-02-01

    Information about genetic variability and the correlation between qualitative characters and yield is important to support a selection program. The objective of this research was to determine the genetic variability, heritability, and path analysis of the characters of M1 V3 cassava mutants. The research was conducted at the Bogor Agricultural University Experimental Field Research station from May 2014 to May 2015, using 32 mutants from five cassava parent lines: Malang-4 and Adira-4 (national varieties), UJ-5 (an introduced variety from Thailand), and two local genotypes from Halmahera, Jame-jame and Ratim. The results showed that gamma ray irradiation increased the variability of the five cassava genotypes. Characters with high heritability were length of leaf lobe, length of petiole, stem diameter, and plant height. The path correlation analysis showed that number of tubers, number of economic tubers (> 20 cm), height to first branching and stem diameter had a direct correlation with tuber mass per plant. These characters can be used for the selection of the M1 V4 generation.

  10. A univariate analysis of variance design for multiple-choice feeding-preference experiments: A hypothetical example with fruit-eating birds

    Science.gov (United States)

    Larrinaga, Asier R.

    2010-01-01

    I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design, for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays each containing a known weight of artificial fruits (red, blue, black, or green) are introduced into the cage, while four equivalent trays are left outside the cage, to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.
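    In its simplest balanced one-way form, the analysis of variance proposed here reduces to an F-ratio of between-group to within-group mean squares; a minimal sketch with made-up fruit-removal data (not from the paper), one group per fruit colour:

```python
def one_way_anova_F(groups):
    """F-statistic for a balanced or unbalanced one-way ANOVA layout:
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                         # number of treatment groups
    n = sum(len(g) for g in groups)         # total number of observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = 0.0
    for g in groups:
        m = sum(g) / len(g)
        ss_within += sum((y - m) ** 2 for y in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical fruit mass removed (g) per trial for red/blue/black/green trays:
F = one_way_anova_F([[5.1, 4.8, 5.3], [3.2, 3.0, 3.5],
                     [4.0, 4.2, 3.9], [2.1, 2.4, 2.2]])
```

    A large F relative to the F(k-1, n-k) reference distribution indicates that colour preferences differ; the proposed design additionally pairs each tray with an outside-cage control tray to correct for desiccation before such an analysis.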

  11. Comparative and variance analysis of activity of glutathione-dependent enzymes in loach Misgurnus fossilis L. embryos under the influence of flurenizyd

    Directory of Open Access Journals (Sweden)

    Н. О. Боднарчук

    2016-07-01

    The aim of this work was to study the influence of flurenizyd (an antimicrobial, antituberculosis, antichlamydial, immunomodulatory, antioxidant, hepatoprotective, anti-inflammatory medicine) on the antioxidant homoeostasis of loach embryos (Misgurnus fossilis L.) during early embryogenesis. Glutathione peroxidase and glutathione-S-transferase activity was studied under the action of flurenizyd at concentrations of 0.01, 0.05, 0.15, 1, 5 and 15 mM in loach embryos at the stages of the first (2 blastomeres), fourth (16 blastomeres), sixth (64 blastomeres), eighth (256 blastomeres) and tenth (1024 blastomeres) division of the zygote (before the stage of de-synchronization). A two-factor analysis of variance was conducted to determine the degree to which flurenizyd, time of development and miscellaneous factors influence the activity of the enzymes of the cell's glutathione antioxidant defence system. It was shown that flurenizyd disrupts glutathione peroxidase activity at all stages of development of loach embryos, in particular causing an increase in activity at the stages of 2, 16 and 256 blastomeres. The investigated antibiotic also disrupts glutathione-S-transferase activity during early embryogenesis of loach embryos Misgurnus fossilis L. At the maximal concentration (15 mM), flurenizyd decreases enzyme activity starting from the initial stages of embryo development. The two-factor analysis of variance indicated that glutathione peroxidase and glutathione-S-transferase activity of loach embryos is considerably influenced by miscellaneous factors, such as external factors affecting the development of the embryos. Time of development influences glutathione peroxidase activity during early embryogenesis to a greater degree than flurenizyd does, indicating an indirect influence of the investigated antibiotic on glutathione peroxidase. Considerable influence of flurenizyd on glutathione-S-transferase activity

  12. Semi-empirical prediction of moisture build-up in an electronic enclosure using analysis of variance (ANOVA)

    DEFF Research Database (Denmark)

    Shojaee Nasirabadi, Parizad; Conseil, Helene; Mohanty, Sankhya

    2016-01-01

    Electronic systems are exposed to harsh environmental conditions such as high humidity in many applications. Moisture transfer into electronic enclosures and condensation can cause several problems as material degradation and corrosion. Therefore, it is important to control the moisture content...... and the relative humidity inside electronic enclosures. In this work, moisture transfer into a typical polycarbonate electronic enclosure with a cylindrical shape opening is studied. The effects of four influential parameters namely, initial relative humidity inside the enclosure, radius and length of the opening...... and temperature are studied. A set of experiments are done based on a fractional factorial design in order to estimate the time constant for moisture transfer into the enclosure by fitting the experimental data to an analytical quasi-steady-state model. According to the statistical analysis, temperature...

  13. Advanced methods of variance analysis in nuclear prospective scenarios; Metodos avanzados de analisis de varianza en escenarios de prospectiva nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-07-01

    Traditional variance propagation techniques are not very reliable here, because relative uncertainties can reach 100%; for this reason less conventional methods are used, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
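    The Monte Carlo method mentioned can be sketched in a few lines: sample the inputs, push each draw through the model, and read the output mean and standard deviation off the sample, with no linearisation step; the model and numbers below are illustrative assumptions only:

```python
import random
import statistics

def mc_propagate(f, means, sds, n=100_000, seed=0):
    """Propagate input uncertainty through a model f by direct sampling,
    instead of the first-order (linearised) law of propagation of variance.
    Inputs are assumed independent Gaussians with the given means and sds."""
    rng = random.Random(seed)
    ys = [f(*(rng.gauss(m, s) for m, s in zip(means, sds))) for _ in range(n)]
    return statistics.mean(ys), statistics.pstdev(ys)

# Toy model y = x1 * x2 with 100% relative uncertainty on x1, the regime
# where linearised propagation becomes unreliable (illustrative numbers).
mean_y, sd_y = mc_propagate(lambda a, b: a * b, means=[1.0, 2.0], sds=[1.0, 0.1])
```

    The sample also yields the full output distribution, so coverage intervals can be read off empirically rather than assumed Gaussian.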

  14. Cavitation Erosion Behavior of Electroless Ni-P Coating and Optimization of Process Parameter Using Analysis of Variance with Orthogonal Array.

    Science.gov (United States)

    Park, Il-Cho; Kim, Seong-Jong

    2018-03-01

    This study investigated the cavitation erosion resistance of electroless Ni-P (EN) coated gray cast iron (GCI) in seawater solution. Furthermore, the optimum coating design parameters were examined to minimize cavitation erosion damage through analysis of variance (ANOVA) based on the L9 orthogonal array. Four coating design factors were used: concentration of the nickel source (A), concentration of the reducing agent (B), deposition temperature (C), and shot peening pressure (D). In accordance with the modified ASTM G32 standard, the cavitation erosion experiment was conducted for 1 hour in a seawater solution to find the optimum design parameters that minimize cavitation erosion damage. In addition, ANOVA was performed to verify the contribution of each coating design parameter. As a result, the concentration of the reducing agent was determined to be the most significant factor in the cavitation erosion behavior among the EN process parameters.

  15. An alternative method for noise analysis using pixel variance as part of quality control procedures on digital mammography systems

    International Nuclear Information System (INIS)

    Bouwman, R; Broeders, M; Van Engen, R; Young, K; Lazzari, B; Ravaglia, V

    2009-01-01

    According to the European Guidelines for quality assured breast cancer screening and diagnosis, noise analysis is one of the measurements that needs to be performed as part of quality control procedures on digital mammography systems. However, the method recommended in the European Guidelines does not discriminate sufficiently between systems with and without additional noise besides quantum noise. This paper gives an alternative and relatively simple method for noise analysis which can divide noise into electronic noise, structured noise and quantum noise. Quantum noise needs to be the dominant noise source in clinical images for optimal performance of a digital mammography system, and therefore the amount of electronic and structured noise should be minimal. For several digital mammography systems, the noise was separated into components based on the measured pixel value, the standard deviation (SD) of the image and the detector entrance dose. The results showed that differences between systems exist. Our findings confirm that the proposed method is able to discriminate systems based on their noise performance and is able to detect possible quality problems. Therefore, we suggest replacing the current method for noise analysis described in the European Guidelines with the alternative method described in this paper.

  16. Molecular variance of the Tunisian almond germplasm assessed by ...

    African Journals Online (AJOL)

    The genetic variance analysis of 82 almond (Prunus dulcis Mill.) genotypes was performed using ten genomic simple sequence repeats (SSRs). A total of 50 genotypes from Tunisia including local landraces identified while prospecting the different sites of Bizerte and Sidi Bouzid (Northern and central parts) which are the ...

  17. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk

  18. Estimation of the additive and dominance variances in South African ...

    African Journals Online (AJOL)

    The objective of this study was to estimate dominance variance for number born alive (NBA), 21- day litter weight (LWT21) and interval between parities (FI) in South African Landrace pigs. A total of 26223 NBA, 21335 LWT21 and 16370 FI records were analysed. Bayesian analysis via Gibbs sampling was used to estimate ...

  19. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative

  20. Variance decomposition-based sensitivity analysis via neural networks

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo

    2003-01-01

This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, could be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project

  1. Assessment of heterogeneity of residual variances using changepoint techniques

    Directory of Open Access Journals (Sweden)

    Toro Miguel A

    2000-07-01

Full Text Available Abstract Several studies using test-day models show clear heterogeneity of residual variance along lactation. A changepoint technique to account for this heterogeneity is proposed. The data set included 100 744 test-day records of 10 869 Holstein-Friesian cows from northern Spain. A three-stage hierarchical model using the Wood lactation function was employed. Two unknown changepoints at times T1 and T2 (0 < T1 < T2 < tmax), with continuity of residual variance at these points, were assumed. Also, a nonlinear relationship between the residual variance and the number of days of milking t was postulated. The residual variance at time t in lactation phase i was modeled as a function of a phase-specific parameter λi (i = 1, 2, 3). A Bayesian analysis using Gibbs sampling and the Metropolis-Hastings algorithm for marginalization was implemented. After a burn-in of 20 000 iterations, 40 000 samples were drawn to estimate posterior features. The posterior modes of T1, T2, λ1, λ2, λ3 and three further variance parameters were 53.2 and 248.2 days; 0.575, -0.406, 0.797 and 0.702; and 34.63 and 0.0455 kg2, respectively. The residual variances predicted using these point estimates were 2.64, 6.88, 3.59 and 4.35 kg2 at days of milking 10, 53, 248 and 305, respectively. This technique requires less restrictive assumptions, and the model has fewer parameters, than other methods proposed to account for the heterogeneity of residual variance during lactation.

  2. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

    International Nuclear Information System (INIS)

    Conte, Elio; Federici, Antonio; Zbilut, Joseph P.

    2009-01-01

It is known that R-R time series calculated from a recorded ECG are strongly correlated to sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis still dominates the analysis of such data, even though the FFT is valid only under restrictions, notably linearity and stationarity, that are largely violated in R-R time series. To go beyond this approach, we introduce a new method, called CZF, based on variogram analysis. It is motivated by a profound link with Recurrence Quantification Analysis, a basic tool for the investigation of nonlinear and nonstationary time series. A relevant feature of the method is therefore that it may also be applied to nonlinear and nonstationary time series. In addition, the method enables analysis of the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to experimental data from normal subjects, patients with hypertension before and after therapy, and children under different conditions of experimentation.
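The variogram underlying a method of this kind can be computed in a few lines; the sketch below is an illustrative empirical variogram ("fractal variance" curve) on toy series, not the authors' CZF implementation, and the simulated RR-like data are assumptions:

```python
import numpy as np

def empirical_variogram(x, max_lag):
    """Half the mean squared difference of the series at each lag."""
    x = np.asarray(x, dtype=float)
    return np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2)
                     for h in range((1), max_lag + 1)])

rng = np.random.default_rng(11)
# Toy series: white jitter (stationary) vs a slowly drifting random walk.
white = rng.normal(800, 30, 5000)
drift = 800 + np.cumsum(rng.normal(0, 1, 5000))

gw = empirical_variogram(white, 50)   # flat near the marginal variance (~900)
gd = empirical_variogram(drift, 50)   # grows with lag: nonstationary signal
print(gw[:3].round(1), gd[:3].round(1))
```

A flat variogram indicates a stationary, uncorrelated series, while a growing variogram exposes the nonstationary structure that an FFT-based analysis would misrepresent.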

  3. Further results on variances of local stereological estimators

    DEFF Research Database (Denmark)

    Pawlas, Zbynek; Jensen, Eva B. Vedel

    2006-01-01

In the present paper the statistical properties of local stereological estimators of particle volume are studied. It is shown that the variance of the estimators can be decomposed into the variance due to the local stereological estimation procedure and the variance due to the variability in the particle population. It turns out that these two variance components can be estimated separately, from sectional data. We present further results on the variances that can be used to determine the variance by numerical integration for particular choices of particle shapes.
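The decomposition described here is an instance of the law of total variance; a minimal simulation sketch (with hypothetical particle volumes and a hypothetical noisy local estimator, not the paper's estimators) shows how the two components add up:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population of particle volumes (the quantity of interest).
n_particles = 5_000
true_vol = rng.lognormal(mean=0.0, sigma=0.4, size=n_particles)

# Each particle's volume is measured by an unbiased but noisy "local
# estimator"; the noise scale grows with the true volume (multiplicative).
est_vol = true_vol * (1 + 0.3 * rng.normal(size=n_particles))

# Law of total variance:
#   Var(est) = E[Var(est | particle)] + Var(E[est | particle])
#            = estimation-procedure part + population part
procedure_part = np.mean((0.3 * true_vol) ** 2)   # E[Var(est | particle)]
population_part = true_vol.var()                  # variance of the true volumes
total = procedure_part + population_part
print(total, est_vol.var())
```

The two printed numbers agree up to sampling noise, mirroring the claim that the procedure and population components can be separated.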

  4. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    Hyung, Jin Shim; Beom, Seok Han; Chang, Hyo Kim

    2003-01-01

The Monte Carlo (MC) power method, based on a fixed number of fission sites at the beginning of each cycle, is known to cause biases in the variances of the k-eigenvalue (keff) and the fission reaction rate estimates. Because of the biases, the apparent variances of keff and the fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of keff and the fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods instead of the apparent variances provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of individual pins of the system. (authors)
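The apparent-versus-real variance effect stems from inter-sample correlation and can be reproduced with a generic correlated sequence; the sketch below uses an AR(1) process and a simple batch-means estimator in the spirit of a batch method, with illustrative parameters rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) sequence: positively correlated samples, mimicking the
# cycle-to-cycle correlation of MC eigenvalue estimates.
n, rho = 200_000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + rng.normal() * np.sqrt(1 - rho**2)

# Apparent variance of the mean: wrongly assumes independent samples.
apparent = x.var(ddof=1) / n

# Batch-means estimate: average over long batches, then take the
# variance of the (nearly independent) batch means.
b = 1000                                  # batch length >> correlation time
means = x.reshape(-1, b).mean(axis=1)
batch = means.var(ddof=1) / len(means)

# For AR(1) the real variance of the mean is about (1+rho)/(1-rho)/n,
# i.e. 19 times the apparent value here.
print(apparent, batch)
```

The batch estimate recovers the inflation factor that the naive estimator misses, which is the essence of using "approximate real" rather than apparent variances.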

  5. The use of analysis of variance to evaluate the influence of two factors - clay and radionuclide- in the sorption coefficients of Freundlich

    International Nuclear Information System (INIS)

    Freire, Carolina B.; Tello, Cledola Cassia Oliveira de

    2009-01-01

A large number of radioactive waste disposal operators use engineered barrier systems in near-surface and deep repositories to protect humans and the environment from the potential hazards associated with this kind of waste. Clays are often considered as buffer and backfill materials in the multi-barrier concept of both high-level and low/intermediate-level radioactive waste repositories. Several studies have shown that these materials present high sorption and cation exchange capacity, but it is important to evaluate whether the sorption coefficient is influenced by the type of clay and/or the type of radionuclide. Therefore, the objective of this research was to evaluate whether this influence exists, considering clay and radionuclide as two factors and the sorption coefficient, determined by the Freundlich model, as the response, through the application of a statistical analysis known as analysis of variance for a two-factor model with one observation per cell. In this experimental design, four different clays (two bentonites, one kaolinite and one vermiculite) and two radionuclides (cesium and strontium) were analyzed. The statistical test for nonadditivity showed that there is no evidence of interaction between these two factors and that only the type of clay has a significant effect on the sorption coefficient. (author)
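A two-factor analysis of variance with one observation per cell can be sketched as below; the interaction cell serves as the error term (the nonadditivity test itself is omitted), and the sorption coefficients are hypothetical placeholders, not the paper's data:

```python
import numpy as np
from scipy import stats

# Hypothetical Freundlich sorption coefficients (rows: 4 clays,
# columns: 2 radionuclides), one observation per cell.
kd = np.array([[12.1,  9.8],
               [11.5, 10.2],
               [ 4.3,  3.9],
               [ 7.6,  6.8]])
a, b = kd.shape                       # a = clays, b = radionuclides
grand = kd.mean()

# Sums of squares for a two-factor model without replication.
ss_clay = b * ((kd.mean(axis=1) - grand) ** 2).sum()
ss_radio = a * ((kd.mean(axis=0) - grand) ** 2).sum()
ss_total = ((kd - grand) ** 2).sum()
ss_err = ss_total - ss_clay - ss_radio          # interaction used as error

df_err = (a - 1) * (b - 1)
f_clay = (ss_clay / (a - 1)) / (ss_err / df_err)
f_radio = (ss_radio / (b - 1)) / (ss_err / df_err)
p_clay = stats.f.sf(f_clay, a - 1, df_err)
p_radio = stats.f.sf(f_radio, b - 1, df_err)
print(f"clay: F={f_clay:.2f} p={p_clay:.3f}  radionuclide: F={f_radio:.2f} p={p_radio:.3f}")
```

With these invented numbers the clay factor dominates, which parallels the paper's conclusion that only the type of clay has a significant effect.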

  6. Monte-Carlo analysis of rarefied-gas diffusion including variance reduction using the theory of Markov random walks

    Science.gov (United States)

    Perlmutter, M.

    1973-01-01

    Molecular diffusion through a rarefied gas is analyzed by using the theory of Markov random walks. The Markov walk is simulated on the computer by using random numbers to find the new states from the appropriate transition probabilities. As the sample molecule during its random walk passes a scoring position, which is a location at which the macroscopic diffusing flow variables such as molecular flux and molecular density are desired, an appropriate payoff is scored. The payoff is a function of the sample molecule velocity. For example, in obtaining the molecular flux across a scoring position, the random walk payoff is the net number of times the scoring position has been crossed in the positive direction. Similarly, when the molecular density is required, the payoff is the sum of the inverse velocity of the sample molecule passing the scoring position. The macroscopic diffusing flow variables are then found from the expected payoff of the random walks.
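The payoff-scoring idea can be illustrated with a deliberately simplified one-dimensional walk (Gaussian steps on a slab, not the paper's rarefied-gas transition probabilities); all numerical values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified 1-D walk: each sample particle starts at the source plane
# (x = 0) and takes random steps until it leaves the slab [0, L].
# The payoff at the scoring plane x_s is the net number of
# positive-direction crossings; its expectation estimates the flux.
L, x_s, n_particles = 2.0, 1.0, 20_000
payoff = np.zeros(n_particles)

for p in range(n_particles):
    x = 0.0
    while 0.0 <= x <= L:
        step = rng.normal(scale=0.5)          # random free flight
        if (x - x_s) * (x + step - x_s) < 0:  # crossed the scoring plane
            payoff[p] += np.sign(step)        # +1 forward, -1 backward
        x += step

flux = payoff.mean()          # expected net crossings per source particle
print(flux, payoff.std(ddof=1) / np.sqrt(n_particles))
```

Because crossings cancel in pairs, the net payoff of each history is 0 or 1 here, and the second printed number is the Monte Carlo standard error of the flux estimate.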

  7. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    ... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA...

  8. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCMs as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in-situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observations of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the temporal sensitivity of the RRM to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water levels and discharge are most sensitive over a hydrological year. The results show that local parameters directly impact water levels, while

  9. Analysis of variance, normal quantile-quantile correlation and effective expression support of pooled expression ratio of reference genes for defining expression stability

    Directory of Open Access Journals (Sweden)

    Himanshu Priyadarshi

    2017-01-01

Full Text Available Identification of a reference gene unaffected by the experimental conditions is obligatory for accurate measurement of gene expression through relative quantification. Most existing methods directly analyze variability in crossing point (Cp) values of reference genes and fail to account for template-independent factors that affect Cp values in their estimates. We describe the use of three simple statistical methods, namely analysis of variance (ANOVA), normal quantile-quantile correlation (NQQC) and effective expression support (EES), on pooled expression ratios of the reference genes in a panel to overcome this issue. Pooling the expression ratios across the genes in the panel nullifies sample-specific effects that uniformly affect all genes and that would otherwise be falsely reflected as instability. Our methods also offer the flexibility to include sample-specific PCR efficiencies in the estimations, when available, for improved accuracy. Additionally, we describe a correction factor from the ANOVA method to correct the relative fold change of a target gene if no truly stable reference gene can be found in the analyzed panel. The analysis is described on a synthetic data set to simplify the explanation of the statistical treatment of the data.

  10. Variance component and heritability estimates of early growth traits ...

    African Journals Online (AJOL)

    Variance component and heritability estimates of early growth traits in the Elsenburg Dormer sheep ... of variance and co- variance components. In recent years, heritability estimates of growth traits have been reported for many breeds of sheep. However, little information ..... Modeling genetic evaluation systems. Project no.

  11. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

and realized variances, our model allows one to infer the occurrence and size of extreme variance events, and to construct indicators signalling agents' sentiment towards future market conditions. Our results show that excess returns are to a large extent explained by fear or optimism towards future extreme variance...

  12. Multivariate Analysis of Variance: Finding significant growth in mice with craniofacial dysmorphology caused by the Crouzon mutation

    DEFF Research Database (Denmark)

    Thorup, Signe Strann; Ólafsdóttir, Hildur; Darvann, Tron Andre

    2010-01-01

    to the human counterpart. Quantifying growth in the Crouzon mouse model could test hypotheses of the relationship between craniosynostosis and dysmorphology, leading to better understanding of the causes of Crouzon syndrome as well as providing knowledge relevant for surgery planning. In the present study we...

  13. Variance component analysis of quantitative trait loci for pork carcass composition and meat quality on SSC4 and SSC11

    NARCIS (Netherlands)

    Wijk, van H.J.; Dibbits, B.W.; Liefers, S.C.; Buschbell, H.; Harlizius, B.; Heuven, H.C.M.; Knol, E.F.; Bovenhuis, H.; Groenen, M.A.M.

    2007-01-01

    In a previous study, QTL for carcass composition and meat quality were identified in a commercial finisher cross. The main objective of the current study was to confirm and fine map the QTL on SSC4 and SSC11 by genotyping an increased number of individuals and markers and to analyze the data using a

  14. Variance-component analysis of obesity in Type 2 Diabetes confirms loci on chromosomes 1q and 11q

    NARCIS (Netherlands)

    Haeften, T.W. van; Pearson, P.L.; Tilburg, J.H.O. van; Strengman, E.; Sandkuijl, L.A.; Wijmenga, C.

    2003-01-01

    To study genetic loci influencing obesity in nuclear families with type 2 diabetes, we performed a genome-wide screen with 325 microsatellite markers that had an average spacing of 11 cM and a mean heterozygosity of ~75% covering all 22 autosomes. Genotype data were obtained from 562

  15. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: First, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  16. Method of median semi-variance for the analysis of left-censored data: comparison with other techniques using environmental data.

    Science.gov (United States)

    Zoffoli, Hugo José Oliveira; Varella, Carlos Alberto Alves; do Amaral-Sobrinho, Nelson Moura Brasil; Zonta, Everaldo; Tolón-Becerra, Alfredo

    2013-11-01

    In environmental monitoring, variables with analytically non-detected values are commonly encountered. For the statistical evaluation of these data, most of the methods that produce a less biased performance require specific computer programs. In this paper, a statistical method based on the median semi-variance (SemiV) is proposed to estimate the position and spread statistics in a dataset with single left-censoring. The performances of the SemiV method and 12 other statistical methods are evaluated using real and complete datasets. The performances of all the methods are influenced by the percentage of censored data. In general, the simple substitution and deletion methods showed biased performance, with exceptions for L/2, Inter and L/√2 methods that can be used with caution under specific conditions. In general, the SemiV method and other parametric methods showed similar performances and were less biased than other methods. The SemiV method is a simple and accurate procedure that can be used in the analysis of datasets with less than 50% of left-censored data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Least-Squares Analysis of Phosphorus Soil Sorption Data with Weighting from Variance Function Estimation: A Statistical Case for the Freundlich Isotherm

    Science.gov (United States)

    Phosphorus sorption data for soil of the Pembroke classification are recorded at high replication — 10 experiments at each of 7 initial concentrations — for characterizing the data error structure through variance function estimation. The results permit the assignment of reliable weights for the su...

  18. ON THE VARIANCE OF LOCAL STEREOLOGICAL VOLUME ESTIMATORS

    Directory of Open Access Journals (Sweden)

    Eva B Vedel Jensen

    2011-05-01

    Full Text Available In the present paper, the variance of local stereological volume estimators is studied. For isotropic designs, the variance depends on the shape of the body under study and the choice of reference point. It can be expressed in terms of an equivalent star body. For a collection of triaxial ellipsoids the variance is determined by simulation. The problem of estimating particle size distributions from central sections through the particles is also discussed.

  19. Parametric study and global sensitivity analysis for co-pyrolysis of rape straw and waste tire via variance-based decomposition.

    Science.gov (United States)

    Xu, Li; Jiang, Yong; Qiu, Rong

    2018-01-01

In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R2-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlate the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence interval; F-tests, lack-of-fit tests and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
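A variance-based decomposition of this kind can be sketched with the classical pick-freeze estimator of first-order Sobol' indices, demonstrated on the standard Ishigami test function rather than the pyrolysis model:

```python
import numpy as np

rng = np.random.default_rng(7)

def ishigami(x):
    """Standard sensitivity-analysis test function (a=7, b=0.1)."""
    return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 100_000, 3
a = rng.uniform(-np.pi, np.pi, (n, d))
b = rng.uniform(-np.pi, np.pi, (n, d))

ya = ishigami(a)
var_y = ya.var()
first_order = []
for i in range(d):
    # "Pick-freeze": re-sample every input except x_i.
    mixed = b.copy()
    mixed[:, i] = a[:, i]
    yi = ishigami(mixed)
    # First-order index S_i = Cov(ya, yi) / Var(y).
    first_order.append(np.mean(ya * yi) - ya.mean() * yi.mean())

s = np.array(first_order) / var_y
print(s)   # analytical values are roughly [0.314, 0.442, 0.0]
```

Each S_i gives the share of output variance attributable to input i alone; total- and second-order indices extend the same idea to interactions.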

  20. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

Atrial fibrillation is a serious heart problem originating in the upper chamber of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, commonly called the RR interval. This irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we obtain good atrial fibrillation detection performance.
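A minimal sliding-window variance detector in the spirit of this abstract might look as follows; the window length, threshold and simulated RR series are illustrative assumptions, not values from the paper:

```python
import numpy as np

def af_flag(rr_ms, window=20, threshold=2500.0):
    """Flag possible atrial fibrillation from RR intervals (ms).

    Slides a window over the RR series and marks windows whose
    variance exceeds a threshold; the threshold here is illustrative
    and would need tuning on clinical data.
    """
    rr = np.asarray(rr_ms, dtype=float)
    flags = []
    for start in range(len(rr) - window + 1):
        flags.append(rr[start:start + window].var() > threshold)
    return np.array(flags)

rng = np.random.default_rng(3)
normal = rng.normal(800, 20, 60)        # regular rhythm: small RR spread
af = rng.normal(700, 150, 60)           # irregular rhythm: large RR spread
print(af_flag(normal).any(), af_flag(af).mean())
```

Real detectors would add ectopic-beat rejection and pick the threshold from labeled recordings, but the variance contrast between the two rhythms is exactly the feature the abstract exploits.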

  1. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  2. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  3. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  4. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have lower means and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  5. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section 456.522 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  6. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D

    2008-01-01

This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both the additive genetic effects on the mean and those on environmental variance have an important influence upon the future economic performance of selected individuals

  7. Productive Failure in Learning the Concept of Variance

    Science.gov (United States)

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  8. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving the mean and variance through differential…
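One such differentiation technique evaluates moments from the moment generating function, avoiding integration by parts; the sketch below works it for the exponential distribution (this particular derivation is an assumed example, not necessarily the note's own):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

# Moment generating function of the exponential distribution,
# M(t) = E[e^{tX}] = lambda / (lambda - t)  for t < lambda.
M = lam / (lam - t)

# Moments come from derivatives at t = 0, with no integration by parts.
mean = sp.diff(M, t).subs(t, 0)              # E[X]   = 1/lambda
second = sp.diff(M, t, 2).subs(t, 0)         # E[X^2] = 2/lambda^2
variance = sp.simplify(second - mean**2)     # Var[X] = 1/lambda^2
print(mean, variance)
```

The same two derivatives of the normal MGF, exp(mu*t + sigma^2*t^2/2), recover the mean mu and variance sigma^2 just as directly.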

  9. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
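The core modeling assumption (Gaussian noise whose variance is itself inverse-gamma distributed) can be simulated as follows; the parameter values and window sizes are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Each window of signal is zero-mean Gaussian noise whose variance is a
# random draw from an inverse gamma distribution.
alpha, beta = 3.0, 2.0          # shape and scale of the inverse gamma
n_windows, window_len = 2_000, 100

# Inverse gamma draws via the reciprocal of gamma draws
# (if Y ~ Gamma(alpha, rate=beta), then 1/Y ~ InvGamma(alpha, beta)).
variances = 1.0 / rng.gamma(alpha, 1.0 / beta, size=n_windows)
emg = rng.normal(0.0, np.sqrt(variances)[:, None],
                 size=(n_windows, window_len))

# Moment check: for the inverse gamma, E[variance] = beta / (alpha - 1).
print(variances.mean(), beta / (alpha - 1))
```

The per-window sample variances of the simulated signal track the latent draws closely, which is what makes variance estimation from rectified and smoothed signals feasible.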

  10. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. 
Results of pixel

  11. An optimization iterative algorithm based on nonnegative constraint with application to Allan variance analysis technique

    Science.gov (United States)

    Lv, Hanfeng; Zhang, Liang; Wang, Dingjie; Wu, Jie

    2014-03-01

    It is well known that inertial integrated navigation systems can provide accurate navigation information. In these systems, inertial sensor random error often becomes the limiting factor in achieving better performance, so accurate characterization of the random error is imperative. The Allan variance analysis technique performs well in analyzing inertial sensor random error and is widely used to characterize the various random error terms. This paper proposes a new method, an optimization iterative algorithm based on a nonnegative constraint, applied to the Allan variance analysis technique to estimate the parameters of the random error terms. The parameter estimates obtained by this method are nonnegative and optimal, and the estimation process avoids nearly singular matrix issues. In tests with simulated data and experimental data from a fiber optic gyro, the parameters estimated by the presented method agree well with those from other established methods, and the objective function attains its minimum value.
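
    For readers unfamiliar with the underlying statistic, a minimal non-overlapped Allan variance can be computed as below. This is a generic sketch; the paper's nonnegative-constraint iterative fit of the individual noise-term parameters is not reproduced here.

    ```python
    import numpy as np

    def allan_variance(y, m):
        """Non-overlapped Allan variance of rate samples y at cluster size m."""
        n = len(y) // m
        means = y[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
        return 0.5 * np.mean(np.diff(means) ** 2)

    rng = np.random.default_rng(1)
    white = rng.normal(0.0, 1.0, 100_000)               # white (angle random walk-like) noise

    # For white noise the Allan deviation falls as 1/sqrt(m)
    # (slope -1/2 on a log-log plot).
    adev1 = np.sqrt(allan_variance(white, 1))
    adev100 = np.sqrt(allan_variance(white, 100))
    print(adev1, adev100)
    ```

    Fitting the coefficients of the different power-law noise terms to such a curve, subject to the coefficients being nonnegative, is the estimation problem the abstract addresses.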

  12. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    The definition of the uniform linear generator is given, and some of the most commonly used tests for evaluating the uniformity and independence of the obtained values are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is then considered. The Monte Carlo method enables the moment W to be estimated and the variance of the estimator to be obtained. Some techniques for the construction of other estimators of W with reduced variance are introduced
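
    A standard textbook illustration of such a reduced-variance estimator (not taken from this text) is the method of antithetic variates, sketched here for the moment W = E[exp(U)] with U uniform on (0, 1):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    # Estimate W = E[e^U], U ~ Uniform(0,1); the true value is e - 1.
    u = rng.random(n)
    plain = np.exp(u)                              # crude Monte Carlo evaluations
    anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))     # antithetic pairs, same draws

    print(plain.mean(), plain.var())
    print(anti.mean(), anti.var())                 # much smaller variance
    ```

    Pairing each draw u with 1 - u induces negative correlation between the two evaluations, so the averaged estimator has markedly smaller variance than crude Monte Carlo at essentially the same cost.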

  13. Evaluation of the mineralogical characterization of several smectite clay deposits of the state of Paraiba, Brazil using statistical analysis of variance; Avaliacao da caracterizacao mineralogica de diversos depositos de argilas esmectiticas do estado da Paraiba utilizando analise estatistica de variancia

    Energy Technology Data Exchange (ETDEWEB)

    Gama, A.J.A.; Menezes, R.R.; Neves, G.A.; Brito, A.L.F. de, E-mail: agama@reitoria.ufcg.edu.br [Universidade Federal de Campina Grande (UFCG), PB (Brazil)

    2015-07-01

    Currently, over 80% of the industrialized bentonite clay produced in Brazil in sodium form for use in various industrial applications comes from the deposits of Boa Vista - PB. New bentonite deposits were recently discovered in the municipalities of Cubati - PB, Drawn Stone - PB, Sossego - PB and Olive Groves - PB, requiring systematic studies to develop their full industrial potential. Therefore, this study aimed to evaluate the chemical characterization of several deposits of smectite clays from various regions of the state of Paraíba through statistical analysis of variance. The chemical composition was determined by X-ray fluorescence (EDX). Analysis of variance and Tukey tests were then carried out using the statistical software MINITAB® 17.0. The results showed that the chemical composition of the bentonite clays from the new deposits presented different amounts of silica, aluminum, magnesium and calcium relative to the Boa Vista clays and to imported clays. (author)

  14. Meta-analysis without study-specific variance information: Heterogeneity case.

    Science.gov (United States)

    Sangnawakij, Patarawan; Böhning, Dankmar; Niwitpong, Sa-Aat; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-01-01

    The random effects model in meta-analysis is a standard statistical tool often used to analyze the effect sizes of the quantity of interest when there is heterogeneity between studies. In the special case considered here, the meta-analytic data contain only the sample means in two treatment arms and the sample sizes, but no sample standard deviations. The statistical comparison between two arms for this case is not possible within the existing meta-analytic inference framework. Therefore, the main objective of this paper is to estimate the overall mean difference and the associated variances, the between-study variance and the within-study variance, which are the key elements of the random effects model. These estimators are obtained using maximum likelihood estimation. The standard errors of the estimators and a quantification of the degree of heterogeneity are also investigated. A measure of heterogeneity is suggested which adjusts Higgins' original I² measure for within-study sample size. The performance of the proposed estimators is evaluated using simulations. It can be concluded that all estimated means converged to their associated true parameter values, and their standard errors tended to be small if the number of studies involved in the meta-analysis was large. The proposed estimators could be favorably applied in a meta-analysis comparing two surgeries for asymptomatic congenital lung malformations in young children.
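
    As background, Higgins' unadjusted I² statistic and the usual moment (DerSimonian-Laird) estimator of the between-study variance can be computed as follows. The effect sizes and within-study variances below are hypothetical, and the paper's adjusted measure and likelihood-based estimators are not reproduced.

    ```python
    import numpy as np

    # Hypothetical study effect estimates and within-study variances.
    theta = np.array([0.30, 0.45, 0.10, 0.60, 0.25])
    v = np.array([0.01, 0.02, 0.015, 0.03, 0.01])

    w = 1.0 / v
    theta_fixed = np.sum(w * theta) / np.sum(w)         # fixed-effect pooled mean
    Q = np.sum(w * (theta - theta_fixed) ** 2)          # Cochran's Q
    df = len(theta) - 1
    I2 = max(0.0, (Q - df) / Q)                         # Higgins' I-squared

    # DerSimonian-Laird moment estimator of the between-study variance.
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    print(Q, I2, tau2)
    ```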

  15. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  16. Analysis of variance of primary data on plant growth analysis

    Directory of Open Access Journals (Sweden)

    Adelson Paulo Araújo

    2003-01-01

    Plant growth analysis presents difficulties related to the statistical comparison of growth rates, and analysis of variance of the primary data can guide the interpretation of results. The objective of this work was to evaluate the analysis of variance of data from distinct harvests of an experiment, focusing especially on the homogeneity of variances and the choice of an adequate ANOVA model. Data from five experiments covering different crops and growth conditions were used. Of the total number of variables, 19% were originally homoscedastic, 60% became homoscedastic after logarithmic transformation, and 21% remained heteroscedastic after transformation. Data transformation did not affect the F test in one experiment, whereas in the other experiments transformation modified the F test, usually reducing the number of significant effects. Even when transformation did not alter the F test, mean comparisons led to divergent interpretations. The mixed ANOVA model, considering harvest as a random effect, reduced the number of significant effects of every factor whose F test was modified by this model. Examples illustrate that analysis of variance of the primary variables provides a tool for identifying significant differences in growth rates. The analysis of variance imposes restrictions on the experimental design, thereby eliminating some advantages of the functional growth analysis.
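
    The homogeneity-of-variance check and logarithmic transformation described above can be sketched with a simple variance-ratio diagnostic in the spirit of Hartley's F-max statistic. The growth data below are synthetic, with multiplicative error so that spread grows with the mean; the harvest means and sample sizes are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical plant-mass samples from three harvests; the spread grows
    # with the mean (multiplicative error), a common cause of heteroscedasticity.
    harvests = [rng.lognormal(mean=mu, sigma=0.4, size=30) for mu in (0.0, 1.0, 2.0)]

    def var_ratio(groups):
        """Ratio of largest to smallest group variance (Hartley's F-max statistic)."""
        v = [g.var(ddof=1) for g in groups]
        return max(v) / min(v)

    print(var_ratio(harvests))                       # far above 1: heteroscedastic
    print(var_ratio([np.log(g) for g in harvests]))  # near 1 after log transform
    ```

    A ratio near 1 after the log transform is what makes the transformed data suitable for a standard ANOVA, as the abstract reports for 60% of its variables.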

  17. Variance component and breeding value estimation for genetic heterogeneity of residual variance in Swedish Holstein dairy cattle

    NARCIS (Netherlands)

    Rönnegård, L.; Felleki, M.; Fikse, W.F.; Mulder, H.A.; Strandberg, E.

    2013-01-01

    Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this

  18. A new definition of nonlinear statistics mean and variance

    OpenAIRE

    Chen, W.

    1999-01-01

    This note presents a new definition of nonlinear statistics mean and variance to simplify the nonlinear statistics computations. These concepts aim to provide a theoretical explanation of a novel nonlinear weighted residual methodology presented recently by the present author.

  19. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient as the size of the system to be solved increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques therefore have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. However, such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows simultaneous estimation of diverse quantities. Therefore, the estimation of one of them could be made more accurate while at the same time degrading the variance of the other estimations. We propose here a method to reduce the variance for several quantities simultaneously, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the method we propose is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)
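
    The zero-variance idea can be seen in a toy rare-event problem (not the authors' Markovian transport setting): importance sampling from the conditional law of the rare event makes the likelihood-ratio weight constant, so the estimator's variance vanishes.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    a, n = 20.0, 10_000
    true_p = np.exp(-a)                     # P(X > a) for X ~ Exp(1)

    # Crude Monte Carlo: essentially never observes the rare event.
    crude = (rng.exponential(1.0, n) > a).mean()

    # Importance sampling from g = law of X given X > a, i.e. Y = a + Exp(1).
    # The likelihood ratio f(Y)/g(Y) = exp(-a) is constant, so this particular
    # scheme has exactly zero variance.
    y = a + rng.exponential(1.0, n)
    weights = np.exp(-y) / np.exp(-(y - a))
    est = weights.mean()

    print(crude, est, true_p)
    ```

    In realistic transport or reliability problems the ideal biased law is unknown (it involves the exact solution), which is why only approximations of the zero-variance scheme are practical, as the abstract notes.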

  20. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of Matérn covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees....

  1. Time Variance of the Suspension Nonlinearity

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.; Pedersen, Bo Rohde

    2008-01-01

    but recovers quickly. The high-power and long-term measurements affect the nonlinearity of the speaker by increasing the compliance value for all values of displacement. This level dependency is validated with distortion measurements, and it is demonstrated how improved accuracy of the non-linear model can...

  2. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability. Depending on parameters, travellers may be risk averse or risk seeking, and the value of travel time may increase or decrease in the mean travel time.

  3. Partitioning of genomic variance using biological pathways

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per

    and that these variants are enriched for genes that are connected in biological pathways or for likely functional effects on genes. These biological findings provide valuable insight for developing better genomic models. These are statistical models for predicting complex trait phenotypes on the basis of SNP...... action of multiple SNPs in genes, biological pathways or other external findings on the trait phenotype. As proof of concept we have tested the modelling framework on several traits in dairy cattle....

  4. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    To solve the problem that the dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirements of the fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with a time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller with the first and second derivatives of the FOG signal as inputs is designed to estimate the window length of the DAVAR. The Allan variances of the signals within the time-variant window are then computed to obtain the DAVAR of the FOG signal and describe the dynamic characteristics of the time-varying FOG signal. Additionally, a performance evaluation index for the algorithm based on a radar chart is proposed. Experimental results show that, compared with DAVAR methods using various fixed window lengths, the DAVAR method with a time-variant window length based on fuzzy control identifies the change of the FOG signal with time effectively and enhances the performance evaluation index by at least 30%.
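
    A fixed-window DAVAR, the baseline the paper improves on, can be sketched as follows. The window length and the test signal are illustrative; the fuzzy-control selection of a time-variant window is not reproduced.

    ```python
    import numpy as np

    def avar(y, m=1):
        """Non-overlapped Allan variance of y at cluster size m."""
        n = len(y) // m
        c = y[: n * m].reshape(n, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(c) ** 2)

    rng = np.random.default_rng(5)
    # A signal whose noise level doubles halfway through (a crude stand-in
    # for a time-varying FOG signal).
    sig = np.concatenate([rng.normal(0, 1.0, 5000), rng.normal(0, 2.0, 5000)])

    # Dynamic Allan variance with a fixed window: slide the window, recompute.
    win = 1000
    davar = [avar(sig[s : s + win]) for s in range(0, len(sig) - win + 1, win)]
    print(np.sqrt(davar))   # Allan deviation per window rises when the noise grows
    ```

    The fixed window trades tracking speed against estimation accuracy; adapting the window length to the signal's local dynamics is exactly what the fuzzy controller in the abstract is for.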

  5. Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues

    International Nuclear Information System (INIS)

    Yang, M; Zhu, X R; Mohan, R; Dong, L; Virshup, G; Clayton, J

    2010-01-01

    We discovered an empirical relationship between the logarithm of the mean excitation energy (ln Im) and the effective atomic number (EAN) of human tissues, which allows for computing patient-specific proton stopping power ratios (SPRs) using dual-energy CT (DECT) imaging. The accuracy of the DECT method was evaluated for 'standard' human tissues as well as their variance. The DECT method was compared to the existing standard clinical practice: a procedure introduced by Schneider et al. at the Paul Scherrer Institute (the stoichiometric calibration method). In this simulation study, SPRs were derived from calculated CT numbers of known material compositions, rather than from measurement. For standard human tissues, both methods achieved good accuracy with the root-mean-square (RMS) error well below 1%. For human tissues with small perturbations from standard human tissue compositions, the DECT method was shown to be less sensitive than the stoichiometric calibration method. The RMS error remained below 1% for most cases using the DECT method, which implies that the DECT method might be more suitable for measuring patient-specific tissue compositions to improve the accuracy of treatment planning for charged particle therapy. In this study, the effects of CT imaging artifacts due to the beam hardening effect, scatter, noise, patient movement, etc. were not analyzed. The true potential of the DECT method achieved in theoretical conditions may not be fully achievable in clinical settings. Further research and development may be needed to take advantage of the DECT method to characterize individual human tissues.

  6. Signal filtering and parameter variance analysis from flash electroretinogram

    Directory of Open Access Journals (Sweden)

    Yan-Hong Fang

    2014-11-01

    AIM: To accurately measure the implicit time and amplitude of the a- and b-waves of the flash electroretinogram (ERG) through filtering technology, eliminating oscillatory potential interference. METHODS: Full-field ERGs were recorded in 30 eyes of 15 subjects undergoing physical check-ups. The implicit time and amplitude of the a- and b-waves were measured with the passband set at 0.6-300 Hz and at 0.6-70 Hz, and the results were compared by paired t-test. RESULTS: When the passband was set at 0.6-70 Hz, the a- and b-waves had a single peak; compared with the 0.6-300 Hz passband, the implicit time of the a- and b-waves was prolonged and the amplitude was decreased (P<0.05). CONCLUSION: A passband of 0.6-70 Hz was the best choice to obtain smooth a- and b-waves from the original ERG; it could accurately measure the implicit time and amplitude of the a- and b-waves.

  7. Effects of Diversification of Assets on Mean and Variance | Jayeola ...

    African Journals Online (AJOL)

    Diversification is a means of minimizing risk and maximizing returns by investing in a variety of assets of the portfolio. This paper is written to determine the effects of diversification of three types of Assets; uncorrelated, perfectly correlated and perfectly negatively correlated assets on mean and variance. To go about this, ...

  8. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research areas.

  9. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can either be determined directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
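
    For independent estimates W (weight) and C (concentration), the exact variance of the product is Var(WC) = mu_W² sigma_C² + mu_C² sigma_W² + sigma_W² sigma_C², which a quick simulation confirms. The numbers below are invented for illustration, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical net weight (kg) and U concentration estimates, independent.
    mu_w, sd_w = 50.0, 2.0
    mu_c, sd_c = 0.90, 0.03

    # Exact variance of a product of independent random variables:
    # Var(WC) = mu_w^2 sd_c^2 + mu_c^2 sd_w^2 + sd_w^2 sd_c^2
    exact = mu_w**2 * sd_c**2 + mu_c**2 * sd_w**2 + sd_w**2 * sd_c**2

    w = rng.normal(mu_w, sd_w, 1_000_000)
    c = rng.normal(mu_c, sd_c, 1_000_000)
    print(exact, (w * c).var())   # simulation agrees with the formula
    ```

    The identity follows from Var(WC) = E[W²]E[C²] - (E[W]E[C])² under independence; it holds for any distributions with finite second moments, not just normals.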

  10. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

    Let P be a set of points in R^d, and let M be a function that maps any subset of P to a positive real number. We examine the problem of computing the exact mean and variance of M when a subset of points in P is selected according to a well-defined random distribution. We consider two distributions ... efficient exact algorithms for computing the mean and variance of several geometric measures when point sets are selected under one of the described random distributions. More specifically, we provide algorithms for the following measures: the bounding box volume, the convex hull volume, the mean pairwise distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1-ε)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and we show that our...

  11. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depths. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution for the mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depths are required.
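
    The species-richness analogue mentioned above, the classical exact mean under rarefaction (Hurlbert's formula), can be sketched and checked by Monte Carlo subsampling. The abundances are invented for illustration; the paper's PD formulae are more involved and are not reproduced here.

    ```python
    import numpy as np
    from math import comb

    rng = np.random.default_rng(7)

    counts = [20, 10, 5, 3, 1, 1]           # hypothetical abundances per species
    N, n = sum(counts), 15                   # total individuals, rarefied depth

    # Exact mean species richness under rarefaction (Hurlbert 1971):
    # E[S_n] = sum_i (1 - C(N - N_i, n) / C(N, n)).
    exact = sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

    # Monte Carlo check: repeatedly draw n individuals without replacement.
    pool = np.repeat(np.arange(len(counts)), counts)
    sims = [len(np.unique(rng.choice(pool, n, replace=False))) for _ in range(20_000)]
    print(exact, np.mean(sims))
    ```

    As the abstract notes for PD, the analytical expectation is exact and far cheaper than the repeated-subsampling estimate, which only converges as the number of random draws grows.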

  12. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than that of the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  13. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  14. RepExplore: addressing technical replicate variance in proteomics and metabolomics data analysis.

    Science.gov (United States)

    Glaab, Enrico; Schneider, Reinhard

    2015-07-01

    High-throughput omics datasets often contain technical replicates included to account for technical sources of noise in the measurement process. Although summarizing these replicate measurements by using robust averages may help to reduce the influence of noise on downstream data analysis, the information on the variance across the replicate measurements is lost in the averaging process and is therefore typically disregarded in subsequent statistical analyses. We introduce RepExplore, a web service dedicated to exploiting the information captured in the technical replicate variance to provide more reliable and informative differential expression and abundance statistics for omics datasets. The software builds on previously published statistical methods, which have been applied successfully to biomedical omics data but are difficult to use without prior experience in programming or scripting. RepExplore facilitates the analysis by providing fully automated data processing and interactive ranking tables, whisker plot, heat map and principal component analysis visualizations to interpret omics data and derived statistics. Availability: freely available at http://www.repexplore.tk. Contact: enrico.glaab@uni.lu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  15. Variances in consumers prices of selected food items among ...

    African Journals Online (AJOL)

    The study focused on the determination of variances among consumer prices of rice (local white), beans (white) and garri (yellow) in Watts, Okurikang and 8 Miles markets in southern zone of Cross River State. Completely randomized design was used to test the research hypothesis. Comparing the consumer prices of rice, ...

  17. An observation on the variance of a predicted response in ...

    African Journals Online (AJOL)

    ... these properties and computational simplicity. To avoid overfitting, along with the obvious advantage of having a simpler equation, it is shown that the addition of a variable to a regression equation does not reduce the variance of a predicted response. Key words: Linear regression; Partitioned matrix; Predicted response ...

  18. The Threat of Common Method Variance Bias to Theory Building

    Science.gov (United States)

    Reio, Thomas G., Jr.

    2010-01-01

    The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…

  19. Statistical test of reproducibility and operator variance in thin-section modal analysis of textures and phenocrysts in the Topopah Spring member, drill hole USW VH-2, Crater Flat, Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Moore, L.M.; Byers, F.M. Jr.; Broxton, D.E.

    1989-06-01

    A thin-section operator-variance test was given to the two junior authors, petrographers, by the senior author, a statistician, using 16 thin sections cut from core plugs drilled by the US Geological Survey from drill hole USW VH-2 standard (HCQ) drill core. The thin sections are samples of Topopah Spring devitrified rhyolite tuff from four textural zones, in ascending order: (1) lower nonlithophysal, (2) lower lithophysal, (3) middle nonlithophysal, and (4) upper lithophysal. Drill hole USW VH-2 is near the center of Crater Flat, about 6 miles WSW of the Yucca Mountain exploration block. The original thin-section labels were opaqued out with removable enamel and renumbered with alphanumeric labels. The slides were then given to the petrographer operators for quantitative thin-section modal (point-count) analysis of cryptocrystalline, spherulitic, granophyric, and void textures, as well as phenocryst minerals. Between-operator variance was tested by giving the two petrographers the same slide, and within-operator variance was tested by giving the same operator the same slide to count in a second test set, administered at least three months after the first set. Both operators were unaware that they were receiving the same slide to recount. 14 figs., 6 tabs.

  20. Asymptotics for Greeks under the constant elasticity of variance model

    OpenAIRE

    Kritski, Oleg L.; Zalmezh, Vladimir F.

    2017-01-01

    This paper is concerned with the asymptotics for Greeks of European-style options and the risk-neutral density function calculated under the constant elasticity of variance model. Formulae obtained help financial engineers to construct a perfect hedge with known behaviour and to price any options on financial assets.

  1. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    V. Bentkus

    2009-01-01

Full Text Available We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in mind, in auditing as well as in processing data related to the environment.

  2. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations are used to construct the high-low. The methodology is applied to TAQ data and compared...

  3. Assessment of the genetic variance of late-onset Alzheimer's disease

    OpenAIRE

    Ridge, Perry G.; Hoyt, Kaitlyn B.; Boehme, Kevin; Mukherjee, Shubhabrata; Crane, Paul K.; Haines, Jonathan L.; Mayeux, Richard; Farrer, Lindsay A.; Pericak-Vance, Margaret A.; Schellenberg, Gerard D.; Kauwe, John S.K.; Adams, Perrie M.; Albert, Marilyn S.; Albin, Roger L.; Apostolova, Liana G.

    2016-01-01

    Alzheimer’s disease (AD) is a complex genetic disorder with no effective treatments. More than 20 common markers have been identified, which are associated with AD. Recently, several rare variants have been identified in APP, TREM2, and UNC5C that affect risk for AD. Despite the many successes, the genetic architecture of AD remains unsolved. We used Genome-wide Complex Trait Analysis to 1) estimate phenotypic variance explained by genetics, 2) calculate genetic variance explained by known AD...

  4. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  5. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Eighty-eight (88) finger millet (Eleusine coracana (L.) Gaertn.) germplasm collections were tested using augmented randomized complete block design at Adet Agricultural Research Station in 2008 cropping season. The objective of this study was to find out heritability, variance components, variability and genetic advance ...

  6. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...

  7. A variance ratio test of the Zambian foreign- exchange market

    African Journals Online (AJOL)

    Kirstam

traders and other investors to earn higher-than-average market returns. Key words: variance ratio tests, ... Apart from stocks and equities, foreign-exchange is a key component of the financial market ... Investment banks, commercial banks, local and multinational corporations, brokers and central banks are the major ...

  8. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    . The genomic model commonly found in the literature, with marker effects affecting mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable...... of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted...... to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance is discussed and an extension of the McMC algorithm...

  9. Adjoint-based global variance reduction approach for reactor analysis problems

    International Nuclear Information System (INIS)

    Zhang, Qiong; Abdel-Khalik, Hany S.

    2011-01-01

    A new variant of a hybrid Monte Carlo-Deterministic approach for simulating particle transport problems is presented and compared to the SCALE FW-CADIS approach. The new approach, denoted by the Subspace approach, optimizes the selection of the weight windows for reactor analysis problems where detailed properties of all fuel assemblies are required everywhere in the reactor core. Like the FW-CADIS approach, the Subspace approach utilizes importance maps obtained from deterministic adjoint models to derive automatic weight-window biasing. In contrast to FW-CADIS, the Subspace approach identifies the correlations between weight window maps to minimize the computational time required for global variance reduction, i.e., when the solution is required everywhere in the phase space. The correlations are employed to reduce the number of maps required to achieve the same level of variance reduction that would be obtained with single-response maps. Numerical experiments, serving as proof of principle, are presented to compare the Subspace and FW-CADIS approaches in terms of the global reduction in standard deviation. (author)

  10. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  11. Shutdown dose rate analysis with CAD geometry, Cartesian/tetrahedral mesh, and advanced variance reduction

    International Nuclear Information System (INIS)

    Biondo, Elliott D.; Davis, Andrew; Wilson, Paul P.H.

    2016-01-01

Highlights: • A CAD-based shutdown dose rate analysis workflow has been implemented. • Cartesian and superimposed tetrahedral mesh are fully supported. • Biased and unbiased photon source sampling options are available. • Hybrid Monte Carlo/deterministic techniques accelerate photon transport. • The workflow has been validated with the FNG-ITER benchmark problem. - Abstract: In fusion energy systems (FES) high-energy neutrons born from burning plasma activate system components to form radionuclides. The biological dose rate that results from photons emitted by these radionuclides after shutdown—the shutdown dose rate (SDR)—must be quantified for maintenance planning. This can be done using the Rigorous Two-Step (R2S) method, which involves separate neutron and photon transport calculations, coupled by a nuclear inventory analysis code. The geometric complexity and highly attenuating configuration of FES motivates the use of CAD geometry and advanced variance reduction for this analysis. An R2S workflow has been created with the new capability of performing SDR analysis directly from CAD geometry with Cartesian or tetrahedral meshes and with biased photon source sampling, enabling the use of the Consistent Adjoint Driven Importance Sampling (CADIS) variance reduction technique. This workflow has been validated with the Frascati Neutron Generator (FNG)-ITER SDR benchmark using both Cartesian and tetrahedral meshes and both unbiased and biased photon source sampling. All results are within 20.4% of experimental values, which constitutes satisfactory agreement. Photon transport using CADIS is demonstrated to yield speedups as high as 8.5·10^5 for problems using the FNG geometry.

  13. The variance of the temperature distribution in a reactor cell

    International Nuclear Information System (INIS)

    Barrett, P.R.

    1977-01-01

Local variations in fuel packing density, fuel enrichment, bond-gap thickness, surface asperities, etc. give rise to potentially significant deviations in the temperature distribution in a reactor cell. Treating the second moments of the statistical variations of the fuel thermal conductivity, gap conductance, heat transfer coefficient from can to bulk coolant, etc. by means of specific variances, the standard deviation of the temperature distribution is calculated. To account for the temperature dependence of the fuel thermal conductivity and to remove non-linearities in the equations describing the temperature deviations, a linearization approximation is adopted and the resulting equations are solved by means of an expansion over azimuthal harmonics utilizing radially dependent coefficients. Axial conduction effects are neglected in order to simplify the algebraic expressions. It is demonstrated that the standard deviation of a quantity that is a linear combination of the harmonics of the temperature has a variance that contains no cross-correlation between different harmonics. (Auth.)

  14. A VLBI variance-covariance analysis interactive computer program. M.S. Thesis

    Science.gov (United States)

    Bock, Y.

    1980-01-01

An interactive computer program (in FORTRAN) for the variance covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.

  15. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    Science.gov (United States)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  16. Estimating shipper/receiver measurement error variances by use of ANOVA

    International Nuclear Information System (INIS)

    Lanning, B.M.

    1993-01-01

    Every measurement made on nuclear material items is subject to measurement errors which are inherent variations in the measurement process that cause the measured value to differ from the true value. In practice, it is important to know the variance (or standard deviation) in these measurement errors, because this indicates the precision in reported results. If a nuclear material facility is generating paired data (e.g., shipper/receiver) where party 1 and party 2 each make independent measurements on the same items, the measurement error variance associated with both parties can be extracted. This paper presents a straightforward method for the use of standard statistical computer packages, with analysis of variance (ANOVA), to obtain valid estimates of measurement variances. Also, with the help of the P-value, significant biases between the two parties can be directly detected without reference to an F-table

  17. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
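The variance of the variance can be illustrated directly for Gaussian sampling, where theory gives Var(s²) = 2σ⁴/(n−1). The sketch below only demonstrates that baseline result by Monte Carlo; it is not the history-based estimator proposed in the paper:

```python
import random
import statistics

random.seed(4)
n, reps, sigma = 30, 20000, 2.0

# Sample variance s^2 computed over many independent batches of size n.
s2 = [statistics.variance([random.gauss(0, sigma) for _ in range(n)])
      for _ in range(reps)]
vov_empirical = statistics.variance(s2)

# Gaussian theory: Var(s^2) = 2 * sigma**4 / (n - 1).
vov_theory = 2 * sigma ** 4 / (n - 1)
print(vov_empirical, vov_theory)
```

The spread of the batch-wise s² values matches the theoretical VoV, which is exactly the quantity that becomes unreliable when only a handful of cycles are available.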

  18. The size variance relationship of business firm growth rates.

    Science.gov (United States)

    Riccaboni, Massimo; Pammolli, Fabio; Buldyrev, Sergey V; Ponta, Linda; Stanley, H E

    2008-12-16

The relationship between the size and the variance of firm growth rates is known to follow an approximate power-law behavior σ(S) ~ S^(−β(S)), where S is the firm size and β(S) ≈ 0.2 is an exponent that weakly depends on S. Here, we show how a model of proportional growth, which treats firms as classes composed of various numbers of units of variable size, can explain this size-variance dependence. In general, the model predicts that β(S) must exhibit a crossover from β(0) = 0 to β(∞) = 1/2. For a realistic set of parameters, β(S) is approximately constant and can vary from 0.14 to 0.2 depending on the average number of units in the firm. We test the model with a unique industry-specific database in which firm sales are given in terms of the sum of the sales of all their products. We find that the model is consistent with the empirically observed size-variance relationship.
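The independent-unit limit of this size-variance relationship is easy to reproduce: a firm made of K equally sized, independently growing units has growth-rate fluctuations that shrink like K^(−1/2), i.e. β → 1/2. A hedged sketch (the unit shock size and trial count are arbitrary choices, not the paper's calibration):

```python
import math
import random
import statistics

random.seed(2)

def firm_growth_sd(num_units, trials=4000):
    """SD of the log growth rate of a firm made of equal, independent units."""
    rates = []
    for _ in range(trials):
        # Each unit (initial size 1) receives an iid lognormal growth shock.
        end_size = sum(math.exp(random.gauss(0, 0.3)) for _ in range(num_units))
        rates.append(math.log(end_size / num_units))
    return statistics.stdev(rates)

sds = {k: firm_growth_sd(k) for k in (1, 64)}
# With fully independent units the SD falls as K**-0.5, i.e. beta -> 1/2;
# correlated or heavy-tailed unit dynamics push beta back toward 0.
beta = math.log(sds[1] / sds[64]) / math.log(64)
print(sds, beta)
```

The empirical β ≈ 0.2 thus signals that real firms sit between the fully correlated (β = 0) and fully independent (β = 1/2) extremes.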

  19. Sample variance in the local measurements of the Hubble constant

    Science.gov (United States)

    Wu, Hao-Yi; Huterer, Dragan

    2017-11-01

The current >3σ tension between the Hubble constant H0 measured from local distance indicators and from the cosmic microwave background is one of the most highly debated issues in cosmology, as it possibly indicates new physics or unknown systematics. In this work, we explore whether this tension can be alleviated by the sample variance in the local measurements, which use a small fraction of the Hubble volume. We use a large-volume cosmological N-body simulation to model the local measurements and to quantify the variance due to local density fluctuations and sample selection. We explicitly take into account the inhomogeneous spatial distribution of type Ia supernovae. Despite the faithful modelling of the observations, our results confirm previous findings that sample variance in the local Hubble constant (H_0^loc) measurements is small; we find σ(H_0^loc) = 0.31 km s⁻¹ Mpc⁻¹, a nearly negligible fraction of the ~6 km s⁻¹ Mpc⁻¹ necessary to explain the difference between the local and global H0 measurements. While the H0 tension could in principle be explained by our local neighbourhood being an underdense region of radius ~150 Mpc, the extreme required underdensity of such a void (δ ≃ −0.8) makes it very unlikely in a ΛCDM universe, and it also violates existing observational constraints. Therefore, sample variance in a ΛCDM universe cannot appreciably alleviate the tension in H0 measurements, even after taking into account the inhomogeneous selection of type Ia supernovae.

  20. Variance-reduced DSMC simulations of low-signal flows

    Science.gov (United States)

    Radtke, Gregg; Al-Mohssen, Husain; Gallis, Michael; Hadjiconstantinou, Nicolas

    2010-11-01

    We present a variance-reduced direct Monte Carlo method for efficient simulation of low-signal kinetic problems. In contrast to previous variance-reduction methods, the method presented here, referred to as VRDSMC, is able to substantially reduce variance with essentially no modification to the standard DSMC algorithm. This is achieved by introducing an auxiliary equilibrium simulation which, via an importance weight formulation, uses the same particle data as the non-equilibrium (DSMC) calculation. The desired hydrodynamic fields are expressed in terms of the difference between the equilibrium and the non-equilibrium results, which yields drastically reduced statistical uncertainty because it exploits the correlation between the two simulations. The resulting formulation is simple to code and provides considerable computational savings for a wide range of problems of practical interest. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  1. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    ... and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML algorithm, and significance tests done using the Fmax procedure. Phenotypic, additive genetic and residual variances were heterogeneous across production environments.

  2. Variance reduction techniques in the simulation of Markov processes

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to perform the analytical calculation of r and so it is reasonable to estimate r by a simulation process. A consistent estimator r(n) of r is obtained with respect to a chain with a countable state space. Suitably modifying the estimator r(n) of r one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of finite state space

  3. Variance of the Quantum Dwell Time for a Nonrelativistic Particle

    Science.gov (United States)

    Hahne, Gerhard

    2012-01-01

Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, ..., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.

  4. Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter

    Science.gov (United States)

    Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao

    2015-01-01

    As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis for an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and even cause errors during the modeling of dynamic Allan variance. To solve these problems, first, a new state-space model that directly models the stochastic errors to obtain a nonlinear state-space model was established for inertial sensors. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable to estimate the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor. PMID:25625903
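The batch Allan variance the paper contrasts against is straightforward to compute offline. A minimal non-overlapping implementation on simulated white noise, where the Allan deviation should fall as τ^(−1/2), might look like this (this is only the classic graph-based computation, not the proposed neural-extended Kalman filter estimator):

```python
import math
import random
import statistics

random.seed(3)
data = [random.gauss(0, 1) for _ in range(2 ** 14)]  # simulated white noise

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at an averaging length of m samples."""
    bins = [statistics.fmean(y[i:i + m]) for i in range(0, len(y) - m + 1, m)]
    return math.sqrt(0.5 * statistics.fmean(
        (b2 - b1) ** 2 for b1, b2 in zip(bins, bins[1:])))

taus = [1, 4, 16, 64]
adev = [allan_deviation(data, m) for m in taus]
# For white noise the log-log slope of the Allan deviation is about -1/2.
slope = math.log(adev[-1] / adev[0]) / math.log(taus[-1] / taus[0])
print(adev, slope)
```

Reading noise coefficients off such log-log slopes by hand is exactly the manual step the online Kalman-filter approach is designed to avoid.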

  5. Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Zhiyong Miao

    2015-01-01

    Full Text Available As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis for an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and even cause errors during the modeling of dynamic Allan variance. To solve these problems, first, a new state-space model that directly models the stochastic errors to obtain a nonlinear state-space model was established for inertial sensors. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable to estimate the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor.

  7. Estimation of measurement variance in the context of environment statistics

    Science.gov (United States)

    Maiti, Pulakesh

    2015-02-01

The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics would be required to produce higher-quality statistical information. For this, timely, reliable and comparable data are needed. Lack of proper and uniform definitions and of unambiguous classifications poses serious problems in procuring quality data. These cause measurement errors. We consider the problem of estimating measurement variance so that some measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered here is two-stage sampling.
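For interviewer-induced measurement error, the classic balanced one-way ANOVA (method-of-moments) decomposition recovers within- and between-interviewer variance components as MSW and (MSB − MSW)/m. A sketch on simulated survey data (all parameters below are made up, and this is a generic variance-component estimator, not necessarily the paper's exact procedure):

```python
import random
import statistics

random.seed(5)
k, m = 40, 25                  # interviewers, respondents per interviewer
sigma_b, sigma_w = 1.5, 3.0    # between- and within-interviewer SDs (made up)

groups = []
for _ in range(k):
    bias = random.gauss(0, sigma_b)   # this interviewer's systematic shift
    groups.append([50 + bias + random.gauss(0, sigma_w) for _ in range(m)])

grand_mean = statistics.fmean(x for g in groups for x in g)
msb = m * sum((statistics.fmean(g) - grand_mean) ** 2 for g in groups) / (k - 1)
msw = statistics.fmean(statistics.variance(g) for g in groups)

var_within = msw                   # estimates sigma_w**2 = 9.0
var_between = (msb - msw) / m      # estimates sigma_b**2 = 2.25
print(var_within, var_between)
```

The between-interviewer component is precisely the measurement variance one would want to drive down through better definitions and training.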

  8. Stochastic Mixing Model with Power Law Decay of Variance

    Science.gov (United States)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t, x(t)) appears to behave as a sample mean. It converges to the mean value μ, while the variance σ²_c(t) decays approximately as t⁻¹. Since the variance of the scalar decays faster than that of a sample mean (typically with an exponent greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n, which we model in a first step as a deterministic function. In a second step, we generalize γ_n as a stochastic variable, taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
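The LLN scaling invoked here is easy to check numerically: the variance of a sample mean across independent realizations falls like 1/t, so growing t by a factor of 16 should shrink the variance by roughly the same factor. A minimal sketch (uniform draws and the sample sizes are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(6)

def variance_of_mean(t, reps=5000):
    """Variance across realizations of the sample mean after t iid draws."""
    means = [statistics.fmean(random.random() for _ in range(t))
             for _ in range(reps)]
    return statistics.variance(means)

v10, v160 = variance_of_mean(10), variance_of_mean(160)
# LLN scaling Var ~ 1/t: a 16x longer run should shrink the variance ~16x.
print(v10 / v160)
```

A scalar whose variance decays faster than this 1/t baseline is what motivates the model's non-linear modifications.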

  9. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    Science.gov (United States)

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.
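Whatever method supplies the study-specific variances (the abstract's estimators are not reproduced here), they enter the usual inverse-variance pooling step for the overall mean difference. A minimal sketch with invented study summaries:

```python
import numpy as np

# Hypothetical per-study summaries: mean difference d_i with an estimated
# variance v_i (however obtained -- e.g. via the methods in the abstract).
d = np.array([0.8, 1.4, 0.3, 1.1])        # study mean differences
v = np.array([0.10, 0.25, 0.08, 0.30])    # estimated variances of d

w = 1.0 / v                               # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)      # fixed-effect pooled difference
se_pooled = np.sqrt(1.0 / np.sum(w))      # its standard error
ci = (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)
print(round(d_pooled, 3), round(se_pooled, 3))  # -> 0.704 0.183
```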

  10. Analysis of Variance of the Effects of a Project’s Location on Key Issues and Challenges in Post-Disaster Reconstruction Projects

    Directory of Open Access Journals (Sweden)

    Dzulkarnaen Ismail

    2017-11-01

    Full Text Available After a disaster, the reconstruction phase is driven by immediate challenges. One of the main challenges in the post-disaster period is the way that reconstruction projects are implemented. Reconstruction cannot move forward until some complex issues are settled. The purposes of this research are to highlight the issues and challenges in post-disaster reconstruction (PDR projects and to determine the significant differences between the issues and challenges in different locations where PDR projects are carried out. The researchers collected data within international non-governmental organisations (INGOs on their experience of working with PDR projects. The findings of this research provide the foundation on which to build strategies for avoiding project failures; this may be useful for PDR project practitioners in the future.

  11. The mean–variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI

    Science.gov (United States)

    Thompson, William H.; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections that should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity and the problem of deciding which brain connections that are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed. PMID:26236216
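The two thresholding strategies amount to summarizing a sliding-window correlation time-series by either its mean or its variance. A small sketch on synthetic signals (window length and noise levels are arbitrary choices, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "regional" signals with a shared component.
n = 600
shared = rng.normal(size=n)
ts1 = shared + 0.8 * rng.normal(size=n)
ts2 = shared + 0.8 * rng.normal(size=n)

# Sliding-window correlation time-series (window of 60 samples).
win = 60
corr = np.array([np.corrcoef(ts1[i:i + win], ts2[i:i + win])[0, 1]
                 for i in range(n - win + 1)])

# The two candidate summaries discussed in the abstract:
mean_conn = corr.mean()   # magnitude-based thresholding uses this
var_conn = corr.var()     # variance-based thresholding uses this
print(round(mean_conn, 2), round(var_conn, 4))
```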

  12. The mean-variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI.

    Science.gov (United States)

    Thompson, William H; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections that should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity and the problem of deciding which brain connections that are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed.

  13. How to assess intra- and inter-observer agreement with quantitative PET using variance component analysis: a proposal for standardisation

    OpenAIRE

    Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen; Halekoh, Ulrich; Høilund-Carlsen, Poul Flemming

    2016-01-01

    Background Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects deviation of groups of measurements from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or the Bland-Altman plot. We suggest variance component analysis (VCA) to estimate the influence of errors due to single elements of a PET scan (scanner, time point, observer, etc.) to express the composite uncertainty...

  14. GPR image analysis to locate water leaks from buried pipes by applying variance filters

    Science.gov (United States)

    Ocaña-Levario, Silvia J.; Carreño-Alvarado, Elizabeth P.; Ayala-Cabrera, David; Izquierdo, Joaquín

    2018-05-01

    Nowadays, there is growing interest in controlling and reducing the amount of water lost through leakage in water supply systems (WSSs). Leakage is, in fact, one of the biggest problems faced by the managers of these utilities. This work addresses the problem of leakage in WSSs by using GPR (Ground Penetrating Radar) as a non-destructive method. The main objective is to identify and extract features from GPR images, such as leaks and components, under controlled laboratory conditions by a methodology based on second-order statistical parameters and, using the obtained features, to create 3D models that allow quick visualization of components and leaks in WSSs from GPR image analysis and subsequent interpretation. This methodology has been used before in other fields and has provided promising results. The results obtained with the proposed methodology are presented, analyzed, interpreted and compared with the results obtained by using a well-established multi-agent-based methodology. These results show that the variance filter is capable of highlighting the characteristics of components and anomalies in an intuitive manner, so that they can be identified by non-specialist personnel using the 3D models we develop. This research intends to pave the way towards future intelligent detection systems that enable the automatic detection of leaks in WSSs.
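The variance filter at the core of such a methodology is simply a local second-order statistic. A minimal sketch on an invented toy "radargram" (a real GPR B-scan would replace the synthetic array):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Toy "radargram": weak background noise plus one bright, leak-like
# anomaly. Both are invented for illustration.
rng = np.random.default_rng(2)
img = rng.normal(0.0, 0.1, size=(64, 64))
img[30:34, 30:34] += 3.0                 # hypothetical anomaly

# Variance filter: local variance over every 5x5 neighbourhood
# (a second-order statistical parameter, as in the abstract).
windows = sliding_window_view(img, (5, 5))
local_var = windows.var(axis=(-2, -1))

# The anomaly region responds far more strongly than flat background.
print(local_var[30, 30] > 10 * local_var[5, 5])
```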

  15. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
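For a fixed covariance matrix, the global minimum variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the time-varying version in the abstract re-estimates Σ through time. A sketch with an invented three-market covariance matrix (the numbers are not the NAFTA estimates):

```python
import numpy as np

# Hypothetical covariance matrix of three market returns
# (Canada, Mexico, USA) -- invented numbers for illustration.
cov = np.array([[0.040, 0.018, 0.016],
                [0.018, 0.090, 0.021],
                [0.016, 0.021, 0.030]])

ones = np.ones(3)
inv = np.linalg.inv(cov)
w = inv @ ones / (ones @ inv @ ones)   # global minimum-variance weights

port_var = w @ cov @ w                 # variance of the resulting portfolio
print(np.round(w, 3), round(port_var, 4))
```

By construction the weights sum to one, and no other fully invested portfolio (e.g. equal weights) can have a lower variance under this covariance matrix.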

  16. Concept design theory and model for multi-use space facilities: Analysis of key system design parameters through variance of mission requirements

    Science.gov (United States)

    Reynerson, Charles Martin

    This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.

  17. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    Charles, Amelie; Darne, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)
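All of the tests cited build on the basic variance ratio statistic VR(q) = Var(q-period returns) / (q · Var(1-period returns)), which equals 1 under a random walk. A minimal Lo-MacKinlay-style sketch on a simulated price series (the ranks/signs and wild-bootstrap refinements are not implemented here):

```python
import numpy as np

def variance_ratio(prices, q):
    """VR(q): variance of q-period log returns divided by q times the
    variance of 1-period log returns; ~1 under a random walk."""
    r = np.diff(np.log(prices))
    rq = np.log(prices[q:]) - np.log(prices[:-q])  # overlapping q-period returns
    return rq.var(ddof=1) / (q * r.var(ddof=1))

# A simulated random-walk "oil price" series: VR(q) should be close to 1.
rng = np.random.default_rng(3)
prices = 60.0 * np.exp(np.cumsum(rng.normal(0.0, 0.02, size=5000)))
print(round(variance_ratio(prices, 5), 2))
```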

  18. Local orbitals by minimizing powers of the orbital variance

    DEFF Research Database (Denmark)

    Jansik, Branislav; Høst, Stinne; Kristensen, Kasper

    2011-01-01

    It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function … be encountered. These disappear when the exponent is larger than one. For a small penalty, the occupied orbitals are more local than the virtual ones. When the penalty is increased, the locality of the occupied and virtual orbitals becomes similar. In fact, when increasing the cardinal number for Dunning's correlation consistent basis sets, it is seen that for larger penalties, the virtual orbitals become more local than the occupied ones. We also show that the local virtual HF orbitals are significantly more local than the redundant projected atomic orbitals, which often have been used to span the virtual …

  19. Minimum variance linear unbiased estimators of loss and inventory

    International Nuclear Information System (INIS)

    Stewart, K.B.

    1977-01-01

    The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data from a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program; could be used with a desk calculator and can be used in a recursive way from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs

  20. Variance sources and ratios to estimate energy and nutrient intakes in a sample of adolescents from public schools, Natal, Brazil

    Directory of Open Access Journals (Sweden)

    Severina Carla Vieira Cunha Lima

    2013-04-01

    Full Text Available OBJECTIVE: The aim of this study was to describe the sources of dietary variance, and determine the variance ratios and the number of days needed for estimating the habitual diet of adolescents. METHODS: Two 24 hour food recalls were used for estimating the energy, macronutrient, fatty acid, fiber and cholesterol intakes of 366 adolescents attending Public Schools in Natal, Rio Grande do Norte, Brazil. The variance ratio between the intrapersonal and interpersonal variances, determined by Analysis of Variance, was calculated. The number of days needed for estimating the habitual intake of each nutrient was given by the hypothetical correlation (r>0.9) between the actual and observed nutrient intakes. RESULTS: Sources of interpersonal variation were higher for all nutrients and in both genders. Variance ratios were <1 for all nutrients and higher in women. Two 24 hour dietary recalls were enough to assess energy, carbohydrate, fiber and saturated and monounsaturated fatty acid intakes accurately. However, the accurate assessment of protein, lipid, polyunsaturated fatty acid and cholesterol intakes required three 24 hour recalls. CONCLUSION: Interpersonal dietary variance in adolescents was greater than intrapersonal variance for all nutrients, resulting in a variance ratio of less than 1. Two to three 24 hour recalls, depending on gender and the study nutrient, are necessary for estimating the habitual diet of this population.
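The number of days follows from the variance ratio via a commonly used formula, d = (r² / (1 − r²)) · (Vw / Vb), where Vw/Vb is the within- to between-person variance ratio and r the desired correlation between observed and true intake. A sketch with invented variance ratios (not the study's estimates):

```python
import math

def days_needed(variance_ratio, r=0.9):
    """d = (r^2 / (1 - r^2)) * (Vw / Vb): repeat 24 h recalls needed for
    an observed-vs-true intake correlation of r."""
    return (r * r / (1.0 - r * r)) * variance_ratio

# Hypothetical within/between variance ratios for two nutrients.
print(math.ceil(days_needed(0.45)))   # -> 2 recalls
print(math.ceil(days_needed(0.65)))   # -> 3 recalls
```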

  1. On-Line Estimation of Allan Variance Parameters

    National Research Council Canada - National Science Library

    Ford, J

    1999-01-01

    ... (Inertial Measurement Unit) gyros and accelerometers. The on-line method proposes a state space model and proposes parameter estimators for quantities previously measured from off-line data techniques such as the Allan variance graph...
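The Allan variance graph referred to above is built from the variance of differences between successive cluster averages. A minimal non-overlapping version on simulated white gyro noise, for which the Allan variance falls as 1/m:

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance for cluster size m:
    0.5 * E[(ybar_{k+1} - ybar_k)^2], with ybar_k the k-th cluster mean."""
    n = len(x) // m
    y = x[:n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(y) ** 2)

# Simulated white gyro noise: AVAR(m) = sigma^2 / m, so the ratio below
# should be close to 1.
rng = np.random.default_rng(4)
gyro = rng.normal(0.0, 0.1, size=200_000)
a1, a100 = allan_variance(gyro, 1), allan_variance(gyro, 100)
print(round(a1 / (100 * a100), 2))
```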

  2. Variance Components

    CERN Document Server

    Searle, Shayle R; McCulloch, Charles E

    1992-01-01

    WILEY-INTERSCIENCE PAPERBACK SERIES. The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . .Variance Components is an excellent book. It is organized and well written, and provides many references to a variety of topics. I recommend it to anyone with interest in linear models.".

  3. Analysis of health trait data from on-farm computer systems in the U.S. I: Pedigree and genomic variance components estimation

    Science.gov (United States)

    With an emphasis on increasing profit through increased dairy cow production, a negative relationship with fitness traits such as fertility and health traits has become apparent. Decreased cow health can impact herd profitability through increased rates of involuntary culling and decreased or lost m...

  4. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    Science.gov (United States)

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
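The two valuation schemes can be put side by side for a single gamble; all numbers below, including the risk-aversion weight lambda, are hypothetical illustrations:

```python
import math

outcomes = [(0.5, 100.0), (0.5, 0.0)]    # (probability, payoff) pairs

# Expected utility with a concave (log) utility, u(x) = log(1 + x):
# multiply each state's probability by the utility of its payoff and sum.
eu = sum(p * math.log1p(x) for p, x in outcomes)

# Mean-variance value: E[payoff] - (lambda/2) * Var[payoff],
# with lambda a hypothetical risk-aversion weight.
mean = sum(p * x for p, x in outcomes)
var = sum(p * (x - mean) ** 2 for p, x in outcomes)
lam = 0.01
mv = mean - 0.5 * lam * var
print(round(eu, 3), round(mv, 1))        # -> 2.308 37.5
```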

  5. Asymptotics of variance of the lattice point count

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří

    2008-01-01

    Roč. 58, č. 3 (2008), s. 751-758 ISSN 0011-4642 R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : point lattice * variance Subject RIV: BA - General Mathematics Impact factor: 0.210, year: 2008

  6. Estimation models of variance components for farrowing interval in swine

    Directory of Open Access Journals (Sweden)

    Aderbal Cavalcante Neto

    2009-02-01

    Full Text Available The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested which contained the fixed effects (contemporary group and covariables) and the direct genetic additive and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain as response to selection. The common litter environmental and the maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in the genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.

  7. Variance in the chemical composition of dry beans determined from UV spectral fingerprints.

    Science.gov (United States)

    Harnly, James M; Pastor-Corrales, Marcial A; Luthria, Devanand L

    2009-10-14

    Nine varieties of dry beans representing five market classes were grown in three locations (Maryland, Michigan, and Nebraska), and subsamples were collected for each variety (row composites from each plot). Aqueous methanol extracts of ground beans were analyzed in triplicate by UV spectrophotometry. Analysis of variance-principal component analysis was used to quantify the relative variance arising from location, variety, between rows of plants, and analytical uncertainty and to test the significance of differences in the chemical composition. Statistically significant differences were observed between all three locations, between all nine varieties, and between rows for each variety. PCA score plots placed the nine varieties in four categories that corresponded with known taxonomic groupings: (1) black beans (cv. Jaguar and cv. T-39), (2) pinto beans (cv. Buster and cv. Othello), (3) small red beans (cv. Merlot), and (4) great northern (cv. Matterhorn and cv. Weihing) and navy (cv. Seahawk and cv. Vista) beans. The relative plant-to-plant variance, estimated from the between row variance, was 71-79% for 25-40 plants per row.
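The variance partitioning underlying ANOVA-PCA can be illustrated on a single factor with a method-of-moments estimate of the between- and within-group components; the group structure and effect sizes below are invented, not the bean data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated readings: 9 "rows" (groups) of 30 plants each, with a true
# between-row SD of 0.5 and a within-row (plant-to-plant) SD of 1.0.
k, n = 9, 30
row_effect = rng.normal(0.0, 0.5, size=(k, 1))
y = row_effect + rng.normal(0.0, 1.0, size=(k, n))

# One-way ANOVA mean squares, then method-of-moments variance components.
grand = y.mean()
msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (k - 1)
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
var_within = msw                            # plant-to-plant component
var_between = max((msb - msw) / n, 0.0)     # row component

# Proportion of total variance that is plant-to-plant.
share_within = var_within / (var_within + var_between)
print(round(share_within, 2))
```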

  8. Partitioning of the variance in the growth parameters of Erwinia carotovora on vegetable products.

    Science.gov (United States)

    Shorten, P R; Membré, J-M; Pleasants, A B; Kubaczka, M; Soboleva, T K

    2004-06-01

    The objective of this paper was to estimate and partition the variability in the microbial growth model parameters describing the growth of Erwinia carotovora on pasteurised and non-pasteurised vegetable juice from laboratory experiments performed under different temperature-varying conditions. We partitioned the model parameter variance and covariance components into effects due to temperature profile and replicate using a maximum likelihood technique. Temperature profile and replicate were treated as random effects and the food substrate was treated as a fixed effect. The replicate variance component was small indicating a high level of control in this experiment. Our analysis of the combined E. carotovora growth data sets used the Baranyi primary microbial growth model along with the Ratkowsky secondary growth model. The variability in the microbial growth parameters estimated from these microbial growth experiments is essential for predicting the mean and variance through time of the E. carotovora population size in a product supply chain and is the basis for microbiological risk assessment and food product shelf-life estimation. The variance partitioning made here also assists in the management of optimal product distribution networks by identifying elements of the supply chain contributing most to product variability. Copyright 2003 Elsevier B.V.

  9. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
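Weight windows themselves are elaborate, but the variance reduction idea they implement can be sketched with plain importance sampling on a toy deep-penetration problem (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy deep-penetration problem: probability that an exponential free path
# (mean free path 1) exceeds a shield 10 mean free paths thick.
# Exact answer: exp(-10) ~ 4.54e-5.
n, depth = 1_000_000, 10.0

# Analogue sampling: almost no history scores, so the estimate is noisy.
analog = (rng.exponential(1.0, n) > depth).astype(float)

# Importance sampling: draw paths from a stretched exponential with mean
# `depth` and carry the likelihood ratio p(x)/q(x) as a statistical weight.
x = rng.exponential(depth, n)
weight = depth * np.exp(-x * (1.0 - 1.0 / depth))
scored = weight * (x > depth)

print(analog.mean(), scored.mean())             # both estimate exp(-10)
print(analog.var(ddof=1) / scored.var(ddof=1))  # variance reduction factor
```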

  10. Cosmic variance and the measurement of the local Hubble parameter.

    Science.gov (United States)

    Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel

    2013-06-14

    There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer this tension--strengthened by the recent Planck results--is partially relieved and the concordance of the Standard Model of cosmology increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than that taken into account already. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary.

  11. Neuroticism explains unwanted variance in Implicit Association Tests of personality: possible evidence for an affective valence confound.

    Science.gov (United States)

    Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja

    2013-01-01

    Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.

  12. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    Monika eFleischhauer

    2013-09-01

Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT’s stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.

  13. Isolating Trait and Method Variance in the Measurement of Callous and Unemotional Traits.

    Science.gov (United States)

    Paiva-Salisbury, Melissa L; Gill, Andrew D; Stickle, Timothy R

    2017-09-01

    To examine hypothesized influence of method variance from negatively keyed items in measurement of callous-unemotional (CU) traits, nine a priori confirmatory factor analysis model comparisons of the Inventory of Callous-Unemotional Traits were evaluated on multiple fit indices and theoretical coherence. Tested models included a unidimensional model, a three-factor model, a three-bifactor model, an item response theory-shortened model, two item-parceled models, and three correlated trait-correlated method minus one models (unidimensional, correlated three-factor, and bifactor). Data were self-reports of 234 adolescents (191 juvenile offenders, 43 high school students; 63% male; ages 11-17 years). Consistent with hypotheses, models accounting for method variance substantially improved fit to the data. Additionally, bifactor models with a general CU factor better fit the data compared with correlated factor models, suggesting a general CU factor is important to understanding the construct of CU traits. Future Inventory of Callous-Unemotional Traits analyses should account for method variance from item keying and response bias to isolate trait variance.

  14. The effect of some estimators of between-study variance on random ...

    African Journals Online (AJOL)

    analysis based on REML yielded the most accurate coverage probability for treatment effect when treatment effects are highly heterogeneous. Keywords: Meta-analysis; random-effects model; between-study variance; and coverage probability ...

  15. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016

  17. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
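For uncorrelated samples, the variance of a time-averaged discharge reduces to the familiar σ²/N scaling with exposure time, which is the limiting case the record above identifies as appropriate. A minimal Python sketch with hypothetical discharge figures (not the paper's data):

```python
import numpy as np

# Hypothetical "instantaneous" discharge statistics (m^3/s): a steady mean
# with turbulent fluctuations assumed uncorrelated at the sampling interval.
mean_q, sigma_q, dt = 250.0, 20.0, 1.0   # mean, std dev, sampling interval (s)

def variance_of_mean(sigma, exposure_time, dt):
    """Variance of the time-averaged discharge for uncorrelated samples:
    var = sigma^2 / N, where N = exposure_time / dt is the sample count."""
    n = exposure_time / dt
    return sigma**2 / n

# Longer exposure times shrink the random error of the measured discharge.
for t_exp in (30, 120, 480):
    se = np.sqrt(variance_of_mean(sigma_q, t_exp, dt))
    print(f"exposure {t_exp:4d} s -> standard error {se:5.2f} m^3/s")
```

This is the basis for choosing an exposure time that meets a target discharge uncertainty.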

  18. Combining abilities and components of variance for ear height in silage maize

    Directory of Open Access Journals (Sweden)

    Sečanski Mile

    2007-01-01

Full Text Available The aim of this study was to evaluate the following parameters for the ear height of silage maize: variability of inbred lines and their diallel hybrids, superior-parent heterosis, components of genetic variability, heritability and combining ability on the basis of a diallel set. The two-year four-replicate trial was set up according to the randomized block design in the location of Zemun Polje. The analysis of components of genetic variance for ear height indicates that the additive component (D) was lower than the dominant components (H1 and H2) of genetic variance, while the frequency of dominant (u) and recessive (v) genes for this observed trait shows that dominant genes prevailed. The results of the Vr/Wr regression analysis point to superdominance in ear height inheritance. The analysis of variance of combining abilities shows that there were highly significant positive values of GCA and SCA for ear height in both years of investigation. Non-additive gene effects played an important role in the inheritance of this trait, as illustrated by the GCA to SCA ratio < 1.

  19. Genetically controlled environmental variance for sternopleural bristles in Drosophila melanogaster - an experimental test of a heterogeneous variance model

    DEFF Research Database (Denmark)

    Sørensen, Anders Christian; Kristensen, Torsten Nygård; Loeschcke, Volker

    2007-01-01

    The objective of this study was to test the hypothesis that the environmental variance of sternopleural bristle number in Drosophila melanogaster is partly under genetic control. We used data from 20 inbred lines and 10 control lines to test this hypothesis. Two models were used: a standard quant...... as genes affecting the environmental variance may be important for adaptation to changing environmental conditions...

  20. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, assuming that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing under discrete monitoring of realized variance. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that can potentially serve as a hedge for the variance swaps traded in BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in only a few exercise prices, as is the case of FX options traded in BM&F. To this end, portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
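The replication approach cited above (DEMETERFI et al., 1999) weights a strip of out-of-the-money options by 1/K². A simplified sketch, assuming zero rates and Black-Scholes prices with a flat volatility, so the replicated strike should recover σ² (illustrative parameters only):

```python
import numpy as np
from math import erf, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_price(S, K, sigma, T, call=True):
    """Black-Scholes price with zero rates and dividends (illustrative)."""
    d1 = (log(S / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if call:
        return S * norm_cdf(d1) - K * norm_cdf(d2)
    return K * norm_cdf(-d2) - S * norm_cdf(-d1)

def fair_variance_strike(S0, sigma, T, strikes):
    """Discretized 1/K^2-weighted strip of OTM options (zero rates)."""
    dK = strikes[1] - strikes[0]          # uniform strike spacing assumed
    total = sum(dK / K**2 * bs_price(S0, K, sigma, T, call=(K > S0))
                for K in strikes)
    return 2.0 / T * total

S0, sigma, T = 100.0, 0.20, 0.5
strikes = np.arange(10.0, 1000.0, 1.0)    # a dense strike grid
kvar = fair_variance_strike(S0, sigma, T, strikes)
print(f"replicated variance strike: {kvar:.4f} (flat-vol check: {sigma**2:.4f})")
```

Truncating the grid to the few traded strikes, as in the paper's BM&F setting, is exactly what degrades the hedge.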

  1. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
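The importance-weighting idea described in this record can be illustrated with a one-dimensional toy problem: sampling proportional to the integrand makes every weighted score identical, driving the estimator variance to zero (illustrative integrand, not a transport tally):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

f = lambda x: 3.0 * x**2          # integrand; true integral over [0, 1] is 1

# Plain Monte Carlo: x ~ U(0, 1), score f(x).
x_plain = rng.random(n)
est_plain = f(x_plain)

# "Zero-variance" importance sampling: draw x with density p(x) = 3x^2
# (inverse CDF: x = U^(1/3)), so the weighted score f(x)/p(x) is constant.
x_zv = rng.random(n) ** (1.0 / 3.0)
est_zv = f(x_zv) / (3.0 * x_zv**2)

print(f"plain MC : mean={est_plain.mean():.4f}, var={est_plain.var():.4f}")
print(f"zero-var : mean={est_zv.mean():.4f}, var={est_zv.var():.4e}")
```

As the record notes, the catch is that a different tally has a different importance function, so one zero-variance biasing cannot serve two imperfectly correlated tallies.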

  2. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics

    Science.gov (United States)

    Shinzato, Takashi

    2016-12-01

The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotic behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.

  3. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
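The mean-variance coupling that motivates this pre-processing is easy to reproduce: subtracting the trend from a Poisson signal removes it from the mean but not from the variance. A small numpy sketch with a hypothetical rising count rate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonstationary Poisson-like photonic signal: rising mean rate (the "trend").
t = np.arange(10_000)
trend = 5.0 + 20.0 * t / t.size            # lambda(t), counts per bin
counts = rng.poisson(trend)

# Naive detrending: subtract the trend. The mean is now ~0, but because
# variance equals mean for a Poisson process, the trend survives in the
# variance and can bias Fano factor or Hurst exponent analyses.
detrended = counts - trend
lo, hi = detrended[: t.size // 2], detrended[t.size // 2 :]
print(f"variance, first half : {lo.var():.2f}")   # tracks the lower rate
print(f"variance, second half: {hi.var():.2f}")   # tracks the higher rate
```

A transform designed for Poisson statistics, as proposed in the record, must equalize the variance as well as the mean.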

  4. Numerical Inversion with Full Estimation of Variance-Covariance Matrix

    Science.gov (United States)

    Saltogianni, Vasso; Stiros, Stathis

    2016-04-01

    -point, stochastic optimal solutions are computed as the center of gravity of these sets. A full Variance-Covariance Matrix (VCM) of each solution can be directly computed as second statistical moment. The overall method and the software have been tested with synthetic data (accuracy-oriented approach) in the modeling of magma chambers in the Santorini volcano and the modeling of double-fault earthquakes, i.e. to inversion problems with up to 18 unknowns.

  5. Mean Variance Vulnerability

    OpenAIRE

    Thomas Eichner

    2008-01-01

    This paper transfers the concept of Gollier and Pratt's (Gollier, C., J. W. Pratt. 1996. Risk vulnerability and the tempering effect of background risk. Econometrica 64 1109-1123) risk vulnerability into mean variance preferences. Risk vulnerability is shown to be equivalent to the slope of the mean variance indifference curve being decreasing in mean and increasing in variance. Next, we introduce the notion of mean variance vulnerability to link the concepts of decreasing absolute risk avers...

  6. Reduction of treatment delivery variances with a computer-controlled treatment delivery system

    International Nuclear Information System (INIS)

    Fraass, B.A.; Lash, K.L.; Matrone, G.M.; Lichter, A.S.

    1997-01-01

Purpose: To analyze treatment delivery variances for 3-D conformal therapy performed at various levels of treatment delivery automation, ranging from manual field setup to virtually complete computer-controlled treatment delivery using a computer-controlled conformal radiotherapy system. Materials and Methods: All external beam treatments performed in our department during six months of 1996 were analyzed to study treatment delivery variances versus treatment complexity. Treatments for 505 patients (40,641 individual treatment ports) on four treatment machines were studied. All treatment variances noted by treatment therapists or quality assurance reviews (39 in all) were analyzed. Machines 'M1' (Clinac 6/100) and 'M2' (Clinac 1800) were operated in a standard manual setup mode, with no record and verify system (R/V). Machines 'M3' (Clinac 2100CD/MLC) and 'M4' (MM50 racetrack microtron system with MLC) treated patients under the control of a computer-controlled conformal radiotherapy system (CCRS) which 1) downloads the treatment delivery plan from the planning system, 2) performs some (or all) of the machine set-up and treatment delivery for each field, 3) monitors treatment delivery, 4) records all treatment parameters, and 5) notes exceptions to the electronically-prescribed plan. Complete external computer control is not available on M3, so it uses as many CCRS features as possible, while M4 operates completely under CCRS control and performs semi-automated and automated multi-segment intensity modulated treatments. Analysis of treatment complexity was based on numbers of fields, individual segments (ports), non-axial and non-coplanar plans, multi-segment intensity modulation, and pseudo-isocentric treatments (and other plans with computer-controlled table motions). Treatment delivery time was obtained from the computerized scheduling system (for manual treatments) or from CCRS system logs.
Treatment therapists rotate among the machines, so this analysis

  7. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  8. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Directory of Open Access Journals (Sweden)

    Daniel Bartz

Full Text Available Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  9. Measurement of Allan variance and phase noise at fractions of a millihertz

    Science.gov (United States)

    Conroy, Bruce L.; Le, Duc

    1990-01-01

    Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.
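The step-by-step degradation measurements described here rest on the standard two-sample Allan variance; a minimal numpy implementation (non-overlapping estimator, with a white-frequency-noise sanity check):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging factor m: 0.5 * <(ybar_{k+1} - ybar_k)^2>."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)   # m-sample averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(7)
white = rng.normal(0.0, 1.0, 100_000)              # white frequency noise

# For white frequency noise, AVAR(m) ~ sigma^2 / m (slope -1 on a log plot).
for m in (1, 10, 100):
    print(f"m={m:4d}  AVAR={allan_variance(white, m):.5f}")
```

Computing this before and after each stage of a transmitter chain is one way to localize where the Allan variance degrades.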

  10. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...... of the implied volatility of variance smile -- some clearly at odds with the upward-sloping volatility skew observed in variance markets....

  11. Experimental metastasis: a novel application of the variance-to-mean power function.

    Science.gov (United States)

    Kendal, W S; Frost, P

    1987-11-01

An empiric power function relationship between a population's mean density (m) and its corresponding variance (v), written v = a·m^b (a, b constants), may be applied to the analysis of experimentally induced pulmonary metastases within syngeneic (C57BL/6 X C3H)F1 mice. The mean and variance of the numbers of resultant B16 F1 and B16 F10 melanoma metastases strongly correlated with the power function (r² > 0.8). The exponent b was 1.4 ± 0.1 and 1.6 ± 0.2 for the F1 and F10 melanomas, respectively, indicating a clustering of metastases within certain mice. This clustering of metastases within more highly affected animals may reflect a diffusion-limited aggregation of tumor cells within the circulation and the resultant greater ability of these aggregates to form metastases.
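The power function v = a·m^b is linear in log-log coordinates, so a and b can be fitted by ordinary least squares; a short sketch with illustrative mean/variance pairs (not the paper's data):

```python
import numpy as np

# Hypothetical mean/variance pairs for metastasis counts across groups of
# mice (illustrative numbers only).
means = np.array([2.0, 5.0, 11.0, 24.0, 60.0])
variances = np.array([3.1, 9.8, 31.0, 95.0, 340.0])

# Taylor's power law v = a * m^b is linear on log-log axes:
# log v = log a + b * log m, so fit a first-degree polynomial.
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
a = np.exp(log_a)
print(f"fitted v = {a:.2f} * m^{b:.2f}")  # b > 1 suggests clustering
```

An exponent b of 1 corresponds to Poisson dispersion; values above 1, as in the record, indicate over-dispersion across animals.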

  12. A Study of the Allan Variance for Constant-Mean Nonstationary Processes

    Science.gov (United States)

    Xu, Haotian; Guerrier, Stephane; Molinari, Roberto; Zhang, Yuming

    2017-08-01

The Allan Variance (AV) is a widely used quantity in areas focusing on error measurement as well as in the general analysis of variance for autocorrelated processes in domains such as engineering and, more specifically, metrology. The form of this quantity is widely used to detect noise patterns and indications of stability within signals. However, the properties of this quantity are not known for commonly occurring processes whose covariance structure is non-stationary and, in these cases, an erroneous interpretation of the AV could lead to misleading conclusions. This paper generalizes the theoretical form of the AV to some non-stationary processes while at the same time being valid also for weakly stationary processes. Some simulation examples show how this new form can help to understand the non-stationary processes that the AV is able to distinguish from the stationary cases, and hence allow for a better interpretation of this quantity in applied cases.

  13. Temporal variance reverses the impact of high mean intensity of stress in climate change experiments.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena

    2006-10-01

    Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.

  14. Determinations of dose mean of specific energy for conventional x-rays by variance-measurements

    International Nuclear Information System (INIS)

    Forsberg, B.; Jensen, M.; Lindborg, L.; Samuelson, G.

    1978-05-01

The dose mean value (zeta) of specific energy of a single event distribution is related to the variance of a multiple event distribution in a simple way. It is thus possible to determine zeta from measurements at high dose rates through observations of the variations in the ionization current from, for instance, an ionization chamber, if other parameters contribute negligibly to the total variance. With this method it has earlier been possible to obtain results down to about 10 nm in a beam of Co60-γ rays, which is one order of magnitude smaller than the sizes obtainable with the traditional technique. This advantage, together with the suggestion that zeta could be an important parameter in radiobiology, motivates further studies of applications of the technique. So far, only data from measurements in beams of a radioactive nuclide have been reported. This paper contains results from measurements in a highly stabilized X-ray beam. The preliminary analysis shows that the variance technique has given reasonable results for object sizes in the region of 0.08 μm to 20 μm (100 kV, 1.6 Al, HVL 0.14 mm Cu). The results were obtained with a proportional counter, except for the larger object sizes, where an ionization chamber was used. The measurements were performed at dose rates between 1 Gy/h and 40 Gy/h. (author)

  15. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  16. Variance of foot biomechanical parameters across age groups for the elderly people in Romania

    Science.gov (United States)

    Deselnicu, D. C.; Vasilescu, A. M.; Militaru, G.

    2017-10-01

The paper presents the results of a fieldwork study conducted in order to analyze major causal factors that influence the foot deformities and pathologies of elderly women in Romania. The study has an exploratory and descriptive nature and uses quantitative methodology. The sample consisted of 100 elderly women from Romania, ranging from 55 to over 75 years of age. The collected data were analyzed on multiple dimensions using a statistical analysis software program. The analysis of variance demonstrated significant differences across age groups in terms of several biomechanical parameters, such as travel speed, toe-off phase and support phase, in the case of elderly women.

  17. Using adapted budget cost variance techniques to measure the impact of Lean – based on empirical findings in Lean case studies

    DEFF Research Database (Denmark)

    Kristensen, Thomas Borup

    2015-01-01

Lean is a dominant management philosophy, but the management accounting techniques that best support it are still not fully understood, especially how Lean fits traditional budget variance analysis, a main theme of every management accounting textbook. I have studied three Scandinavian...... excellent Lean performing companies and their development of budget variance analysis techniques. Based on these empirical findings, techniques are presented to calculate costs and cost variances in the Lean companies. First of all, a cost variance is developed to calculate the Lean cost benefits within...... the budget period by using master budget standards and updated standards. The variance between them represents systematic Lean cost improvements. Secondly, an additional cost variance calculation technique is introduced to assess improved and systematic cost variances across multiple budget periods...
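The within-period decomposition described here (master budget standard vs. updated standard vs. actual) can be sketched with hypothetical figures; splitting the total into a systematic Lean improvement and a residual operating variance is one plausible reading of the technique, not the paper's exact formulas:

```python
# Illustrative sketch (hypothetical per-unit costs and volume).
actual_units = 1_000
master_std_cost = 52.0      # per-unit cost assumed in the master budget
updated_std_cost = 48.5     # per-unit standard after Lean improvements
actual_cost_per_unit = 49.2

# Master vs. updated standard: the systematic Lean cost improvement.
lean_improvement = (master_std_cost - updated_std_cost) * actual_units
# Updated standard vs. actual: the residual operating variance.
operating_variance = (updated_std_cost - actual_cost_per_unit) * actual_units
# The two components sum to the traditional total cost variance.
total_variance = (master_std_cost - actual_cost_per_unit) * actual_units

print(f"systematic Lean improvement: {lean_improvement:,.0f}")
print(f"residual operating variance: {operating_variance:,.0f}")
assert abs(lean_improvement + operating_variance - total_variance) < 1e-6
```

Carrying the updated standards forward as the next period's master standards is what makes the improvement visible across multiple budget periods.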

  18. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    Yu, Zuwei

    2003-01-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets
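The full model is a MIP, but its mean-variance core can be illustrated with the closed-form minimum-variance portfolio over correlated markets (hypothetical numbers; transaction costs, wheeling administration and integer constraints omitted):

```python
import numpy as np

# Illustrative price-risk data for three geographically separated
# electricity markets (hypothetical standard deviations and correlations).
stds = np.array([0.30, 0.25, 0.40])
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
cov = np.outer(stds, stds) * corr        # covariance matrix

# Closed-form minimum-variance weights: w = Cov^-1 1 / (1' Cov^-1 1)
ones = np.ones(len(stds))
w = np.linalg.solve(cov, ones)
w /= w.sum()
port_var = w @ cov @ w
print("weights:", np.round(w, 3), " portfolio std:", round(np.sqrt(port_var), 4))
```

Adding the paper's fixed transaction costs and lot-size constraints turns this smooth problem into the MIP whose efficient frontier is neither smooth nor concave.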

  19. A spatial mean-variance MIP model for energy market risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zuwei Yu [Purdue University, West Lafayette, IN (United States). Indiana State Utility Forecasting Group and School of Industrial Engineering

    2003-05-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets. (author)

  2. The Multi-allelic Genetic Architecture of a Variance-Heterogeneity Locus for Molybdenum Concentration in Leaves Acts as a Source of Unexplained Additive Genetic Variance.

    Directory of Open Access Journals (Sweden)

    Simon K G Forsberg

    2015-11-01

Full Text Available Genome-wide association (GWA) analyses have generally been used to detect individual loci contributing to the phenotypic diversity in a population by the effects of these loci on the trait mean. More rarely, loci have also been detected based on variance differences between genotypes. Several hypotheses have been proposed to explain the possible genetic mechanisms leading to such variance signals. However, little is known about what causes these signals, or whether this genetic variance-heterogeneity reflects mechanisms of importance in natural populations. Previously, we identified a variance-heterogeneity GWA (vGWA) signal for leaf molybdenum concentrations in Arabidopsis thaliana. Here, fine-mapping of this association reveals that the vGWA emerges from the effects of three independent genetic polymorphisms that all are in strong LD with the markers displaying the genetic variance-heterogeneity. By revealing the genetic architecture underlying this vGWA signal, we uncovered the molecular source of a significant amount of hidden additive genetic variation or "missing heritability". Two of the three polymorphisms underlying the genetic variance-heterogeneity are promoter variants for Molybdate transporter 1 (MOT1), and the third a variant located ~25 kb downstream of this gene. A fourth independent association was also detected ~600 kb upstream of MOT1. Use of a T-DNA knockout allele highlights Copper Transporter 6; COPT6 (AT2G26975) as a strong candidate gene for this association. Our results show that an extended LD across a complex locus including multiple functional alleles can lead to a variance-heterogeneity between genotypes in natural populations. Further, they provide novel insights into the genetic regulation of ion homeostasis in A. thaliana, and empirically confirm that variance-heterogeneity based GWA methods are a valuable tool to detect novel associations of biological importance in natural populations.
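Variance-heterogeneity signals of the kind fine-mapped here are commonly screened with median-based dispersion tests; a self-contained Brown-Forsythe sketch on simulated genotype classes (illustrative data, not the study's):

```python
import numpy as np

def brown_forsythe(*groups):
    """Brown-Forsythe variance-heterogeneity test: one-way ANOVA F statistic
    computed on absolute deviations from each group's median."""
    z = [np.abs(g - np.median(g)) for g in groups]
    n = np.array([len(g) for g in z])
    means = np.array([g.mean() for g in z])
    grand = np.concatenate(z).mean()
    df_between, df_within = len(z) - 1, n.sum() - len(z)
    ss_between = (n * (means - grand) ** 2).sum()
    ss_within = sum(((g - m) ** 2).sum() for g, m in zip(z, means))
    return (ss_between / df_between) / (ss_within / df_within)

rng = np.random.default_rng(3)
# Two "genotype classes" with equal trait means but unequal variances,
# mimicking a vGWA signal (simulated, illustrative data).
aa = rng.normal(10.0, 1.0, 300)
bb = rng.normal(10.0, 2.5, 300)
print(f"Brown-Forsythe F = {brown_forsythe(aa, bb):.1f}")  # large F: vQTL-like
```

A mean-based GWA scan would see nothing here, since both classes share the same trait mean.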

  3. Increasing Genetic Variance of Body Mass Index during the Swedish Obesity Epidemic

    Science.gov (United States)

    Rokholm, Benjamin; Silventoinen, Karri; Tynelius, Per; Gamborg, Michael; Sørensen, Thorkild I. A.; Rasmussen, Finn

    2011-01-01

    Background and Objectives There is no doubt that the dramatic worldwide increase in obesity prevalence is due to changes in environmental factors. However, twin and family studies suggest that genetic differences are responsible for the major part of the variation in adiposity within populations. Recent studies show that the genetic effects on body mass index (BMI) may be stronger when combined with presumed risk factors for obesity. We tested the hypothesis that the genetic variance of BMI has increased during the obesity epidemic. Methods The data comprised height and weight measurements of 1,474,065 Swedish conscripts at age 18–19 y born between 1951 and 1983. The data were linked to the Swedish Multi-Generation Register and the Swedish Twin Register from which 264,796 full-brother pairs, 1,736 monozygotic (MZ) and 1,961 dizygotic (DZ) twin pairs were identified. The twin pairs were analysed to identify the most parsimonious model for the genetic and environmental contribution to BMI variance. The full-brother pairs were subsequently divided into subgroups by year of birth to investigate trends in the genetic variance of BMI. Results The twin analysis showed that BMI variation could be explained by additive genetic and environmental factors not shared by co-twins. On the basis of the analyses of the full-siblings, the additive genetic variance of BMI increased from 4.3 [95% CI 4.04–4.53] to 7.9 [95% CI 7.28–8.54] within the study period, as did the unique environmental variance, which increased from 1.4 [95% CI 1.32–1.48] to 2.0 [95% CI 1.89–2.22]. The BMI heritability increased from 75% to 78.8%. Conclusion The results confirm the hypothesis that the additive genetic variance of BMI has increased strongly during the obesity epidemic. This suggests that the obesogenic environment has enhanced the influence of adiposity related genes. PMID:22087252
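
The study above estimates the genetic variance of BMI by fitting formal twin and sibling models. As a rough, hedged illustration of the underlying idea only (not the model actually fitted in the paper), Falconer's classical formula approximates heritability from monozygotic and dizygotic twin correlations; the correlation values below are invented for the example:

```python
def falconer(r_mz, r_dz):
    """Falconer's approximation: heritability h2 = 2*(r_MZ - r_DZ),
    shared environment c2 = 2*r_DZ - r_MZ, unique environment
    e2 = 1 - r_MZ. A back-of-envelope alternative to the formal
    variance-component models used in the study above."""
    h2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return h2, c2, e2

# Hypothetical twin correlations, not taken from the study.
h2, c2, e2 = falconer(0.75, 0.40)   # h2 = 0.70, c2 = 0.05, e2 = 0.25
```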

  4. Increasing genetic variance of body mass index during the Swedish obesity epidemic.

    Directory of Open Access Journals (Sweden)

    Benjamin Rokholm

    Full Text Available BACKGROUND AND OBJECTIVES: There is no doubt that the dramatic worldwide increase in obesity prevalence is due to changes in environmental factors. However, twin and family studies suggest that genetic differences are responsible for the major part of the variation in adiposity within populations. Recent studies show that the genetic effects on body mass index (BMI) may be stronger when combined with presumed risk factors for obesity. We tested the hypothesis that the genetic variance of BMI has increased during the obesity epidemic. METHODS: The data comprised height and weight measurements of 1,474,065 Swedish conscripts at age 18-19 y born between 1951 and 1983. The data were linked to the Swedish Multi-Generation Register and the Swedish Twin Register from which 264,796 full-brother pairs, 1,736 monozygotic (MZ) and 1,961 dizygotic (DZ) twin pairs were identified. The twin pairs were analysed to identify the most parsimonious model for the genetic and environmental contribution to BMI variance. The full-brother pairs were subsequently divided into subgroups by year of birth to investigate trends in the genetic variance of BMI. RESULTS: The twin analysis showed that BMI variation could be explained by additive genetic and environmental factors not shared by co-twins. On the basis of the analyses of the full-siblings, the additive genetic variance of BMI increased from 4.3 [95% CI 4.04-4.53] to 7.9 [95% CI 7.28-8.54] within the study period, as did the unique environmental variance, which increased from 1.4 [95% CI 1.32-1.48] to 2.0 [95% CI 1.89-2.22]. The BMI heritability increased from 75% to 78.8%. CONCLUSION: The results confirm the hypothesis that the additive genetic variance of BMI has increased strongly during the obesity epidemic. This suggests that the obesogenic environment has enhanced the influence of adiposity related genes.

  5. Speckle variance optical coherence tomography of blood flow in the beating mouse embryonic heart.

    Science.gov (United States)

    Grishina, Olga A; Wang, Shang; Larina, Irina V

    2017-05-01

    Efficient separation of blood and cardiac wall in the beating embryonic heart is essential and critical for experiment-based computational modelling and analysis of early-stage cardiac biomechanics. Although speckle variance optical coherence tomography (SV-OCT) relying on calculation of intensity variance over consecutively acquired frames is a powerful approach for segmentation of fluid flow from static tissue, application of this method in the beating embryonic heart remains challenging because moving structures generate SV signal indistinguishable from the blood. Here, we demonstrate a modified four-dimensional SV-OCT approach that effectively separates the blood flow from the dynamic heart wall in the beating mouse embryonic heart. The method takes advantage of the periodic motion of the cardiac wall and is based on calculation of the SV signal over the frames corresponding to the same phase of the heartbeat cycle. Through comparison with Doppler OCT imaging, we validate this speckle-based approach and show advantages in its insensitivity to flow direction and velocity, as well as its reduced influence from heart wall movement. This approach has potential in a variety of applications relying on visualization and segmentation of blood flow in periodically moving structures, such as mechanical simulation studies and finite element modelling. Picture: Four-dimensional speckle variance OCT imaging shows the blood flow inside the beating heart of an E8.5 mouse embryo. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
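
The phase-gating idea described above, computing intensity variance only over frames that share the same heartbeat phase, can be sketched in a few lines. This is a minimal illustration with synthetic frames and invented names, not the authors' implementation:

```python
import numpy as np

def phase_gated_sv(frames, phase_labels, n_phases):
    """Speckle-variance image per cardiac phase: the variance of each
    pixel's intensity across only those frames assigned to the same
    heartbeat phase, so periodic wall motion does not inflate the
    variance the way it would over consecutive frames."""
    frames = np.asarray(frames, dtype=float)        # (n_frames, H, W)
    sv = np.empty((n_phases,) + frames.shape[1:])
    for p in range(n_phases):
        sv[p] = frames[phase_labels == p].var(axis=0)
    return sv

# Synthetic stack: 40 frames of an 8x8 image, cycling through 4 phases.
rng = np.random.default_rng(0)
frames = rng.normal(size=(40, 8, 8))
labels = np.tile(np.arange(4), 10)
sv = phase_gated_sv(frames, labels, 4)
```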

  6. Variance in population firing rate as a measure of slow time-scale correlation

    Directory of Open Access Journals (Sweden)

    Adam C. Snyder

    2013-12-01

    Full Text Available Correlated variability in the spiking responses of pairs of neurons, also known as spike count correlation, is a key indicator of functional connectivity and a critical factor in population coding. Underscoring the importance of correlation as a measure for cognitive neuroscience research is the observation that spike count correlations are not fixed, but are rather modulated by perceptual and cognitive context. Yet while this context fluctuates from moment to moment, correlation must be calculated over multiple trials. This property undermines its utility as a dependent measure for investigations of cognitive processes which fluctuate on a trial-to-trial basis, such as selective attention. A measure of functional connectivity that can be assayed on a moment-to-moment basis is needed to investigate the single-trial dynamics of populations of spiking neurons. Here, we introduce the measure of population variance in normalized firing rate for this goal. We show using mathematical analysis, computer simulations and in vivo data how population variance in normalized firing rate is inversely related to the latent correlation in the population, and how this measure can be used to reliably classify trials from different typical correlation conditions, even when firing rate is held constant. We discuss the potential advantages for using population variance in normalized firing rate as a dependent measure for both basic and applied neuroscience research.
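
A minimal sketch of the proposed measure, assuming spike counts arranged as trials × neurons (the normalization and the simulation below are illustrative, not the authors' exact procedure). The inverse relation to latent correlation shows up directly: a population driven by a shared signal has lower across-neuron variance on each trial:

```python
import numpy as np

def population_rate_variance(counts):
    """counts: (n_trials, n_neurons) spike counts. Z-score each neuron
    across trials, then take the variance across the population
    separately on every trial -- one value per trial."""
    z = (counts - counts.mean(axis=0)) / counts.std(axis=0)
    return z.var(axis=1)

rng = np.random.default_rng(1)
shared = rng.normal(size=(500, 1))
corr = shared + 0.5 * rng.normal(size=(500, 50))   # correlated neurons
indep = rng.normal(size=(500, 50))                 # independent neurons
v_corr = population_rate_variance(corr).mean()
v_indep = population_rate_variance(indep).mean()   # v_corr < v_indep
```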

  7. A Historical Perspective on the Development of the Allan Variances and Their Strengths and Weaknesses.

    Science.gov (United States)

    Allan, David W; Levine, Judah

    2016-04-01

    Over the past 50 years, variances have been developed for characterizing the instabilities of precision clocks and oscillators. These instabilities are often modeled as nonstationary processes, and the variances have been shown to be well-behaved and to be unbiased, efficient descriptors of these types of processes. This paper presents a historical overview of the development of these variances. The time-domain and frequency-domain formulations are presented and their development is described. The strengths and weaknesses of these characterization metrics are discussed. These variances are also shown to be useful in other applications, such as in telecommunication.
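
For reference, the simplest member of this family of variances, the original non-overlapping Allan variance, is straightforward to compute. The sketch below assumes fractional-frequency data and is illustrative only:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of frequency data y at averaging
    factor m: half the mean squared difference between successive
    m-sample averages."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# For white frequency noise, AVAR falls as 1/m (slope -1 on log-log),
# one of the signatures used to identify noise types.
rng = np.random.default_rng(0)
white = rng.normal(size=200_000)
avar_1 = allan_variance(white, 1)    # close to 1.0
avar_10 = allan_variance(white, 10)  # close to 0.1
```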

  8. Heritable Micro-environmental Variance Covaries with Fitness in an Outbred Population of Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; McGuigan, Katrina; Blows, Mark W

    2017-08-01

    The genetic basis of stochastic variation within a defined environment, and the consequences of such micro-environmental variance for fitness, are poorly understood. Using a multigenerational breeding design in Drosophila serrata, we demonstrated that the micro-environmental variance in a set of morphological wing traits in a randomly mating population had significant additive genetic variance in most single wing traits. Although heritability was generally low, micro-environmental variance is an evolvable trait. Multivariate analyses demonstrated that the micro-environmental variance in wings was genetically correlated among single traits, indicating that common mechanisms of environmental buffering exist for this functionally related set of traits. In addition, through the dominance genetic covariance between the major axes of micro-environmental variance and fitness, we demonstrated that micro-environmental variance shares a genetic basis with fitness, and that the pattern of selection is suggestive of variance-reducing selection acting on micro-environmental variance. Copyright © 2017 by the Genetics Society of America.

  9. Variance component estimation of a female fertility trait in two ...

    African Journals Online (AJOL)

    USER

    of animals for possible use in a Southern African National analysis. Materials and Methods. Field data was obtained from the Integrated Registration and Genetic Information System. (INTERGIS) of South Africa for purebred Afrikaner, Drakensberger, SA Angus and Simmentaler beef cattle breeds for the period 1976 to 1998.

  10. Effect of natural inbreeding on variance structure in tests of wind pollination Douglas-fir progenies.

    Science.gov (United States)

    Frank C. Sorensen; T.L. White

    1988-01-01

    Studies of the mating habits of Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) have shown that wind-pollination families contain a small proportion of very slow-growing natural inbreds. The effect of these very small trees on means, variances, and variance ratios was evaluated for height and diameter in a 16-year-old plantation by...

  11. Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel

    Science.gov (United States)

    Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying

    2017-01-01

    In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…
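
The simulation idea is easy to reproduce outside Excel. The sketch below (Python rather than the article's VBA) shows why the sum of squared deviations should be divided by n-1: dividing by n systematically underestimates the true variance, while n-1 is unbiased on average:

```python
import numpy as np

# Draw many small samples from N(0, 2), so the true variance is 4.0,
# and compare the two candidate divisors for the sample variance.
rng = np.random.default_rng(42)
n, reps = 5, 200_000
samples = rng.normal(0.0, 2.0, size=(reps, n))
dev2 = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
biased = (dev2 / n).mean()          # averages to true_var*(n-1)/n = 3.2
unbiased = (dev2 / (n - 1)).mean()  # averages to the true variance 4.0
```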

  12. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    Science.gov (United States)

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, redressed as Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. Further, in this field, it demonstrated superior accuracy and sensitivity, better than most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview on their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.

  13. Prediction of breeding values and selection responses with genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2007-01-01

    There is empirical evidence that genotypes differ not only in mean, but also in environmental variance of the traits they affect. Genetic heterogeneity of environmental variance may indicate genetic differences in environmental sensitivity. The aim of this study was to develop a general framework

  14. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    Energy Technology Data Exchange (ETDEWEB)

    Song Ningfang; Yuan Rui; Jin Jing, E-mail: rayleing@139.com [School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191 (China)

    2011-09-15

    Satellite motion included in gyro output disturbs the estimation of Allan variance coefficients of a fiber optic gyro on board. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites require more autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet satellite autonomy, we present a new autonomous method for estimation of Allan variance coefficients including rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between angle increments of the star sensor and gyro to remove satellite motion from gyro output, and propose a state-space model using a nonlinear adaptive filter technique for quantities previously measured from offline data techniques such as the Allan variance method. Simulations show the method correctly estimates Allan variance coefficients, R = 2.7965×10⁻⁴ °/h², K = 1.1714×10⁻³ °/h^1.5, B = 1.3185×10⁻³ °/h, N = 5.982×10⁻⁴ °/h^0.5 and Q = 5.197×10⁻⁷ ° in real time, and tracks degradation of gyro performance from initial values, R = 0.651 °/h², K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10⁻⁵ °, to final estimations, R = 9.548 °/h², K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10⁻⁴ °, due to gamma radiation in space. The technique proposed here effectively isolates satellite motion, and requires no data storage or support from the ground.

  15. 29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Variance of the bond/escrow and sale-contract requirements... Statutory Requirements § 4204.11 Variance of the bond/escrow and sale-contract requirements. (a) General rule. A purchaser's bond or escrow under section 4204(a)(1)(B) of ERISA and the sale-contract provision...

  16. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
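
The weighted-average form described above can be written down in a few lines. This is a schematic of the hybrid-variance idea only; in practice the weight w would be derived from archived (observation-minus-forecast, ensemble-variance) pairs, and the names and values here are assumptions:

```python
def hybrid_variance(ens_var, clim_var, w):
    """Hybrid estimate of the true forecast error variance: a weighted
    average of the flow-dependent ensemble sample variance and the
    static climatological error variance, with 0 <= w <= 1."""
    return w * ens_var + (1.0 - w) * clim_var

# A small ensemble's sample variance is noisy, so it gets only partial
# weight against the climatological value.
est = hybrid_variance(ens_var=2.0, clim_var=1.0, w=0.25)  # 1.25
```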

  17. Thermal noise variance of a receive radiofrequency coil as a respiratory motion sensor

    NARCIS (Netherlands)

    Andreychenko, A.|info:eu-repo/dai/nl/341697672; Raaijmakers, A. J E|info:eu-repo/dai/nl/304819662; Sbrizzi, A.|info:eu-repo/dai/nl/341735868; Crijns, S. P M|info:eu-repo/dai/nl/341021296; Lagendijk, J. J W|info:eu-repo/dai/nl/07011868X; Luijten, P. R.|info:eu-repo/dai/nl/304821098; van den Berg, C. A T|info:eu-repo/dai/nl/304817422

    2017-01-01

    Purpose: Development of a passive respiratory motion sensor based on the noise variance of the receive coil array. Methods: Respiratory motion alters the body resistance. The noise variance of an RF coil depends on the body resistance and, thus, is also modulated by respiration. For the noise

  18. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    Science.gov (United States)

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  19. Yield response of winter wheat cultivars to environments modeled by different variance-covariance structures in linear mixed models

    Energy Technology Data Exchange (ETDEWEB)

    Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.

    2016-11-01

    The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means resulting in incorrect conclusions. In this work we focused on adaptive response of cultivars on the environments modeled by the LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations, during three growing seasons (2008/2009-2010/2011) from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars adaptive patterns modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars reaction on environments, and it can be successfully used in METs data after determining the optimal component number for each dataset. (Author)

  20. Yield response of winter wheat cultivars to environments modeled by different variance-covariance structures in linear mixed models

    Directory of Open Access Journals (Sweden)

    Marcin Studnicki

    2016-06-01

    Full Text Available The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means resulting in incorrect conclusions. In this work we focused on adaptive response of cultivars on the environments modeled by the LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations, during three growing seasons (2008/2009-2010/2011) from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars adaptive patterns modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars reaction on environments, and it can be successfully used in METs data after determining the optimal component number for each dataset.

  1. Accounting for non-stationary variance in geostatistical mapping of soil properties

    NARCIS (Netherlands)

    Wadoux, Alexandre M.J.C.; Brus, Dick J.; Heuvelink, Gerard B.M.

    2018-01-01

    Simple and ordinary kriging assume a constant mean and variance of the soil variable of interest. This assumption is often implausible because the mean and/or variance are linked to terrain attributes, parent material or other soil forming factors. In kriging with external drift (KED)

  2. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    , which results in a high operating cost. For this case, a two-stage extension of the mean-variance approach provides the best trade-off between the expected cost and its variance. It is demonstrated that by using a constraint back-off technique in the specific case study, certainty equivalence EMPC can...

  3. Variance Component Quantitative Trait Locus Analysis for Body Weight Traits in Purebred Korean Native Chicken

    Directory of Open Access Journals (Sweden)

    Muhammad Cahyadi

    2016-01-01

    Full Text Available A quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length for 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p value = 0.0001) and GGA4 for growth 6 to 8 weeks (LOD = 2.88, nominal p value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p value = 0.0007) and a suggestive QTL for 8 weeks (LOD = 1.96, nominal p value = 0.0027) were detected on GGA4; QTLs were also detected for two different body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving the body weight traits in native chicken breeds, especially for the Asian native chicken breeds.

  4. Realized (co)variances of eurozone sovereign yields during the crisis: The impact of news and the Securities Markets Programme

    NARCIS (Netherlands)

    Beetsma, R.M.W.J.; de Jong, Frank; Giuliodori, M.; Widijanto, D.

    We use realized variances and covariances based on intraday data to measure the dependence structure of eurozone sovereign yields. Our analysis focuses on the impact of news, obtained from the Eurointelligence newsflash, on the dependence structure. More news tends to raise the volatility of yields

  5. How to assess intra- and inter-observer agreement with quantitative PET using variance component analysis: a proposal for standardisation.

    Science.gov (United States)

    Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen; Halekoh, Ulrich; Høilund-Carlsen, Poul Flemming

    2016-09-21

    Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects deviation of groups of measurements from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or the Bland-Altman plot. We suggest variance component analysis (VCA) to estimate the influence of errors due to single elements of a PET scan (scanner, time point, observer, etc.) on the composite uncertainty of repeated measurements and to obtain relevant repeatability coefficients (RCs), which have a unique relation to Bland-Altman plots. Here, we present this approach for assessment of intra- and inter-observer variation with PET/CT exemplified with data from two clinical studies. In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed twice by the same observer to study intra-observer agreement. In study 2, 14 patients with glioma were scanned up to five times. The resulting 49 scans were assessed by three observers to examine inter-observer agreement. Outcome variables were SUVmax in study 1 and cerebral total hemispheric glycolysis (THG) in study 2. In study 1, we found a RC of 2.46, equalling half the width of the Bland-Altman limits of agreement. In study 2, the RC for identical conditions (same scanner, patient, time point, and observer) was 2392; allowing for different scanners increased the RC to 2543. Inter-observer differences were negligible compared to differences owing to other factors; between observers 1 and 2: -10 (95 % CI: -352 to 332) and between observers 1 and 3: 28 (95 % CI: -313 to 370). VCA is an appealing approach for weighing different sources of variation against each other, summarised as RCs. The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components.
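
For the simplest case, duplicate measurements by one observer as in study 1, the repeatability coefficient reduces to a short computation. A minimal sketch under that assumption (the paper itself uses mixed-effects VCA with several factors; the readings below are invented):

```python
import numpy as np

def repeatability_coefficient(pairs):
    """RC from duplicate measurements (one pair per subject).
    Within-subject variance is mean(d^2)/2 for the paired differences
    d, and RC = 1.96*sqrt(2)*s_w, which equals half the width of the
    Bland-Altman limits of agreement when the mean difference is 0."""
    d = np.diff(np.asarray(pairs, dtype=float), axis=1).ravel()
    s_w = np.sqrt(np.mean(d ** 2) / 2.0)
    return 1.96 * np.sqrt(2.0) * s_w

# Hypothetical duplicate SUVmax readings, one row per patient.
rc = repeatability_coefficient([[5.0, 5.4], [7.1, 6.8],
                                [4.2, 4.2], [6.0, 6.5]])
```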

  6. A more realistic estimate of the variances and systematic errors in spherical harmonic geomagnetic field models

    DEFF Research Database (Denmark)

    Lowes, F.J.; Olsen, Nils

    2004-01-01

    Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However, ..., led to quite inaccurate variance estimates. We estimate correction factors which range from 1/4 to 20, with the largest increases being for the zonal, m = 0, and sectorial, m = n, terms. With no correction, the OSVM variances give a mean-square vector field error of prediction over the Earth's surface......

  7. minimum variance estimation of yield parameters of rubber tree

    African Journals Online (AJOL)

    2013-03-01

    Mar 1, 2013 ... year. Kalman filter, a flexible statistical estimator, is used to combine the inexact prediction of the rubber production with an equally inexact rubber yield, tree ... tapping system measurements to obtain an optimal estimate of one year ahead rubber production. ...... tation management prevision gap of 55%.

  8. Analytical expression for variance of homogeneous-position quantum walk with decoherent position

    Science.gov (United States)

    Annabestani, Mostafa

    2018-02-01

    We have derived an analytical expression for the variance of a homogeneous-position decoherent quantum walk with a general form of noise on its position, and have shown that, while the quadratic (t^2) term of the variance never changes with position decoherency, the linear term (t) does, and always increases the variance. We study a walker with the ability to tunnel out to d nearest neighbors as an example and compare our result with former studies. We also show that, although our expression has been derived for the asymptotic case, the rapid decay of the time-dependent terms keeps the expression accurate even after dozens of steps.

  9. Identification of melanoma cells: a method based in mean variance of signatures via spectral densities.

    Science.gov (United States)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Angulo-Molina, Aracely

    2017-04-01

    In this paper a new methodology to detect and differentiate melanoma cells from normal cells through the averaged variances of 1D signatures calculated with a binary mask is presented. The sample images were obtained from histological sections of mice melanoma tumor of 4 [Formula: see text] in thickness and contrasted with normal cells. The results show that melanoma cells present a well-defined range of averaged variance values obtained from the signatures in the four conditions used.

  10. Variance components estimation for farrowing traits of three purebred pigs in Korea

    Directory of Open Access Journals (Sweden)

    Bryan Irvine Lopez

    2017-09-01

    Full Text Available Objective: This study was conducted to estimate breed-specific variance components for total number born (TNB), number born alive (NBA), and mortality rate from birth through weaning including stillbirths (MORT) of three main swine breeds in Korea. In addition, the importance of including maternal genetic and service sire effects in estimation models was evaluated. Methods: Records of farrowing traits from 6,412 Duroc, 18,020 Landrace, and 54,254 Yorkshire sows collected from January 2001 to September 2016 from different farms in Korea were used in the analysis. Animal models and the restricted maximum likelihood method were used to estimate animal genetic, permanent environmental, maternal genetic, service sire and residual variances. Results: The heritability estimates ranged from 0.072 to 0.102, 0.090 to 0.099, and 0.109 to 0.121 for TNB; 0.087 to 0.110, 0.088 to 0.100, and 0.099 to 0.107 for NBA; and 0.027 to 0.031, 0.050 to 0.053, and 0.073 to 0.081 for MORT in the Duroc, Landrace and Yorkshire breeds, respectively. The proportion of the total variation due to permanent environmental effects, maternal genetic effects, and service sire effects ranged from 0.042 to 0.088, 0.001 to 0.031, and 0.001 to 0.021, respectively. Spearman rank correlations among models ranged from 0.98 to 0.99, demonstrating that the maternal genetic and service sire effects have little effect on the precision of the breeding values. Conclusion: Models that include additive genetic and permanent environmental effects are suitable for farrowing traits in Duroc, Landrace, and Yorkshire populations in Korea. These breed-specific variance component estimates for litter traits can be utilized for pig improvement programs in Korea.
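    As a worked illustration of how such proportions are read off a fitted model, heritability is the additive genetic share of the total phenotypic variance. The component values below are hypothetical, not the study's estimates:

```python
# Hypothetical variance components for one farrowing trait (illustrative
# values only; see the study for the actual Duroc/Landrace/Yorkshire estimates).
var_additive = 0.45   # animal additive genetic
var_pe       = 0.30   # permanent environment of the sow
var_maternal = 0.05   # maternal genetic
var_service  = 0.05   # service sire
var_residual = 3.65   # residual

var_phenotypic = (var_additive + var_pe + var_maternal
                  + var_service + var_residual)

heritability  = var_additive / var_phenotypic   # h^2, e.g. ~0.10 here
pe_proportion = var_pe / var_phenotypic         # c^2, the "permanent environment" share
```

    Reported ranges like 0.072 to 0.102 for TNB are exactly this ratio computed per breed and per model.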

  11. Depressive status explains a significant amount of the variance in COPD assessment test (CAT) scores.

    Science.gov (United States)

    Miravitlles, Marc; Molina, Jesús; Quintano, José Antonio; Campuzano, Anna; Pérez, Joselín; Roncero, Carlos

    2018-01-01

    COPD assessment test (CAT) is a short, easy-to-complete health status tool that has been incorporated into the multidimensional assessment of COPD in order to guide therapy; therefore, it is important to understand the factors determining CAT scores. This is a post hoc analysis of a cross-sectional, observational study conducted in respiratory medicine departments and primary care centers in Spain with the aim of identifying the factors determining CAT scores, focusing particularly on the cognitive status measured by the Mini-Mental State Examination (MMSE) and levels of depression measured by the short Beck Depression Inventory (BDI). A total of 684 COPD patients were analyzed; 84.1% were men, the mean age of patients was 68.7 years, and the mean forced expiratory volume in 1 second (%) was 55.1%. Mean CAT score was 21.8. CAT scores correlated with the MMSE score (Pearson's coefficient r = -0.371) and the BDI (r = 0.620), both p<0.05. A model including clinical severity variables together with the MMSE and BDI scores was associated with CAT scores and explained 45% of the variability. However, a model including only MMSE and BDI scores explained up to 40% and BDI alone explained 38% of the CAT variance. CAT scores are associated with clinical variables of severity of COPD. However, cognitive status and, in particular, the level of depression explain a larger percentage of the variance in the CAT scores than the usual COPD clinical severity variables.

  12. Verification of the history-score moment equations for weight-window variance reduction

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Clell J [Los Alamos National Laboratory; Sood, Avneet [Los Alamos National Laboratory; Booth, Thomas E [Los Alamos National Laboratory; Shultis, J. Kenneth [KANSAS STATE UNIV.

    2010-12-06

    The history-score moment equations that describe the moments of a Monte Carlo score distribution have been extended to weight-window variance reduction. The resulting equations have been solved deterministically to calculate the population variance of the Monte Carlo score distribution for a single tally. Results for one- and two-dimensional one-group problems are presented that predict the population variances to less than 1% deviation from the Monte Carlo for one-dimensional problems and between 1-2% for two-dimensional problems.

  13. The influence of local spring temperature variance on temperature sensitivity of spring phenology.

    Science.gov (United States)

    Wang, Tao; Ottlé, Catherine; Peng, Shushi; Janssens, Ivan A; Lin, Xin; Poulter, Benjamin; Yue, Chao; Ciais, Philippe

    2014-05-01

    The impact of climate warming on the advancement of plant spring phenology has been heavily investigated over the last decade and there exists great variability among plants in their phenological sensitivity to temperature. However, few studies have explicitly linked phenological sensitivity to local climate variance. Here, we set out to test the hypothesis that the strength of phenological sensitivity declines with increased local spring temperature variance, by synthesizing results across ground observations. We assemble a ground-based long-term (20-50 years) spring phenology database (PEP725 database) and the corresponding climate dataset. We find a prevalent decline in the strength of phenological sensitivity with increasing local spring temperature variance at the species level from ground observations. It suggests that plants might be less likely to track climatic warming at locations with larger local spring temperature variance. This might be related to the possibility that frost risk could be higher under larger local spring temperature variance and plants adapt to avoid this risk by relying more on other cues (e.g., high chill requirements, photoperiod) for spring phenology, thus suppressing phenological responses to spring warming. This study shows that local spring temperature variance is an understudied factor in the study of phenological sensitivity and highlights the necessity of incorporating it to improve the predictability of plant responses to anthropogenic climate change in future studies. © 2013 John Wiley & Sons Ltd.

  14. Fractal fluctuations and quantum-like chaos in the brain by analysis of variability of brain waves: A new method based on a fractal variance function and random matrix theory: A link with El Naschie fractal Cantorian space-time and V. Weiss and H. Weiss golden ratio in brain

    International Nuclear Information System (INIS)

    Conte, Elio; Khrennikov, Andrei; Federici, Antonio; Zbilut, Joseph P.

    2009-01-01

    We develop a new method for analysis of fundamental brain waves as recorded by the EEG. To this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given. We also discuss the link of such formulation with H. Weiss and V. Weiss golden ratio found in the brain, and with El Naschie fractal Cantorian space-time theory.

  15. The impact of news and the SMP on realized (co)variances in the Eurozone sovereign debt market

    NARCIS (Netherlands)

    Beetsma, R.; de Jong, F.; Giuliodori, M.; Widijanto, D.

    2014-01-01

    We use realized variances and covariances based on intraday data from the Eurozone sovereign bond market to measure the dependence structure of Eurozone sovereign yields. Our analysis focuses on the impact of news, obtained from the Eurointelligence newsflash, on the dependence structure. More news raises...

  16. Reporting explained variance

    Science.gov (United States)

    Good, Ron; Fletcher, Harold J.

    The importance of reporting explained variance (sometimes referred to as magnitude of effects) in ANOVA designs is discussed in this paper. Explained variance is an estimate of the strength of the relationship between treatment (or other factors such as sex, grade level, etc.) and dependent variables of interest to the researcher(s). Three methods that can be used to obtain estimates of explained variance in ANOVA designs are described and applied to 16 studies that were reported in recent volumes of this journal. The results show that, while in most studies the treatment accounts for a relatively small proportion of the variance in dependent variable scores, in some studies the magnitude of the treatment effect is respectable. The authors recommend that researchers in science education report explained variance in addition to the commonly reported tests of significance, since the latter are inadequate as the sole basis for making decisions about the practical importance of factors of interest to science education researchers.
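    Two of the standard estimates of explained variance can be computed directly from one-way ANOVA sums of squares: eta-squared, and the less upwardly biased omega-squared. A sketch for a one-way fixed-effects design (the function name and numpy implementation are ours):

```python
import numpy as np

def explained_variance_oneway(groups):
    """Eta-squared and omega-squared for a one-way (fixed-effects) ANOVA.
    `groups` is a list of 1-D arrays, one per treatment level."""
    all_y = np.concatenate(groups)
    grand = all_y.mean()
    n_total = all_y.size
    k = len(groups)
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ss_total = ss_between + ss_within
    ms_within = ss_within / (n_total - k)
    eta_sq = ss_between / ss_total                                   # SSb / SSt
    omega_sq = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)
    return eta_sq, omega_sq
```

    For two groups [1, 2, 3] and [7, 8, 9], eta-squared is 54/58 (about 0.93) while omega-squared is 53/59 (about 0.90), illustrating the small downward correction omega-squared applies.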

  17. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model to the original series. It is found by simulation that the positive size distortion present in these tests is a function of the kurtosis of the GARCH process. Adjusting the size by numerical methods is considered. The possibility of testing the constancy of the unconditional variance before fitting a GARCH model to the data is discussed. The power of the ensuing test is vastly superior to that of the misspecification test and the size distortion minimal. The test has reasonable power already in very short time series. It would thus serve as a test of constant variance in conditional mean...

  18. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    For zero-mean Gaussian distributed processes, the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  19. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    For zero-mean Gaussian distributed processes, the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  20. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Directory of Open Access Journals (Sweden)

    Zdravko Kačič

    2009-01-01

    Full Text Available This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE. The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.

  1. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Science.gov (United States)

    Kos, Marko; Grašič, Matej; Kačič, Zdravko

    2009-12-01

    This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE). The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.
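    A simplified sketch of the VMFBE computation (numpy; linear, equal-width FFT sub-bands stand in here for the mel filter bank that the full feature shares with the MFCC pipeline, and the 50 ms frame length is an illustrative choice):

```python
import numpy as np

def vmfbe(signal, fs, n_bands=20, window_frames=50):
    """Simplified variance-mean-of-filter-bank-energy feature (a sketch of
    the idea only). Returns the mean over sub-bands of the variance of each
    band's log-energy across a window of consecutive frames."""
    frame_len = int(0.05 * fs)                       # 50 ms frames (assumed)
    n_frames = signal.size // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1)) ** 2
    # group FFT bins into equal-width sub-bands and take each band's log-energy
    bins_per_band = spectra.shape[1] // n_bands
    energies = spectra[:, : n_bands * bins_per_band]
    energies = energies.reshape(n_frames, n_bands, bins_per_band).sum(axis=2)
    log_e = np.log(energies + 1e-12)
    # variance of each band over the analysis window, then the mean over bands
    w = min(window_frames, n_frames)
    return log_e[:w].var(axis=0).mean()
```

    Amplitude-modulated signals such as speech give a larger per-band energy variance than steady tones, which is exactly the discrimination principle described in the abstract.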

  2. Concerns about a variance approach to X-ray diffractometric estimation of microfibril angle in wood

    Science.gov (United States)

    Steve P. Verrill; David E. Kretschmann; Victoria L. Herian; Michael C. Wiemann; Harry A. Alden

    2011-01-01

    In this article, we raise three technical concerns about Evans’ 1999 Appita Journal “variance approach” to estimating microfibril angle (MFA). The first concern is associated with the approximation of the variance of an X-ray intensity half-profile by a function of the MFA and the natural variability of the MFA. The second concern is associated with the approximation...

  3. On the expected value and variance for an estimator of the spatio-temporal product density function

    DEFF Research Database (Denmark)

    Rodríguez-Corté, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge

    Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus these methods provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed-form expressions derived under the Poisson case. A detailed simulation study is presented to compare our closed-form expression for the variance with estimated ones for Poisson cases. The simulation experiments show that the theoretical form for the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...

  4. Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.

    Science.gov (United States)

    He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne

    2016-04-01

    Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can, however, be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more...

  5. The prevalence, prevention and multilevel variance of pressure ulcers in Norwegian hospitals: a cross-sectional study.

    Science.gov (United States)

    Bredesen, Ida Marie; Bjøro, Karen; Gunningberg, Lena; Hofoss, Dag

    2015-01-01

    Pressure ulcers are preventable adverse events. Organizational differences may influence the quality of prevention across wards and hospitals. To investigate the prevalence of pressure ulcers, patient-related risk factors, the use of preventive measures and how much of the pressure ulcer variance is at patient, ward and hospital level. A cross-sectional study. Six of the 11 invited hospitals in South-Eastern Norway agreed to participate. Inpatients ≥18 years at 88 somatic hospital wards (N=1209). Patients in paediatric and maternity wards and day surgery patients were excluded. The methodology for pressure ulcer prevalence studies developed by the European Pressure Ulcer Advisory Panel was used, including demographic data, the Braden scale, skin assessment, the location and severity of pressure ulcers and preventive measures. Multilevel analysis was used to investigate variance across hierarchical levels. The prevalence was 18.2% for pressure ulcer category I-IV, 7.2% when category I was excluded. Among patients at risk of pressure ulcers, 44.3% had pressure redistributing support surfaces in bed and only 22.3% received planned repositioning in bed. Multilevel analysis showed that although the dominant part of the variance in the occurrence of pressure ulcers was at patient level there was also a significant amount of variance at ward level. There was, however, no significant variance at hospital level. Pressure ulcer prevalence in this Norwegian sample is similar to comparable European studies. At-risk patients were less likely to receive preventive measures than patients in earlier studies. There was significant variance in the occurrence of pressure ulcers at ward level but not at hospital level, indicating that although interventions for improvement are basically patient related, improvement of procedures and organization at ward level may also be important. Copyright © 2014 Elsevier Ltd. All rights reserved.
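    How much of the outcome variance sits at each level is usually summarized by variance partition coefficients. A minimal sketch for a three-level logistic model; the ward- and hospital-level variance values below are hypothetical, not the study's estimates (the study found significant ward-level but no significant hospital-level variance):

```python
import math

# Hypothetical variance estimates on the logit scale for a three-level
# (patient / ward / hospital) logistic model -- illustrative values only.
var_ward = 0.35
var_hospital = 0.02
var_patient = math.pi ** 2 / 3          # level-1 residual on the latent logit scale

total = var_patient + var_ward + var_hospital
vpc_ward = var_ward / total             # share of variance at ward level
vpc_hospital = var_hospital / total     # share of variance at hospital level
```

    The pi^2/3 term is the conventional level-1 residual variance for a logistic multilevel model on the latent scale; with these illustrative numbers, most of the variance remains at patient level, mirroring the study's finding.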

  6. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas

    2011-01-01

    of these results are qualitative in the sense that the parameters of the limit distribution are expressed in terms of some limiting point process. In this paper we will be able to determine the parameters of the limiting stable distribution in terms of some tail characteristics of the underlying stationary...

  7. Specific Variance of the WPPSI Subtests at Six Age Levels.

    Science.gov (United States)

    Carlson, Les; Reynolds, Cecil R.

    Factor analyses of the Wechsler Preschool and Primary Scale of Intelligence (WPPSI) were conducted. The random sample included 100 boys and 100 girls, beginning at age four with increments of 6 months up to age 6 1/2. The intercorrelation matrix of the 11 WPPSI subtests at each of the age levels was factor analyzed, and the percent of common,…

  8. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

    Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRTs can be used for increasing the relative...
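    The second VRT described above (following only a fraction of the photons while scaling the survivors' weights) can be illustrated on a toy attenuation problem; the slab geometry, optical depth, and survival fraction below are invented for illustration and are unrelated to the paper's detector simulations:

```python
import numpy as np

rng = np.random.default_rng(42)

TAU = 3.0        # slab optical depth; analytic transmission = exp(-3)
N = 100_000      # source photons

# Analog: score 1 for each photon whose free path exceeds the slab.
paths = rng.exponential(1.0, N)
analog = np.mean(paths > TAU)

# VRT sketch: follow only a fraction f of the photons, multiplying the
# weight of each survivor by 1/f so the expected score (the signal mean)
# is unchanged -- E[keep * (1/f) * score] = f * (1/f) * E[score].
f = 0.1
keep = rng.random(N) < f
vrt = np.sum((paths[keep] > TAU) * (1.0 / f)) / N

exact = np.exp(-TAU)   # reference value for both estimators
```

    Because each survivor carries weight 1/f, the estimator stays unbiased; this particular trick buys tracking time at the cost of a higher per-history variance, whereas the directional-biasing VRT reduces variance directly.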

  9. Gender Variance and the Performance of Small and Medium Scale ...

    African Journals Online (AJOL)

    Small and medium scale industries usually tend to develop and grow into medium and large scale industries. This form of growth contributes to the development of the economy. However, the success of SMEs depends on the effort of the entrepreneurs through a concerted effort of financing and effective managerial skills.

  10. Variance, Violence, and Democracy: A Basic Microeconomic Model of Terrorism

    Directory of Open Access Journals (Sweden)

    John A. Sautter

    2010-01-01

    Full Text Available Much of the debate surrounding contemporary studies of terrorism focuses upon transnational terrorism. However, historical and contemporary evidence suggests that domestic terrorism is a more prevalent and pressing concern. A formal microeconomic model of terrorism is utilized here to understand acts of political violence in a domestic context within the domain of democratic governance. This article builds a very basic microeconomic model of terrorist decision making to hypothesize how a democratic government might influence the sorts of strategies that terrorists use. Mathematical models have been used to explain terrorist behavior in the past. However, the bulk of inquiries in this area have only focused on the relationship between terrorists and the government, or amongst terrorists themselves. Central to the interpretation of the terrorist conflict presented here is the idea that voters (or citizens) are also one of the important determinants of how a government will respond to acts of terrorism.

  11. Estimates of variance components for postweaning feed intake and ...

    African Journals Online (AJOL)

    Feed efficiency is of major economic importance in beef production. The objective of this work was to evaluate alternative measures of feed efficiency for use in genetic evaluation. To meet this objective, genetic parameters were estimated for the components of efficiency. These parameters were then used in multiple-trait ...

  12. Estimates of variance components for postweaning feed intake and ...

    African Journals Online (AJOL)

    Mike

    2013-03-09

    Mar 9, 2013 ... Bonsmara bulls and evaluation of alternative measures of feed efficiency. M.D. MacNeil. 1,2,3 ... trait animal model genetic evaluations and alternative genetic predictors of feed efficiency were derived from ... data were collected from the centralised bull testing stations under the supervision of South Africa's.

  13. Variance misperception explains illusions of confidence in simple perceptual decisions

    NARCIS (Netherlands)

    Zylberberg, A.; Roelfsema, Pieter R; Sigman, Mariano

    Confidence in a perceptual decision is a judgment about the quality of the sensory evidence. The quality of the evidence depends not only on its strength ('signal') but critically on its reliability ('noise'), but the separate contribution of these quantities to the formation of confidence judgments

  14. Variance misperception explains illusions of confidence in simple perceptual decisions

    NARCIS (Netherlands)

    Zylberberg, Ariel; Roelfsema, Pieter R.; Sigman, Mariano

    2014-01-01

    Confidence in a perceptual decision is a judgment about the quality of the sensory evidence. The quality of the evidence depends not only on its strength ('signal') but critically on its reliability ('noise'), but the separate contribution of these quantities to the formation of confidence judgments

  15. Evaluating scaled windowed variance methods for estimating the Hurst coefficient of time series

    OpenAIRE

    Cannon, Michael J.; Percival, Donald B.; Caccia, David C.; Raymond, Gary M.; Bassingthwaighte, James B.

    1997-01-01

    Three scaled windowed variance methods (standard, linear regression detrended, and bridge detrended) for estimating the Hurst coefficient (H) are evaluated. The Hurst coefficient, with 0 < H < 1, characterizes self-similar decay in the time-series autocorrelation function. The scaled windowed variance methods estimate H for fractional Brownian motion (fBm) signals, which are cumulative sums of fractional Gaussian noise (fGn) signals. For all three methods both the bias and standard deviation of...
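    The standard (undetrended) variant of the method can be sketched in a few lines; the window sizes and series length below are illustrative choices:

```python
import numpy as np

def hurst_swv(fbm, window_sizes):
    """Standard (undetrended) scaled windowed variance estimate of H.
    For each window size n, split the fBm series into non-overlapping
    windows, take the standard deviation inside each window, and average;
    the slope of log(mean SD) against log(n) estimates H."""
    log_n, log_sd = [], []
    for n in window_sizes:
        k = len(fbm) // n
        sds = fbm[: k * n].reshape(k, n).std(axis=1, ddof=1)
        log_n.append(np.log(n))
        log_sd.append(np.log(sds.mean()))
    slope, _ = np.polyfit(log_n, log_sd, 1)
    return slope

rng = np.random.default_rng(1)
fgn = rng.standard_normal(8192)   # white noise is fGn with H = 0.5
fbm = np.cumsum(fgn)              # its cumulative sum is ordinary Brownian motion
h_hat = hurst_swv(fbm, [8, 16, 32, 64, 128, 256])
```

    For ordinary Brownian motion the estimate should land near H = 0.5; the detrended variants evaluated in the paper replace the raw window SD with the SD of residuals about a line or a bridge.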

  16. The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles

    Science.gov (United States)

    Roberts, D. A.; Goldstein, M. L.

    2006-01-01

    We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum to minimum power (approximately equal to 3:1 up to approximately equal to 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.

  17. Partitioning of genomic variance using prior biological information

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per

    of single nucleotide polymorphism (SNP) data and trait phenotypes and can account for a much larger fraction of the heritable component of the trait. A disadvantage is that this “black box” modelling approach does not provide any insight into the biological mechanisms underlying the trait. We propose...

  18. Prevalence of intellectual disabilities in Norway: Domestic variance.

    Science.gov (United States)

    Søndenaa, E; Rasmussen, K; Nøttestad, J A; Lauvrud, C

    2010-02-01

    Based on national registers, the prevalence of intellectual disability (ID) in Norway is estimated to be 0.44 per 100 inhabitants. This study aimed to examine geographic and urban-rural differences in the prevalence of ID in Norway. Methods: A survey based on the national register. Financial transfers intended to provide equal services to people with ID are based on these reports. Results: A higher prevalence was found in the North region of Norway. A negative correlation between the population density and the prevalence of ID was also found. Conclusion: There were considerable geographic and urban-rural differences in the prevalence of ID, which may be attributable not only to the large diversity of services, but also to some other factors. The results were discussed with respect to the deinstitutionalisation progress, resource-intensive services and costs. Differences also reflect some problems in diagnosing ID in people having mild ID.

  19. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, K.; Sørensen, T.I.A.

    2007-01-01

    Diffusion weighted imaging (DWI) and tractography allow the non-invasive study of anatomical brain connectivity. However, a gold standard for validating tractography of complex connections is lacking. Using the porcine brain as a highly gyrated brain model, we quantitatively and qualitatively ass...

  20. Diagnosis of Bearing System using Minimum Variance Cepstrum

    International Nuclear Information System (INIS)

    Lee, Jeong Han; Choi, Young Chul; Park, Jin Ho; Lee, Won Hyung; Kim, Chan Joong

    2005-01-01

Various bearings are commonly used in rotating machines. The noise and vibration signals obtained from these machines often convey information about faults and their locations. Condition monitoring of bearings has received considerable attention for many years, because the majority of problems in rotating machines are caused by faulty bearings. Failure alarms for bearing systems are therefore often based on detecting the onset of localized faults. Many methods are available for detecting faults in bearing systems, and the majority of them assume that bearing faults produce impulses. McFadden and Smith used a bandpass filter to filter the noise signal and then obtained the envelope using an envelope detector. D. Ho and R. B. Randall also applied the envelope spectrum to detect faults in bearing systems, but it is very difficult to find the resonant frequency in noisy environments. S.-K. Lee and P. R. White used improved ANC (adaptive noise cancellation) to find faults; the basic idea of this technique is to remove noise from the measured vibration signal, but they were not able to show the theoretical foundation of the proposed algorithms. Y.-H. Kim et al. used a moving window. This algorithm is quite powerful for the early detection of faults in a ball bearing system, but it is difficult to choose the initial time and step size of the moving window. The early fault signal caused by microscopic cracks is commonly embedded in noise; therefore, success in detecting the fault signal is completely determined by a method's ability to distinguish signal from noise. In 1969, Capon introduced maximum likelihood (ML) spectra, which estimate a mixed spectrum consisting of a line spectrum, corresponding to a deterministic random process, plus an arbitrary unknown continuous spectrum. The unique feature of these spectra is that they can detect a sinusoidal signal in noise. Our idea

  1. 75 FR 22424 - Avalotis Corp.; Grant of a Permanent Variance

    Science.gov (United States)

    2010-04-28

    .... The employer must ensure that: (i) All sheaves revolve on shafts that rotate on bearings; and (ii) The... of damage or defects at all times. (b) Guide rope fastening and alignment tension. The employer must...

  2. Mean-Variance Efficiency of the Market Portfolio

    Directory of Open Access Journals (Sweden)

    Rafael Falcão Noda

    2014-06-01

The objective of this study is to answer the criticism of the CAPM based on findings that the market portfolio is far from the efficient frontier. We run a numeric optimization model based on Brazilian stock market data from 2003 to 2012. For each asset, we obtain adjusted returns and standard deviations such that (i) the efficient frontier intersects with the market portfolio and (ii) the distance between the adjusted parameters and the sample parameters is minimized. We conclude that the adjusted parameters are not significantly different from the sample parameters, in line with the results of Levy and Roll (2010) for the US stock market. Such results suggest that the imprecisions in the implementation of the CAPM stem mostly from parameter estimation errors and that other explanatory factors for returns may have low relevance. Therefore, our results contradict the above-mentioned criticisms of the CAPM in Brazil.
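For readers unfamiliar with the frontier construction underlying this critique, the sketch below traces a minimal mean-variance frontier for two assets. The expected returns and covariance values are assumed for illustration only; they are not the study's Brazilian market data.

```python
import numpy as np

# Illustrative two-asset mean-variance frontier (assumed parameters, not the
# study's data): sweep portfolio weights and record mean and std. deviation.
mu = np.array([0.12, 0.08])                      # assumed annual expected returns
cov = np.array([[0.04, 0.006],
                [0.006, 0.02]])                  # assumed covariance matrix

weights = [np.array([x, 1 - x]) for x in np.linspace(0, 1, 11)]
frontier = [(w @ mu, np.sqrt(w @ cov @ w)) for w in weights]

# The minimum-variance portfolio is the weight vector with the smallest
# portfolio variance w' C w.
w_min = min(weights, key=lambda w: w @ cov @ w)
print(w_min)  # → [0.3 0.7]
```

With these assumed parameters, the closed-form minimum-variance weight is (σ₂² − σ₁₂)/(σ₁² + σ₂² − 2σ₁₂) ≈ 0.29, so the coarse grid picks the nearest point, 0.3.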

  3. Longitudinal variance of visceral fat thickness in pregnant adolescents.

    Science.gov (United States)

    Dutra, Luciana P; Cisneiros, Rosangela M; Souza, Alex S; Diniz, Carolina P; Moura, Laís A; Figueiroa, Jose N; Alves, João G B

    2014-02-01

This study aims to investigate the longitudinal change in visceral fat thickness (VFT) during normal pregnancy. A prospective cohort study with 75 primiparous adolescents was carried out in Petrolina, Brazil. VFT was evaluated by ultrasound between 12 and 20 weeks of gestation and immediately after delivery. We noted a statistically significant increase in VFT of 1.3 ± 1.0 cm. No correlation was found between VFT and maternal anthropometric variables. VFT increases by about 30% from the first to the second half of pregnancy in primiparous adolescents. © 2014 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.

  4. Origin and consequences of the relationship between protein mean and variance.

    Directory of Open Access Journals (Sweden)

    Francesco Luigi Massimo Vallania

Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law-like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law-like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.

  5. Origin and consequences of the relationship between protein mean and variance.

    Science.gov (United States)

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law-like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law-like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.
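The power-law relation between protein mean and variance described in this abstract can be illustrated numerically. The sketch below is a hypothetical illustration, not the authors' stochastic model: it imposes σ² = a·μ^b on synthetic gene-level data and recovers the exponent b as the slope of a log-log least-squares fit.

```python
import numpy as np

# Hypothetical illustration of the noise-mean power law: construct mean
# expression levels for 50 "genes", impose sigma^2 = a * mu^b with b = 1.69,
# and recover b by ordinary least squares in log-log coordinates.
a, b = 0.5, 1.69
means = np.logspace(1, 4, 50)        # mean expression levels
variances = a * means ** b           # power-law variance for each gene

# log(var) = log(a) + b * log(mu), so the fitted slope estimates the exponent.
slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
print(round(slope, 2))  # → 1.69
```

In real single-cell data the fit is done on noisy measured variances rather than exact power-law values, but the log-log regression is the same.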

  6. Variances in consumers prices of selected food items among ...

    African Journals Online (AJOL)

    Admin

For an individual commodity, changes in non-price-related factors cause instability in the price of the product. Product price instability among agricultural commodities is a regular phenomenon in markets across Nigeria (Akpan, 2007). Instability in commodity prices among markets could be detrimental to the marketing.

  7. Variances in consumers prices of selected food items among ...

    African Journals Online (AJOL)

    ... had insignificant differences in their consumer prices while beans consumer prices had significant differences between Okurikang market and the other two markets. The results imply perfect information flow in garri and rice markets and hence high possibility of a perfectly competitive market structure for these products.

  8. Components of the metabolic syndrome: clustering and genetic variance

    NARCIS (Netherlands)

    Povel, C.M.

    2012-01-01

    Background Abdominal obesity, hyperglycemia, hypertriglyceridemia, low HDL cholesterol levels and hypertension frequently co-occur within individuals. The cluster of these features is referred to as the metabolic syndrome (MetS). The aim

  9. Extraction of slum areas from VHR imagery using GLCM variance

    NARCIS (Netherlands)

    Kuffer, M.; Pfeffer, K.; Sliuzas, R.; Baud, I.S.A.

    2016-01-01

    Many cities in the global South are facing the emergence and growth of highly dynamic slum areas, but often lack detailed information on these developments. Available statistical data are commonly aggregated to large, heterogeneous administrative units that are geographically meaningless for

  10. Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method

    Directory of Open Access Journals (Sweden)

    Younes Elahi

    2014-01-01

We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have examined the optimal MVC portfolio model, the linear weighted sum method (LWSM) had not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, this approach is investigated for investing in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using three objective functions helps investors to manage their portfolios better and thereby minimize the risk and maximize the return of the portfolio. The main goal of this study is to modify the current models and simplify them by using LWSM to obtain better results.
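The scalarization idea behind the linear weighted sum method can be sketched in a few lines. The code below is an illustrative toy, not the paper's MATLAB implementation: it combines the three MVC objectives (maximize mean return, minimize variance, minimize CVaR) into one weighted sum and grid-searches two-asset weights on simulated returns; the weight values and return parameters are assumptions.

```python
import numpy as np

# Simulated daily returns for two hypothetical assets (illustrative values).
rng = np.random.default_rng(42)
returns = rng.normal([0.001, 0.0005], [0.02, 0.01], size=(1000, 2))

def cvar(losses, alpha=0.95):
    """Empirical CVaR: average loss in the worst (1 - alpha) tail."""
    tail = np.sort(losses)[int(alpha * len(losses)):]
    return tail.mean()

def weighted_objective(w, lambdas=(1.0, 1.0, 1.0)):
    """Linear weighted sum of the three objectives, all cast as minimizations:
    -mean return + variance + CVaR of losses."""
    port = returns @ w
    l1, l2, l3 = lambdas
    return -l1 * port.mean() + l2 * port.var() + l3 * cvar(-port)

# Coarse grid search over fully invested portfolios w = (x, 1 - x).
grid = [np.array([x, 1 - x]) for x in np.linspace(0, 1, 101)]
best = min(grid, key=weighted_objective)
print(best)
```

Varying the weight vector `lambdas` traces out different compromise portfolios on the multiobjective Pareto front, which is the role LWSM plays in the paper's formulation.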

  11. A Surface-Layer Study of the Transport and Dissipation of Turbulent Kinetic Energy and the Variances of Temperature, Humidity and CO₂

    Science.gov (United States)

    Hackerott, João A.; Bakhoday Paskyabi, Mostafa; Reuder, Joachim; de Oliveira, Amauri P.; Kral, Stephan T.; Marques Filho, Edson P.; Mesquita, Michel dos Santos; de Camargo, Ricardo

    2017-11-01

We discuss scalar similarities and dissimilarities based on analysis of the dissipation terms in the variance budget equations, considering the turbulent kinetic energy and the variances of temperature, specific humidity and specific CO₂ content. For this purpose, 124 high-frequency sampled segments are selected from the Boundary Layer Late Afternoon and Sunset Turbulence experiment. The consequences of dissipation similarity in the variance transport are also discussed and quantified. The results show that, for the convective atmospheric surface layer, the non-dimensional dissipation terms can be expressed in the framework of Monin-Obukhov similarity theory and are independent of whether the variable is temperature or moisture. The scalar similarity in the dissipation term implies that the characteristic scales of the atmospheric surface layer can be estimated from the respective rate of variance dissipation, the characteristic scale of temperature, and the dissipation rate of temperature variance.

  12. Investigations of oligonucleotide usage variance within and between prokaryotes

    DEFF Research Database (Denmark)

    Bohlin, J.; Skjerve, E.; Ussery, David

    2008-01-01

    Oligonucleotide usage in archaeal and bacterial genomes can be linked to a number of properties, including codon usage (trinucleotides), DNA base-stacking energy (dinucleotides), and DNA structural conformation (di-to tetranucleotides). We wanted to assess the statistical information potential...... was that prokaryotic chromosomes can be described by hexanucleotide frequencies, suggesting that prokaryotic DNA is predominantly short range correlated, i. e., information in prokaryotic genomes is encoded in short oligonucleotides. Oligonucleotide usage varied more within AT-rich and host-associated genomes than...... in GC-rich and free-living genomes, and this variation was mainly located in non-coding regions. Bias (selectional pressure) in tetranucleotide usage correlated with GC content, and coding regions were more biased than non-coding regions. Non-coding regions were also found to be approximately 5.5% more...

  13. Detection of rheumatoid arthritis by evaluation of normalized variances of fluorescence time correlation functions

    Science.gov (United States)

    Dziekan, Thomas; Weissbach, Carmen; Voigt, Jan; Ebert, Bernd; MacDonald, Rainer; Bahner, Malte L.; Mahler, Marianne; Schirner, Michael; Berliner, Michael; Berliner, Birgitt; Osel, Jens; Osel, Ilka

    2011-07-01

    Fluorescence imaging using the dye indocyanine green as a contrast agent was investigated in a prospective clinical study for the detection of rheumatoid arthritis. Normalized variances of correlated time series of fluorescence intensities describing the bolus kinetics of the contrast agent in certain regions of interest were analyzed to differentiate healthy from inflamed finger joints. These values are determined using a robust, parameter-free algorithm. We found that the normalized variance of correlation functions improves the differentiation between healthy joints of volunteers and joints with rheumatoid arthritis of patients by about 10% compared to, e.g., ratios of areas under the curves of raw data.

  14. Genetic variance in processing speed drives variation in aging of spatial and memory abilities.

    Science.gov (United States)

    Finkel, Deborah; Reynolds, Chandra A; McArdle, John J; Hamagami, Fumiaki; Pedersen, Nancy L

    2009-05-01

    Previous analyses have identified a genetic contribution to the correlation between declines with age in processing speed and higher cognitive abilities. The goal of the current analysis was to apply the biometric dual change score model to consider the possibility of temporal dynamics underlying the genetic covariance between aging trajectories for processing speed and cognitive abilities. Longitudinal twin data from the Swedish Adoption/Twin Study of Aging, including up to 5 measurement occasions covering a 16-year period, were available from 806 participants ranging in age from 50 to 88 years at the 1st measurement wave. Factors were generated to tap 4 cognitive domains: verbal ability, spatial ability, memory, and processing speed. Model-fitting indicated that genetic variance for processing speed was a leading indicator of variation in age changes for spatial and memory ability, providing additional support for processing speed theories of cognitive aging. Copyright 2009 APA, all rights reserved

  15. The interpersonal problems of the socially avoidant: self and peer shared variance.

    Science.gov (United States)

    Rodebaugh, Thomas L; Gianoli, Mayumi Okada; Turkheimer, Eric; Oltmanns, Thomas F

    2010-05-01

    We demonstrate a means of conservatively combining self and peer data regarding personality pathology and interpersonal behavior through structural equation modeling, focusing on avoidant personality disorder traits as well as those of two comparison personality disorders (dependent and narcissistic). Assessment of the relationship between personality disorder traits and interpersonal problems based on either self or peer data alone would result in counterintuitive findings regarding avoidant personality disorder. In contrast, analysis of the variance shared between self and peer leads to results that are more in keeping with hypothetical relationships between avoidant traits and interpersonal problems. Similar results were found for both dependent personality disorder traits and narcissistic personality disorder traits, exceeding our expectations for this method.

  16. Algebraic aspects of evolution partial differential equation arising in the study of constant elasticity of variance model from financial mathematics

    Science.gov (United States)

    Motsepa, Tanki; Aziz, Taha; Fatima, Aeeman; Khalique, Chaudry Masood

    2018-03-01

    The optimal investment-consumption problem under the constant elasticity of variance (CEV) model is investigated from the perspective of Lie group analysis. The Lie symmetry group of the evolution partial differential equation describing the CEV model is derived. The Lie point symmetries are then used to obtain an exact solution of the governing model satisfying a standard terminal condition. Finally, we construct conservation laws of the underlying equation using the general theorem on conservation laws.

  17. A Paradox of Genetic Variance in Epigamic Traits: Beyond "Good Genes" View of Sexual Selection.

    Science.gov (United States)

    Radwan, Jacek; Engqvist, Leif; Reinhold, Klaus

    Maintenance of genetic variance in secondary sexual traits, including bizarre ornaments and elaborated courtship displays, is a central problem of sexual selection theory. Despite theoretical arguments predicting that strong sexual selection leads to a depletion of additive genetic variance, traits associated with mating success show relatively high heritability. Here we argue that because of trade-offs associated with the production of costly epigamic traits, sexual selection is likely to lead to an increase, rather than a depletion, of genetic variance in those traits. Such trade-offs can also be expected to contribute to the maintenance of genetic variation in ecologically relevant traits with important implications for evolutionary processes, e.g. adaptation to novel environments or ecological speciation. However, if trade-offs are an important source of genetic variation in sexual traits, the magnitude of genetic variation may have little relevance for the possible genetic benefits of mate choice.

  18. A general method for describing sources of variance in clinical trials, especially operator variance, in order to improve transfer of research knowledge to practice.

    Science.gov (United States)

    Chambers, David W; Leknius, Casimir; Reid, Laura

    2009-04-01

    The purpose of this study was to demonstrate how the skill level of the operator and the clinical challenge provided by the patient affect the outcomes of clinical research in ways that may have hidden influences on the applicability of that research to practice. Rigorous research designs that control or eliminate operator or patient factors as sources of variance achieve improved statistical significance for study hypotheses. These procedures, however, mask sources of variance that influence the applicability of the conclusions. There are summary data that can be added to reports of clinical trials to permit potential users of the findings to identify the most important sources of variation and to predict the likely outcomes of adopting products and procedures reported in the literature. Provisional crowns were constructed in a laboratory setting in a fully crossed, random-factor model with two levels of material (Treatment), two skill levels of students (Operator), and restorations of two levels of difficulty (Patient). The levels of the Treatment, Operator, and Patient factors used in the study were chosen to ensure that the findings from the study could be transferred to practice settings in a predictable fashion. The provisional crowns were scored independently by two raters using the criteria for technique courses in the school where the research was conducted. The Operator variable accounted for 38% of the variance, followed by Treatment-by-Operator interaction (17%), Treatment (17%), and other factors and their combinations in smaller amounts. Regression equations were calculated for each Treatment material that can be used to predict outcomes in various potential transfer applications. It was found that classical analyses for differences between materials (the Treatment variable) would yield inconsistent results under various sampling systems within the parameters of the study. Operator and Treatment-by-Operator interactions appear to be significant and

  19. The effect of sex on the mean and variance of fitness in facultatively sexual rotifers.

    Science.gov (United States)

    Becks, L; Agrawal, A F

    2011-03-01

    The evolution of sex is a classic problem in evolutionary biology. While this topic has been the focus of much theoretical work, there is a serious dearth of empirical data. A simple yet fundamental question is how sex affects the mean and variance in fitness. Despite its importance to the theory, this type of data is available for only a handful of taxa. Here, we report two experiments in which we measure the effect of sex on the mean and variance in fitness in the monogonont rotifer, Brachionus calyciflorus. Compared to asexually derived offspring, we find that sexual offspring have lower mean fitness and less genetic variance in fitness. These results indicate that, at least in the laboratory, there are both short- and long-term disadvantages associated with sexual reproduction. We briefly review the other available data and highlight the need for future work. © 2010 The Authors. Journal of Evolutionary Biology © 2010 European Society For Evolutionary Biology.

  20. OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE.

    Science.gov (United States)

    Xie, Xianchao; Kou, S C; Brown, Lawrence

    2016-03-01

    This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results.
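A concrete entry point to this simultaneous-inference problem is classical James-Stein-type shrinkage of several group means toward the grand mean. The sketch below is the simplest instance of the problem the paper studies, not its semi-parametric estimator; the dimensions and noise level are assumptions for illustration.

```python
import numpy as np

# Toy simultaneous-mean problem: 20 unknown means, one noisy observation
# each, with known noise variance sigma^2 = 4.
rng = np.random.default_rng(1)
true_means = rng.normal(0.0, 1.0, size=20)
obs = rng.normal(true_means, 2.0)

sigma2 = 4.0
p = len(obs)
grand = obs.mean()

# James-Stein-type shrinkage toward the grand mean (positive-part factor).
shrink = max(0.0, 1 - (p - 3) * sigma2 / np.sum((obs - grand) ** 2))
js = grand + shrink * (obs - grand)

mse_raw = np.mean((obs - true_means) ** 2)   # error of the raw observations
mse_js = np.mean((js - true_means) ** 2)     # error after shrinkage
print(round(mse_js, 3), round(mse_raw, 3))   # shrinkage typically lowers MSE
```

When the noise variance is large relative to the spread of the true means, as here, the estimated shrinkage factor is well below 1 and the total squared error drops substantially, which is the kind of risk reduction the paper's optimal shrinkage estimators generalize beyond the Gaussian case.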

  1. Variance Distribution in Sibling Relationships: Advantages of Multilevel Modeling Using Full Sibling Groups.

    Science.gov (United States)

    Marciniak, Karyn

    2017-03-01

The majority of research on sibling relationships has investigated only one or two siblings in a family, but there are many theoretical and methodological limitations to this single dyadic perspective. This study uses multiple siblings (541 adults) in 184 families, where 96 of these families had all siblings complete the study, to demonstrate the value of including full sibling groups when conducting research on sibling relationships. Two scales, positivity and willingness to sacrifice, are evaluated with a multilevel model to account for the nested nature of family relationships. The distribution of variance across three levels (relationship, individual, and family) is computed, and results indicate that the relationship level explains the most variance in positivity, whereas the individual level explains the majority of variance in willingness to sacrifice. These distributions are affected by gender composition and family size. The results of this study highlight an important and often overlooked element of family research: the meaning of a scale changes based on its distribution of variance at these three levels. Researchers are encouraged to be cognizant of the variance distribution of their scales when studying sibling relationships and to incorporate more full sibling groups into their research methods and study design. © 2015 Family Process Institute.

  2. The genetic and environmental roots of variance in negativity toward foreign nationals.

    Science.gov (United States)

    Kandler, Christian; Lewis, Gary J; Feldhaus, Lea Henrike; Riemann, Rainer

    2015-03-01

    This study quantified genetic and environmental roots of variance in prejudice and discriminatory intent toward foreign nationals and examined potential mediators of these genetic influences: right-wing authoritarianism (RWA), social dominance orientation (SDO), and narrow-sense xenophobia (NSX). In line with the dual process motivational (DPM) model, we predicted that the two basic attitudinal and motivational orientations-RWA and SDO-would account for variance in out-group prejudice and discrimination. In line with other theories, we expected that NSX as an affective component would explain additional variance in out-group prejudice and discriminatory intent. Data from 1,397 individuals (incl. twins as well as their spouses) were analyzed. Univariate analyses of twins' and spouses' data yielded genetic (incl. contributions of assortative mating) and multiple environmental sources (i.e., social homogamy, spouse-specific, and individual-specific effects) of variance in negativity toward strangers. Multivariate analyses suggested an extension to the DPM model by including NSX in addition to RWA and SDO as predictor of prejudice and discrimination. RWA and NSX primarily mediated the genetic influences on the variance in prejudice and discriminatory intent toward foreign nationals. In sum, the findings provide the basis of a behavioral genetic framework integrating different scientific disciplines for the study of negativity toward out-groups.

  3. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  4. Comparison of Global Distributions of Zonal-Mean Gravity Wave Variance Inferred from Different Satellite Instruments

    Science.gov (United States)

    Preusse, Peter; Eckermann, Stephen D.; Offermann, Dirk; Jackman, Charles H. (Technical Monitor)

    2000-01-01

    Gravity wave temperature fluctuations acquired by the CRISTA instrument are compared to previous estimates of zonal-mean gravity wave temperature variance inferred from the LIMS, MLS and GPS/MET satellite instruments during northern winter. Careful attention is paid to the range of vertical wavelengths resolved by each instrument. Good agreement between CRISTA data and previously published results from LIMS, MLS and GPS/MET are found. Key latitudinal features in these variances are consistent with previous findings from ground-based measurements and some simple models. We conclude that all four satellite instruments provide reliable global data on zonal-mean gravity wave temperature fluctuations throughout the middle atmosphere.

  5. Thermal noise variance of a receive radiofrequency coil as a respiratory motion sensor.

    Science.gov (United States)

    Andreychenko, A; Raaijmakers, A J E; Sbrizzi, A; Crijns, S P M; Lagendijk, J J W; Luijten, P R; van den Berg, C A T

    2017-01-01

Development of a passive respiratory motion sensor based on the noise variance of the receive coil array. Respiratory motion alters the body resistance. The noise variance of an RF coil depends on the body resistance and, thus, is also modulated by respiration. For noise variance monitoring, noise samples were acquired without and with MR signal excitation on clinical 1.5/3 T MR scanners. The performance of the noise sensor was compared with the respiratory bellow and with the diaphragm displacement visible on MR images. Several breathing patterns were tested. The noise variance demonstrated a periodic temporal modulation that was synchronized with the respiratory bellow signal. The modulation depth of the noise variance resulting from respiration varied between the channels of the array and depended on each channel's location with respect to the body. The noise sensor combined with MR acquisition was able to detect the respiratory motion for every k-space read-out line. Within clinical MR systems, respiratory motion can be detected from the noise in the receive array. The noise sensor does not require careful positioning (unlike the bellow), any additional hardware, or additional MR acquisition. Magn Reson Med 77:221-228, 2017. © 2016 Wiley Periodicals, Inc.

  6. Inter- and intrarater reliability of ulna variance versus lunate subsidence measurements in Madelung deformity.

    Science.gov (United States)

    Farr, Sebastian; Bae, Donald S

    2015-01-01

    To assess inter- and intrarater reliability of both ulna variance and lunate subsidence measurement methods in a large consecutive series of children with Madelung deformity. Ulnar variance and lunate subsidence were measured on 41 standard anteroposterior wrist radiographs from 31 patients with Madelung deformity. The patients had a mean age of 13 years (range, 5-25) at the time of presentation. Two pediatric orthopedic hand/upper limb surgeons evaluated all radiographs twice in a 4-week interval using standard digital imaging software. Intraclass correlation coefficients (ICCs) were calculated for inter- and intrarater reliability, and results were reported using the Landis and Koch criteria. The interrater ICC for the ulna variance measurements was substantial, and for the lunate subsidence almost perfect. The intrarater ICC for ulna variance was substantial for both raters. In contrast, the intrarater ICC for lunate subsidence was almost perfect for both raters. Measurement of lunate subsidence showed both superior interrater and intrarater reliability compared with the ulnar variance method. Whenever relative ulna length is assessed in children and adolescents with Madelung deformity, the lunate subsidence should be the preferred method to characterize deformity. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  7. Phenotypic variance, plasticity and heritability estimates of critical thermal limits depend on methodological context

    DEFF Research Database (Denmark)

    Chown, Steven L.; Jumbam, Keafon R.; Sørensen, Jesper Givskov

    2009-01-01

…used during assessments of critical thermal limits to activity. To date, the focus of work has almost exclusively been on the effects of rate variation on mean values of the critical limits. 2. If the rate of temperature change used in an experimental trial affects not only the trait mean but also its variance, estimates of heritable variation would also be profoundly affected. Moreover, if the outcomes of acclimation are likewise affected by methodological approach, assessment of beneficial acclimation and other hypotheses might also be compromised. 3. In this article, we determined whether … of temperature change resulted in different phenotypic variances and different estimates of heritability, presuming that genetic variance remains constant. We also found that different rates resulted in different conclusions regarding the responses of the species to acclimation, especially in the case of L…

  8. Technical criteria for an Area-Of-Review variance methodology. Appendix B

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-01-01

This guidance was developed by the Underground Injection Practices Research Foundation to assist Underground Injection Control Directors in implementing proposed changes to EPA's Class 2 Injection Well Regulations that will apply the Area-Of-Review (AOR) requirement to previously exempt wells. EPA plans to propose amendments this year consistent with the recommendations in the March 23, 1992, Final Document developed by the Class 2 Injection Well Advisory Committee, that will require AORs to be performed on all Class 2 injection wells except those covered by previously conducted AORs and those located in areas that have been granted a variance. Variances may be granted if the Director determines that there is a sufficiently low risk of upward fluid movement from the injection zone that could endanger underground sources of drinking water. This guidance contains suggested technical criteria for identifying areas eligible for an AOR variance. The suggested criteria were developed in consultation with interested States and representatives from EPA, industry and the academic community. Directors will have six months from the promulgation of the new regulations to provide EPA with either a schedule for performing AORs within five years on all wells not covered by previously conducted AORs, or notice of their intent to establish a variance program. It is believed this document will provide valuable assistance to Directors who are considering whether to establish a variance program or have begun early preparations to develop such a program.

  9. Trait Variance and Response Style Variance in the Scales of the Personality Inventory for DSM-5 (PID-5).

    Science.gov (United States)

    Ashton, Michael C; de Vries, Reinout E; Lee, Kibeom

    2017-01-01

    Using self- and observer reports on the Personality Inventory for DSM-5 (PID-5) and the HEXACO Personality Inventory-Revised (HEXACO-PI-R), we identified for each inventory several trait dimensions (each defined by both self- and observer reports on the facet-level scales belonging to the same domain) and 2 source dimensions (each defined by self-reports or by observer reports, respectively, on all facet-level scales). Results (N = 217) showed that the source dimensions of the PID-5 were very large (much larger than those of the HEXACO-PI-R), and suggest that self-report (or observer report) response styles substantially inflate the intercorrelations and the alpha reliabilities of the PID-5 scales. We discuss the meaning and the implications of the large PID-5 source components, and we suggest some methods of controlling their influence.

  10. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment

    Directory of Open Access Journals (Sweden)

    Mahmoud Shakouri

    2016-03-01

Full Text Available The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems.

  11. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    Science.gov (United States)

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems.

  12. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic…

  13. Tip displacement variance of manipulator to simultaneous horizontal and vertical stochastic base excitations

    International Nuclear Information System (INIS)

    Rahi, A.; Bahrami, M.; Rastegar, J.

    2002-01-01

The tip displacement variance of an articulated robotic manipulator subjected to simultaneous horizontal and vertical stochastic base excitation is studied. The dynamic equations for an n-link manipulator subjected to both horizontal and vertical stochastic excitations are derived by the Lagrangian method and decoupled for small displacements of the joints. The dynamic response covariance of the manipulator links is computed in the coordinate frame attached to the base, and then the principal variance of the tip displacement is determined. Finally, a simulation for a two-link planar robotic manipulator under base excitation is developed, and the sensitivity of the principal variance of tip displacement and tip velocity to manipulator configuration, damping, excitation parameters and link lengths is investigated.

  14. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
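The variance advantage of importance sampling that such bounds quantify can be seen in a small simulation: estimating a Gaussian tail probability with a mean-shifted proposal. The shift choice and sample size below are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
threshold = 3.0               # tail event P(X > 3) for X ~ N(0, 1)
p_true = 1.3499e-3            # known value, for reference

# Direct Monte Carlo: indicator of the event
x = rng.normal(size=n)
direct = (x > threshold).astype(float)

# Importance sampling: draw from N(threshold, 1) and reweight by the
# likelihood ratio f(y)/g(y) of the N(0,1) target to the shifted proposal
y = rng.normal(loc=threshold, size=n)
w = np.exp(-0.5 * y**2 + 0.5 * (y - threshold) ** 2)
is_est = (y > threshold) * w

print(direct.mean(), is_est.mean())          # both estimate p_true
print(direct.var(), is_est.var())            # per-sample variances
```

The per-sample variance of the IS estimator is orders of magnitude below that of the direct indicator, which is exactly the quantity the upper/lower bounds in the paper bracket without requiring such a simulation.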

  15. The influence of mean climate trends and climate variance on beaver survival and recruitment dynamics.

    Science.gov (United States)

    Campbell, Ruairidh D; Nouvellet, Pierre; Newman, Chris; Macdonald, David W; Rosell, Frank

    2012-09-01

    Ecologists are increasingly aware of the importance of environmental variability in natural systems. Climate change is affecting both the mean and the variability in weather and, in particular, the effect of changes in variability is poorly understood. Organisms are subject to selection imposed by both the mean and the range of environmental variation experienced by their ancestors. Changes in the variability in a critical environmental factor may therefore have consequences for vital rates and population dynamics. Here, we examine ≥90-year trends in different components of climate (precipitation mean and coefficient of variation (CV); temperature mean, seasonal amplitude and residual variance) and consider the effects of these components on survival and recruitment in a population of Eurasian beavers (n = 242) over 13 recent years. Within climatic data, no trends in precipitation were detected, but trends in all components of temperature were observed, with mean and residual variance increasing and seasonal amplitude decreasing over time. A higher survival rate was linked (in order of influence based on Akaike weights) to lower precipitation CV (kits, juveniles and dominant adults), lower residual variance of temperature (dominant adults) and lower mean precipitation (kits and juveniles). No significant effects were found on the survival of nondominant adults, although the sample size for this category was low. Greater recruitment was linked (in order of influence) to higher seasonal amplitude of temperature, lower mean precipitation, lower residual variance in temperature and higher precipitation CV. Both climate means and variance, thus proved significant to population dynamics; although, overall, components describing variance were more influential than those describing mean values. That environmental variation proves significant to a generalist, wide-ranging species, at the slow end of the slow-fast continuum of life histories, has broad implications for

  16. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic with the actual sample design to the variance of that statistic with a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which had been set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tillé, 2000) is a well-established method to obtain variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
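The Taylor linearization idea can be sketched for the simplest nonlinear statistic, a ratio of two means under simple random sampling; the "Laeken" indicators need the more general influence-function machinery described in the paper. A toy check against simulation, with all data invented:

```python
import numpy as np

def ratio_variance_linearized(y, x):
    """Taylor-linearization variance estimate for R = mean(y) / mean(x)
    under simple random sampling (an illustrative special case)."""
    n = len(y)
    r = y.mean() / x.mean()
    z = (y - r * x) / x.mean()          # linearized variable
    return z.var(ddof=1) / n

rng = np.random.default_rng(0)
n, reps = 500, 2000

# Empirical variance of the ratio across many simulated samples
ratios = np.empty(reps)
for i in range(reps):
    x = rng.uniform(1, 3, n)
    y = 2 * x + rng.normal(0, 0.5, n)
    ratios[i] = y.mean() / x.mean()
empirical = ratios.var(ddof=1)

# Linearization estimate from a single sample
x = rng.uniform(1, 3, n)
y = 2 * x + rng.normal(0, 0.5, n)
linearized = ratio_variance_linearized(y, x)
print(empirical, linearized)
```

The single-sample linearized estimate agrees closely with the variance observed across replicates, which is the property that makes the approach attractive for survey estimation.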

  17. Assessment of texture stationarity using the asymptotic behavior of the empirical mean and variance.

    Science.gov (United States)

    Blanc, Rémy; Da Costa, Jean-Pierre; Stitou, Youssef; Baylou, Pierre; Germain, Christian

    2008-09-01

    Given textured images considered as realizations of 2-D stochastic processes, a framework is proposed to evaluate the stationarity of their mean and variance. Existing strategies focus on the asymptotic behavior of the empirical mean and variance (respectively EM and EV), known for some types of nondeterministic processes. In this paper, the theoretical asymptotic behaviors of the EM and EV are studied for large classes of second-order stationary ergodic processes, in the sense of the Wold decomposition scheme, including harmonic and evanescent processes. Minimal rates of convergence for the EM and the EV are derived for these processes; they are used as criteria for assessing the stationarity of textures. The experimental estimation of the rate of convergence is achieved using a nonparametric block sub-sampling method. Our framework is evaluated on synthetic processes with stationary or nonstationary mean and variance and on real textures. It is shown that anomalies in the asymptotic behavior of the empirical estimators allow detecting nonstationarities of the mean and variance of the processes in an objective way.
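The block sub-sampling estimate of the convergence rate of the empirical mean can be sketched as follows: for a second-order stationary, short-memory process the variance of block means should decay roughly like (block size)^(-1), and a log-log slope fit recovers that rate. This is a simplified illustration, not the paper's full nonparametric procedure for 2-D textures:

```python
import numpy as np

def convergence_rate(x, sizes):
    """Estimate alpha in var(block mean) ~ C * size^(-alpha) from
    non-overlapping blocks (1-D sketch of block sub-sampling)."""
    logb, logv = [], []
    for b in sizes:
        m = len(x) // b
        means = x[: m * b].reshape(m, b).mean(axis=1)
        logb.append(np.log(b))
        logv.append(np.log(means.var(ddof=1)))
    slope, _ = np.polyfit(logb, logv, 1)
    return -slope

rng = np.random.default_rng(0)
x = rng.normal(size=2**16)                         # stationary, short-memory
alpha = convergence_rate(x, sizes=[2**k for k in range(2, 9)])
print(round(alpha, 2))
```

A rate well below the minimal rate expected for stationary processes would flag a nonstationary mean or variance, which is the decision rule the paper formalizes.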

  18. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP that decomposes…

  19. Estimation of the Mean of a Univariate Normal Distribution When the Variance is not Known

    NARCIS (Netherlands)

    Danilov, D.L.; Magnus, J.R.

    2002-01-01

We consider the problem of estimating the first k coefficients in a regression equation with k + 1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We investigate properties of this estimator in…

  20. The factor structure of the GHQ-12: the interaction between item phrasing, variance and levels of distress.

    Science.gov (United States)

    Smith, Adam B; Oluboyede, Yemi; West, Robert; Hewison, Jenny; House, Allan O

    2013-02-01

The General Health Questionnaire-12 (GHQ-12) is a self-report instrument for measuring psychological morbidity. Previous work has suggested several multidimensional models for this instrument, although it has recently been proposed that these may be an artefact resulting from a response bias to negatively phrased items. The aim here was to explore the dimensionality of the GHQ-12. Cluster analysis, exploratory factor analysis and confirmatory factor analysis were applied to two waves of data from the English Longitudinal Study of Ageing (ELSA Waves 1 and 3) in order to evaluate the fit and factorial invariance over time of the GHQ-12. Two categories of respondents were identified: high and low scorers. Item variances were higher across all items for high scorers and higher for negatively phrased items (for both high and low scorers). The unidimensional model accounting for variance observed with negative phrasing (Hankins in Clin Pract Epidemiol Ment Health 4:10, 2008) was identified as having the best model fit across the two time points. Item phrasing, item variance and levels of respondents' distress affect the factor structure observed for the GHQ-12, and may explain why different factor structures of the instrument have been found in different populations.

  1. The role of respondents’ comfort for variance in stated choice surveys

    DEFF Research Database (Denmark)

    Emang, Diana; Lundhede, Thomas; Thorsen, Bo Jellesmark

    2017-01-01

Preference elicitation among outdoor recreational users is subject to measurement errors that depend, in part, on survey planning. This study uses data from a choice experiment survey on recreational SCUBA diving to investigate whether self-reported information on respondents' comfort when they complete surveys correlates with the error variance in stated choice models of their responses. Comfort-related variables are included in the scale functions of the scaled multinomial logit models. The hypothesis was that higher comfort reduces error variance in answers, as revealed by a higher scale parameter, and vice versa. Information on, e.g., sleep and time since eating (higher comfort) correlated with scale heterogeneity, and produced lower error variance when controlled for in the model. That respondents' comfort may influence choice behavior suggests that knowledge of the respondents' activity…

  2. Bobtail: A Proof-of-Work Target that Minimizes Blockchain Mining Variance (Draft)

    OpenAIRE

    Bissias, George; Levine, Brian Neil

    2017-01-01

    Blockchain systems are designed to produce blocks at a constant average rate. The most popular systems currently employ a Proof of Work (PoW) algorithm as a means of creating these blocks. Bitcoin produces, on average, one block every 10 minutes. An unfortunate limitation of all deployed PoW blockchain systems is that the time between blocks has high variance. For example, 5% of the time, Bitcoin's inter-block time is at least 40 minutes. This variance impedes the consistent flow of validated...

  3. Evolution of Robustness and Plasticity under Environmental Fluctuation: Formulation in Terms of Phenotypic Variances

    Science.gov (United States)

    Kaneko, Kunihiko

    2012-09-01

    The characterization of plasticity, robustness, and evolvability, an important issue in biology, is studied in terms of phenotypic fluctuations. By numerically evolving gene regulatory networks, the proportionality between the phenotypic variances of epigenetic and genetic origins is confirmed. The former is given by the variance of the phenotypic fluctuation due to noise in the developmental process; and the latter, by the variance of the phenotypic fluctuation due to genetic mutation. The relationship suggests a link between robustness to noise and to mutation, since robustness can be defined by the sharpness of the distribution of the phenotype. Next, the proportionality between the variances is demonstrated to also hold over expressions of different genes (phenotypic traits) when the system acquires robustness through the evolution. Then, evolution under environmental variation is numerically investigated and it is found that both the adaptability to a novel environment and the robustness are made compatible when a certain degree of phenotypic fluctuations exists due to noise. The highest adaptability is achieved at a certain noise level at which the gene expression dynamics are near the critical state to lose the robustness. Based on our results, we revisit Waddington's canalization and genetic assimilation with regard to the two types of phenotypic fluctuations.

  4. The Evolution of Human Intelligence and the Coefficient of Additive Genetic Variance in Human Brain Size

    Science.gov (United States)

    Miller, Geoffrey F.; Penke, Lars

    2007-01-01

    Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…

  5. A real-time automatic contrast adjustment method for high-bit-depth cameras based on histogram variance analysis

    Science.gov (United States)

    Zhao, Jun; Lu, Jun

    2015-10-01

In this paper we propose an efficient method to enhance contrast in digital video streams in real time by exploiting histogram variances and adaptively adjusting gamma curves. The proposed method aims to overcome the limitations of the conventional histogram equalization method, which often produces noisy, unrealistic effects in images. To improve visual quality, we use the gamma correction technique and choose different gamma curves according to the histogram variance of the images. By using this scheme, the details of an image can be enhanced while the mean brightness level is kept. Experimental results demonstrate that our method is simple, efficient, and robust for both low and high dynamic scenes, and hence well suited for real-time, high-bit-depth video acquisition.
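A minimal sketch of the idea — pick a gamma curve from the image's normalized-intensity variance and apply it through a lookup table — is below. The variance thresholds and gamma values are invented placeholders, not the paper's tuned rule, and this simple version does not preserve mean brightness exactly:

```python
import numpy as np

def adaptive_gamma(img: np.ndarray, bit_depth: int = 12) -> np.ndarray:
    """Select a gamma curve from the variance of the normalized intensities
    and apply it via a lookup table (illustrative heuristic only)."""
    levels = 2 ** bit_depth
    norm = img.astype(np.float64) / (levels - 1)
    v = norm.var()
    # Low variance -> flat scene -> stronger curve; else a gentler one
    gamma = 0.6 if v < 0.02 else (0.8 if v < 0.05 else 1.0)
    lut = ((np.arange(levels) / (levels - 1)) ** gamma * (levels - 1))
    lut = lut.astype(np.uint16)
    return lut[img]

rng = np.random.default_rng(0)
dark = rng.integers(0, 400, size=(64, 64), dtype=np.uint16)  # low-contrast frame
out = adaptive_gamma(dark)
print(dark.mean(), out.mean())
```

Because the curve is applied as a per-frame LUT, the per-pixel cost is a single table lookup, which is what makes this family of methods viable for real-time high-bit-depth streams.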

  6. High Efficiency Computation of the Variances of Structural Evolutionary Random Responses

    Directory of Open Access Journals (Sweden)

    J.H. Lin

    2000-01-01

    Full Text Available For structures subjected to stationary or evolutionary white/colored random noise, their various response variances satisfy algebraic or differential Lyapunov equations. The solution of these Lyapunov equations used to be very difficult. A precise integration method is proposed in the present paper, which solves such Lyapunov equations accurately and very efficiently.
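For a linear system driven by stationary white noise, the stationary response covariance indeed satisfies an algebraic Lyapunov equation, which standard libraries can solve directly (the paper's contribution is a precise-integration solver, which is not shown here). A sketch using SciPy on a damped oscillator with a known closed-form answer:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stationary covariance P of dx = A x dt + dW with E[dW dW^T] = Q dt
# solves the algebraic Lyapunov equation  A P + P A^T + Q = 0.
# Example: damped oscillator x'' + 2*zeta*wn*x' + wn^2*x = w(t)
wn, zeta, S0 = 2.0, 0.1, 1.0            # natural freq, damping, noise intensity
A = np.array([[0.0, 1.0],
              [-wn**2, -2 * zeta * wn]])
Q = np.array([[0.0, 0.0],
              [0.0, S0]])
P = solve_continuous_lyapunov(A, -Q)    # SciPy solves A X + X A^T = q

# Closed-form check for this oscillator: var(x) = S0 / (4*zeta*wn^3)
print(P[0, 0], S0 / (4 * zeta * wn**3))
```

Note the sign convention: SciPy solves A X + X Aᵀ = q, so the noise intensity matrix is passed as -Q.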

  7. The Effect of Some Estimators of Between-Study Variance on Random

    African Journals Online (AJOL)

    Samson Henry Dogo

…overall mean treatment effect. Random-effects models account for study characteristics such as study design, different treatment protocols, and gender and cultural differences between study participants by incorporating an additional source of variability, the between-study variance τ², alongside the variability due to sampling.
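A common estimator of the between-study variance τ² is the DerSimonian-Laird moment estimator, one of the several estimators such articles compare. A minimal sketch with invented study data:

```python
import numpy as np

def tau_squared_dl(effects, variances):
    """DerSimonian-Laird estimator of the between-study variance tau^2
    for a random-effects meta-analysis."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)    # fixed-effect weights
    mu_fixed = (w * y).sum() / w.sum()
    q = (w * (y - mu_fixed) ** 2).sum()             # Cochran's Q statistic
    k = len(y)
    c = w.sum() - (w ** 2).sum() / w.sum()
    return max(0.0, (q - (k - 1)) / c)              # truncated at zero

# Homogeneous studies -> tau^2 ~ 0; heterogeneous studies -> tau^2 > 0
effects_hom = [0.50, 0.52, 0.49, 0.51]
effects_het = [0.10, 0.90, -0.20, 1.20]
variances = [0.04, 0.05, 0.04, 0.06]
print(tau_squared_dl(effects_hom, variances),
      tau_squared_dl(effects_het, variances))
```

The truncation at zero is exactly why the choice of τ² estimator matters: different estimators truncate or shrink differently, which changes the random-effects weights and hence the pooled effect.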

  8. Measurement Error Variance of Test-Day Obervations from Automatic Milking Systems

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik S

    2012-01-01

Automated milking systems (AMS) are becoming more popular in dairy farms. In this paper we present an approach for estimation of residual error covariance matrices for AMS and conventional milking system (CMS) observations. The variances for other random effects are kept as defined in the evaluation…

  9. The latitude dependence of the variance of zonally averaged quantities. [in polar meteorology with attention to geometrical effects of earth

    Science.gov (United States)

    North, G. R.; Bell, T. L.; Cahalan, R. F.; Moeng, F. J.

    1982-01-01

    Geometric characteristics of the spherical earth are shown to be responsible for the increase of variance with latitude of zonally averaged meteorological statistics. An analytic model is constructed to display the effect of a spherical geometry on zonal averages, employing a sphere labeled with radial unit vectors in a real, stochastic field expanded in complex spherical harmonics. The variance of a zonally averaged field is found to be expressible in terms of the spectrum of the vector field of the spherical harmonics. A maximum variance is then located at the poles, and the ratio of the variance to the zonally averaged grid-point variance, weighted by the cosine of the latitude, yields the zonal correlation typical of the latitude. An example is provided for the 500 mb level in the Northern Hemisphere compared to 15 years of data. Variance is determined to increase north of 60 deg latitude.

  10. A Mean-Variance Diagnosis of the Financial Crisis: International Diversification and Safe Havens

    Directory of Open Access Journals (Sweden)

    Alexander Eptas

    2010-12-01

    Full Text Available We use mean-variance analysis with short selling constraints to diagnose the effects of the recent global financial crisis by evaluating the potential benefits of international diversification in the search for ‘safe havens’. We use stock index data for a sample of developed, advanced-emerging and emerging countries. ‘Text-book’ results are obtained for the pre-crisis analysis with the optimal portfolio for any risk-averse investor being obtained as the tangency portfolio of the All-Country portfolio frontier. During the crisis there is a disjunction between bank lending and stock markets revealed by negative average returns and an absence of any empirical Capital Market Line. Israel and Colombia emerge as the safest havens for any investor during the crisis. For Israel this may reflect the protection afforded by special trade links and diaspora support, while for Colombia we speculate that this reveals the impact on world financial markets of the demand for cocaine.
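Mean-variance optimization with a short-selling constraint, as used in the paper, can be sketched as a quadratic program: minimize portfolio variance subject to non-negative weights that sum to one and hit a target mean return. The returns and covariances below are invented, not the paper's stock-index data:

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_portfolio(mu, cov, target_return):
    """Minimum-variance weights for a target mean return, short selling
    excluded (weights >= 0, summing to 1)."""
    n = len(mu)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ mu - target_return}]
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons, method="SLSQP")
    return res.x

mu = np.array([0.02, 0.05, 0.08])                  # hypothetical index returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_portfolio(mu, cov, target_return=0.05)
print(np.round(w, 3))
```

Sweeping `target_return` over a grid traces out the constrained efficient frontier; during the crisis period the paper reports that no such frontier with positive mean returns exists for most markets.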

  11. Application of variance reduction technique to nuclear transmutation system driven by accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

In Japan, it is the basic policy to dispose of the high-level radioactive waste arising from spent nuclear fuel in stable deep strata after glass solidification. If the useful elements in the waste can be separated and utilized, resources are used effectively, and high economical efficiency and safety can be expected for disposal in strata. The Japan Atomic Energy Research Institute proposed the hybrid type transmutation system, in which a high intensity proton accelerator and a subcritical fast core are combined, or a nuclear reactor optimized exclusively for transmutation. The tungsten target, minor actinide nitride fuel transmutation system and the molten minor actinide chloride salt target fuel transmutation system are outlined, and conceptual figures of both systems are shown. As the method of analysis, Version 2.70 of the Lahet Code System, developed by Los Alamos National Laboratory in the USA, was adopted. When analyzing the accelerator-driven subcritical core in the energy range below 20 MeV, a variance reduction technique must be applied. (K.I.)

  12. Estimates of array and pool-construction variance for planning efficient DNA-pooling genome wide association studies

    Science.gov (United States)

    2011-01-01

Background Until recently, genome-wide association studies (GWAS) have been restricted to research groups with the budget necessary to genotype hundreds, if not thousands, of samples. Replacing individual genotyping with genotyping of DNA pools in Phase I of a GWAS has proven successful, and dramatically altered the financial feasibility of this approach. When conducting a pool-based GWAS, how well SNP allele frequency is estimated from a DNA pool will influence a study's power to detect associations. Here we address how to control the variance in allele frequency estimation when DNAs are pooled, and how to plan and conduct the most efficient well-powered pool-based GWAS. Methods By examining the variation in allele frequency estimation on SNP arrays between and within DNA pools we determine how array variance [var(e_array)] and pool-construction variance [var(e_construction)] contribute to the total variance of allele frequency estimation. This information is useful in deciding whether replicate arrays or replicate pools are most useful in reducing variance. Our analysis is based on 27 DNA pools ranging in size from 74 to 446 individual samples, genotyped on a collective total of 128 Illumina beadarrays: 24 1M-Single, 32 1M-Duo, and 72 660-Quad. Results For all three Illumina SNP array types our estimates of var(e_array) were similar, between 3-4 × 10^-4 for normalized data. Var(e_construction) accounted for between 20-40% of pooling variance across 27 pools in normalized data. Conclusions We conclude that relative to var(e_array), var(e_construction) is of less importance in reducing the variance in allele frequency estimation from DNA pools; however, our data suggest that on average it may be more important than previously thought. We have prepared a simple online tool, PoolingPlanner (available at http://www.kchew.ca/PoolingPlanner/), which calculates the effective sample size (ESS) of a DNA pool given a range of replicate array values. ESS can be used in a power…
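The split between array variance and pool-construction variance is a one-way variance-components decomposition: replicate arrays of the same pool reflect array error, and the extra spread between independently constructed pools reflects construction error. A sketch on simulated replicate data (all numbers invented, and equal replicate counts assumed for simplicity):

```python
import numpy as np

def variance_components(groups):
    """One-way ANOVA variance components from a (n_pools, n_replicates)
    matrix: within-group variance (analogous to var(e_array)) and
    between-group variance (analogous to var(e_construction))."""
    groups = np.asarray(groups, dtype=float)
    k, n = groups.shape
    ms_within = groups.var(axis=1, ddof=1).mean()        # pooled within MS
    ms_between = n * groups.mean(axis=1).var(ddof=1)     # between-group MS
    var_within = ms_within
    var_between = max(0.0, (ms_between - ms_within) / n)
    return var_within, var_between

rng = np.random.default_rng(0)
pool_effect = rng.normal(0, 0.02, size=50)                   # construction error
data = pool_effect[:, None] + rng.normal(0, 0.01, (50, 8))   # + array error
vw, vb = variance_components(data)
print(vw, vb)
```

Comparing the two components answers the paper's design question directly: replicate arrays only shrink the within component, while replicate pools are needed to shrink the between component.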

  13. How large are actor and partner effects of personality on relationship satisfaction? The importance of controlling for shared method variance.

    Science.gov (United States)

    Orth, Ulrich

    2013-10-01

    Previous research suggests that the personality of a relationship partner predicts not only the individual's own satisfaction with the relationship but also the partner's satisfaction. Based on the actor-partner interdependence model, the present research tested whether actor and partner effects of personality are biased when the same method (e.g., self-report) is used for the assessment of personality and relationship satisfaction and, consequently, shared method variance is not controlled for. Data came from 186 couples, of whom both partners provided self- and partner reports on the Big Five personality traits. Depending on the research design, actor effects were larger than partner effects (when using only self-reports), smaller than partner effects (when using only partner reports), or of about the same size as partner effects (when using self- and partner reports). The findings attest to the importance of controlling for shared method variance in dyadic data analysis.

  14. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trails are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.

  15. Estimating Mean and Variance Through Quantiles : An Experimental Comparison of Different Methods

    NARCIS (Netherlands)

    Moors, J.J.A.; Strijbosch, L.W.G.; van Groenendaal, W.J.H.

    2002-01-01

    If estimates of mean and variance are needed and only experts' opinions are available, the literature agrees that it is wise behaviour to ask only for their (subjective) estimates of quantiles: from these, estimates of the desired parameters are calculated. Quite a number of methods have been

  16. Penerapan Model Multivariat Analisis of Variance dalam Mengukur Persepsi Destinasi Wisata

    Directory of Open Access Journals (Sweden)

    Robert Tang Herman

    2012-05-01

    Full Text Available The purpose of this research is to provide conceptual and infrastructure tools for Dinas Pariwisata DKI Jakarta to improve its ability to evaluate business performance based on market responsiveness. Capturing market responsiveness is the initial step in building an industry mapping. The research started with secondary research to build a data classification system, followed by primary research collecting data through a market survey. Secondary data were obtained from Dinas Pariwisata DKI, while primary data were collected through a survey using questionnaires addressed to the whole market. The collected data were then analyzed with multivariate analysis of variance to develop the mapping. The cluster analysis distinguishes the potential market segments based on their responses to the industry classification, builds the classification system, identifies the gaps and their importance, and addresses other issues related to the role of the mapping system. This mapping system will therefore help Dinas Pariwisata DKI improve its capabilities and business performance based on market responsiveness: which market segments are potential for each specific classification, and what their needs, wants, and demands are. The results can be used to recommend that Dinas Pariwisata DKI deliver what the market needs and wants across tourism destinations based on the resulting classification, to develop market growth estimates, and, in the long term, to improve economic and market growth.

  17. On the Computation of Optimal Monotone Mean-Variance Portfolios via Truncated Quadratic Utility

    OpenAIRE

    Ales Cerný; Fabio Maccheroni; Massimo Marinacci; Aldo Rustichini

    2008-01-01

    We report a surprising link between optimal portfolios generated by a special type of variational preferences called divergence preferences (cf. [8]) and optimal portfolios generated by classical expected utility. As a special case we connect optimization of truncated quadratic utility (cf. [2]) to the optimal monotone mean-variance portfolios (cf. [9]), thus simplifying the computation of the latter.

  18. How Reliable Are Students' Evaluations of Teaching Quality? A Variance Components Approach

    Science.gov (United States)

    Feistauer, Daniela; Richter, Tobias

    2017-01-01

    The inter-rater reliability of university students' evaluations of teaching quality was examined with cross-classified multilevel models. Students (N = 480) evaluated lectures and seminars over three years with a standardised evaluation questionnaire, yielding 4224 data points. The total variance of these student evaluations was separated into the…

  19. Genetic variances, trends and mode of inheritance for hip and elbow dysplasia in Finnish dog populations

    NARCIS (Netherlands)

    Mäki, K.; Groen, A.F.; Liinamo, A.E.; Ojala, M.

    2002-01-01

    The aims of this study were to assess genetic variances, trends and mode of inheritance for hip and elbow dysplasia in Finnish dog populations. The influence of time-dependent fixed effects in the model when estimating the genetic trends was also studied. Official hip and elbow dysplasia screening

  20. Saddlepoint approximations to the mean and variance of the extended hyper geometric distribution

    NARCIS (Netherlands)

    Eisinga, R.; Pelzer, B.

    2010-01-01

    Conditional inference on 2 x 2 tables with fixed margins and unequal probabilities is based on the extended hypergeometric distribution. If the support of the distribution is large, exact calculation of the conditional mean and variance of the table entry may be computationally demanding. This paper
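
    To make the computational burden concrete, here is the brute-force calculation that saddlepoint approximations are meant to replace: the exact conditional mean and variance of the extended (Fisher noncentral) hypergeometric distribution, obtained by summing over the whole support. The notation (m1, m2, n, omega) is an assumption, not necessarily the paper's.

```python
# Exact moments of the extended (Fisher noncentral) hypergeometric
# distribution by summation over the full support -- the computation that
# saddlepoint approximations replace when the support is large.
from math import comb

def ext_hypergeom_moments(m1, m2, n, omega):
    """2x2 table with row margins m1, m2 and first-column total n;
    omega is the odds ratio. Returns (mean, variance) of the (1,1) entry."""
    lo, hi = max(0, n - m2), min(n, m1)   # support of the table entry
    xs = range(lo, hi + 1)
    w = [comb(m1, x) * comb(m2, n - x) * omega ** x for x in xs]
    total = sum(w)
    mean = sum(x * wx for x, wx in zip(xs, w)) / total
    second = sum(x * x * wx for x, wx in zip(xs, w)) / total
    return mean, second - mean ** 2

# omega = 1 recovers the central hypergeometric: mean = n*m1/(m1+m2).
print(ext_hypergeom_moments(10, 15, 8, 1.0))
```

    The cost is one pass over the support per moment, which is exactly what becomes demanding when the support is large.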

  1. Firm Size and Growth Rate Variance: the Effects of Data Truncation

    NARCIS (Netherlands)

    Capasso, M.|info:eu-repo/dai/nl/314016627; Cefis, E.|info:eu-repo/dai/nl/274516233

    2010-01-01

    This paper discusses the effects of the existence of natural and/or exogenously imposed thresholds in firm size distributions, on estimations of the relation between firm size and variance in firm growth rates. We explain why the results in the literature on this relationship are not consistent. We

  2. Estimation of variance components and genetic trends for twinning rate in Holstein dairy cattle of Iran.

    Science.gov (United States)

    Ghavi Hossein-Zadeh, N; Nejati-Javaremi, A; Miraei-Ashtiani, S R; Kohram, H

    2009-07-01

    Calving records from the Animal Breeding Center of Iran, collected from January 1991 to December 2007 and comprising 1,163,594 Holstein calving events from 2,552 herds, were analyzed using a linear animal model, linear sire model, threshold animal model, and threshold sire model to estimate variance components, heritabilities, genetic correlations, and genetic trends for twinning rate in the first, second, and third parities. The overall twinning rate was 3.01%. Mean incidence of twins increased from first to fourth and later parities: 1.10, 3.20, 4.22, and 4.50%, respectively. For first-parity cows, a maximum frequency of twinning was observed from January through April (1.36%), and second- and third-parity cows showed peaks from July to September (at 3.35 and 4.55%, respectively). The phenotypic rate of twinning decreased from 1991 to 2007 for the first, second, and third parities. Sire predicted transmitting abilities were estimated using linear sire model and threshold sire model analyses. Sire transmitting abilities for twinning rate in the first, second, and third parities ranged from -0.30 to 0.42, -0.32 to 0.31, and -0.27 to 0.30, respectively. Heritability estimates of twinning rate for parities 1, 2, and 3 ranged from 1.66 to 10.6%, 1.35 to 9.0%, and 1.10 to 7.3%, respectively, using different models for analysis. Heritability estimates for twinning rate, obtained from the analysis of threshold models, were greater than the estimates of linear models. Solutions for age at calving for the first, second, and third parities demonstrated that cows older at calving were more likely to have twins. Genetic correlations for twinning rate between parities 2 and 3 were greater than correlations between parities 1 and 2 and between parities 1 and 3. 
There was a slightly increasing trend for twinning rate in parities 1, 2, and 3 over time with the analysis of linear animal and linear sire models, but the trend for twinning rate in parities 1, 2, and 3 with threshold

  3. The dynamics of integrate-and-fire: mean versus variance modulations and dependence on baseline parameters.

    Science.gov (United States)

    Pressley, Joanna; Troyer, Todd W

    2011-05-01

    The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
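
    A toy LIF simulation makes the setup above concrete. Every parameter value and name here is an illustrative assumption, not taken from the study; it only shows the baseline model whose mean and variance modulations the paper analyses.

```python
# Toy leaky integrate-and-fire (LIF) neuron, Euler-Maruyama integration.
# All parameter values are illustrative assumptions.
import math, random

def lif_spike_count(mu, sigma, t_end=5.0, dt=1e-4, tau=0.02,
                    v_th=1.0, v_reset=0.0, seed=7):
    """Count spikes of an LIF neuron driven by input with mean mu and
    white-noise amplitude sigma over t_end seconds."""
    random.seed(seed)
    v, spikes = 0.0, 0
    for _ in range(int(t_end / dt)):
        # leaky integration of the input current plus noise
        v += dt * (-v / tau + mu) + sigma * math.sqrt(dt) * random.gauss(0, 1)
        if v >= v_th:            # threshold crossing: emit spike and reset
            spikes += 1
            v = v_reset
    return spikes

# Raising the mean input raises the firing rate (mu*tau below vs above threshold).
print(lif_spike_count(mu=40.0, sigma=0.5), lif_spike_count(mu=60.0, sigma=0.5))
```

    Sinusoidal modulation of mu or sigma, as in the study, would simply replace the constants with time-dependent functions inside the loop.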

  4. How Well Can We Estimate Error Variance of Satellite Precipitation Data Around the World?

    Science.gov (United States)

    Gebregiorgis, A. S.; Hossain, F.

    2014-12-01

    The traditional approach to measuring precipitation by placing a probe on the ground will likely never be adequate or affordable in most parts of the world. Fortunately, satellites today provide a continuous global bird's-eye view (above ground) at any given location. However, the usefulness of such precipitation products for hydrological applications depends on their error characteristics. Thus, providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating satellite precipitation error variance using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. The goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and season are considered as the governing factors used to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of satellite precipitation rate. The error variance models were tested over the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variance of the satellite precipitation products. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, the quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.

  5. The mean and variance of environmental temperature interact to determine physiological tolerance and fitness.

    Science.gov (United States)

    Bozinovic, Francisco; Bastías, Daniel A; Boher, Francisca; Clavijo-Baquet, Sabrina; Estay, Sergio A; Angilletta, Michael J

    2011-01-01

    Global climate change poses one of the greatest threats to biodiversity. Most analyses of the potential biological impacts have focused on changes in mean temperature, but changes in thermal variance will also impact organisms and populations. We assessed the combined effects of the mean and variance of temperature on thermal tolerances, organismal survival, and population growth in Drosophila melanogaster. Because the performance of ectotherms relates nonlinearly to temperature, we predicted that responses to thermal variation (±0° or ±5°C) would depend on the mean temperature (17° or 24°C). Consistent with our prediction, thermal variation enhanced the rate of population growth (r(max)) at a low mean temperature but depressed this rate at a high mean temperature. The interactive effect on fitness occurred despite the fact that flies improved their heat and cold tolerances through acclimation to thermal conditions. Flies exposed to a high mean and a high variance of temperature recovered from heat coma faster and survived heat exposure better than did flies that developed at other conditions. Relatively high survival following heat exposure was associated with low survival following cold exposure. Recovery from chill coma was affected primarily by the mean temperature; flies acclimated to a low mean temperature recovered much faster than did flies acclimated to a high mean temperature. To develop more realistic predictions about the biological impacts of climate change, one must consider the interactions between the mean environmental temperature and the variance of environmental temperature.

  6. Relative variance of the mean-squared pressure in multimode media: rehabilitating former approaches.

    Science.gov (United States)

    Monsef, Florian; Cozza, Andrea; Rodrigues, Dominique; Cellard, Patrick; Durocher, Jean-Noel

    2014-11-01

    The commonly accepted model for the relative variance of transmission functions in room acoustics, derived by Weaver, aims at including the effects of correlation between eigenfrequencies. This model is based on an analytical expression of the relative variance derived by means of an approximated correlation function. The relevance of the approximation used for modeling such correlation is questioned here. Weaver's model was motivated by the fact that earlier models derived by Davy and Lyon assumed independent eigenfrequencies and led to an overestimation with respect to relative variances found in practice. It is shown here that this overestimation is due to an inadequate truncation of the modal expansion, and to an improper choice of the frequency range over which ensemble averages of the eigenfrequencies are defined. An alternative definition is proposed, settling the inconsistency; predicted relative variances are found to be in good agreement with experimental data. These results rehabilitate former approaches that were based on independence assumptions between eigenfrequencies. Some former studies showed that simpler correlation models could be used to predict the statistics of some field-related physical quantity at low modal overlap. The present work confirms that this is also the case when dealing with transmission functions.

  7. The problem of low variance voxels in statistical parametric mapping; a new hat avoids a 'haircut'.

    Science.gov (United States)

    Ridgway, Gerard R; Litvak, Vladimir; Flandin, Guillaume; Friston, Karl J; Penny, Will D

    2012-02-01

    Statistical parametric mapping (SPM) locates significant clusters based on a ratio of signal to noise (a 'contrast' of the parameters divided by its standard error) meaning that very low noise regions, for example outside the brain, can attain artefactually high statistical values. Similarly, the commonly applied preprocessing step of Gaussian spatial smoothing can shift the peak statistical significance away from the peak of the contrast and towards regions of lower variance. These problems have previously been identified in positron emission tomography (PET) (Reimold et al., 2006) and voxel-based morphometry (VBM) (Acosta-Cabronero et al., 2008), but can also appear in functional magnetic resonance imaging (fMRI) studies. Additionally, for source-reconstructed magneto- and electro-encephalography (M/EEG), the problems are particularly severe because sparsity-favouring priors constrain meaningfully large signal and variance to a small set of compactly supported regions within the brain. (Acosta-Cabronero et al., 2008) suggested adding noise to background voxels (the 'haircut'), effectively increasing their noise variance, but at the cost of contaminating neighbouring regions with the added noise once smoothed. Following theory and simulations, we propose to modify--directly and solely--the noise variance estimate, and investigate this solution on real imaging data from a range of modalities. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Implementation of variance-reduction techniques for Monte Carlo nuclear logging calculations with neutron sources

    NARCIS (Netherlands)

    Maucec, M

    2005-01-01

    Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented.

  9. Allan variance of frequency fluctuations due to momentum exchange and thermomechanical noises

    NARCIS (Netherlands)

    Palasantzas, George A.

    2007-01-01

    We investigate the Allan variance of nanoresonators with random rough surfaces under the simultaneous influence of thermomechanical and momentum exchange noises. Random roughness is observed in various surface engineering processes, and it is characterized by the roughness amplitude w, the lateral

  10. Selection for uniformity in livestock by exploiting genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2008-01-01

    In some situations, it is worthwhile to change not only the mean, but also the variability of traits by selection. Genetic variation in residual variance may be utilised to improve uniformity in livestock populations by selection. The objective was to investigate the effects of genetic parameters,

  11. Selection for uniformity in livestock by exploiting genetic heterogeneity of residual variance

    NARCIS (Netherlands)

    Mulder, H.A.; Veerkamp, R.F.; Vereijken, A.; Bijma, P.; Hill, W.G.

    2008-01-01

    In some situations, it is worthwhile to change not only the mean, but also the variability of traits by selection. Genetic variation in residual variance may be utilised to improve uniformity in livestock populations by selection. The objective was to investigate the effects of genetic parameters,

  12. The Rise and Fall of S&P500 Variance Futures

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); J.A. Jiménez-Martín (Juan-Ángel); M.J. McAleer (Michael); T. Pérez-Amaral (Teodosio)

    2011-01-01

    Modelling, monitoring and forecasting volatility are indispensable to sensible portfolio risk management. The volatility of an asset or composite index can be traded by using volatility derivatives, such as volatility and variance swaps, options and futures. The most popular volatility

  13. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

    We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend … of the percent determination of the Hubble constant in the local universe.

  14. Exploring authentic skim and nonfat dry milk powder variance for the development of nontargeted adulterant detection methods using near-infrared spectroscopy and chemometrics.

    Science.gov (United States)

    Botros, Lucy L; Jablonski, Joseph; Chang, Claire; Bergana, Marti Mamula; Wehling, Paul; Harnly, James M; Downey, Gerard; Harrington, Peter; Potts, Alan R; Moore, Jeffrey C

    2013-10-16

    A multinational collaborative team led by the U.S. Pharmacopeial Convention is currently investigating the potential of near-infrared (NIR) spectroscopy for nontargeted detection of adulterants in skim and nonfat dry milk powder. The development of a compendial method is challenged by the range of authentic or nonadulterated milk powders available worldwide. This paper investigates the sources of variance in 41 authentic bovine skim and nonfat milk powders as detected by NIR diffuse reflectance spectroscopy and chemometrics. Exploratory analysis by principal component analysis and varimax factor rotation revealed significant variance in authentic samples and highlighted outliers from a single manufacturer. Spectral preprocessing and outlier removal methods reduced ambient and measurement sources of variance, most likely linked to changes in moisture together with sampling, preparation, and presentation factors. Results indicate that significant chemical variance exists in different skim and nonfat milk powders that will likely affect the performance of adulterant detection methods by NIR spectroscopy.

  15. Accounting for Variance in Hyperspectral Data Coming from Limitations of the Imaging System

    Science.gov (United States)

    Shurygin, B.; Shestakova, M.; Nikolenko, A.; Badasen, E.; Strakhov, P.

    2016-06-01

    Over the course of the past few years, a number of methods were developed to incorporate hyperspectral imaging specifics into generic data mining techniques traditionally used for hyperspectral data processing. Projection pursuit methods embody the largest class of methods employed for hyperspectral image data reduction; however, they all have certain drawbacks making them either hard to use or inefficient. It has been shown that hyperspectral image (HSI) statistics tend to display "heavy tails" (Manolakis 2003; Theiler 2005), rendering most of the projection pursuit methods hard to use. Taking into consideration the magnitude of the described deviations of observed data PDFs from the normal distribution, it is apparent that a priori knowledge of variance in the data caused by the imaging system should be employed in order to efficiently classify objects on HSIs (Kerr 2015), especially in cases of wildly varying SNR. A number of attempts to describe this variance and compensating techniques have been made (Aiazzi 2006); however, new data quality standards are not yet set and accounting for the detector response is made under a large set of assumptions. The current paper addresses the issue of hyperspectral image classification in the context of different variance sources, based on the knowledge of calibration curves (both spectral and radiometric) obtained for each pixel of the imaging camera. A camera produced by ZAO NPO Lepton (Russia) was calibrated and used to obtain a test image. A priori known values of SNR and spectral channel cross-correlation were incorporated into calculating the test statistics used in dimensionality reduction and feature extraction. A modification of the Expectation-Maximization classification algorithm for a non-Gaussian model, as described by Veracini (2010), was further employed. The impact of coarsening the calibration data by ignoring non-uniformities on the false alarm rate was studied. The case study shows both regions of scene-dominated variance and sensor-dominated variance, leading

  16. The Allan variance in the presence of a compound Poisson process modelling clock frequency jumps

    Science.gov (United States)

    Formichella, Valerio

    2016-12-01

    Atomic clocks can be affected by frequency jumps occurring at random times and with a random amplitude. The frequency jumps degrade the clock stability and this is captured by the Allan variance. In this work we assume that the random jumps can be modelled by a compound Poisson process, independent of the other stochastic and deterministic processes affecting the clock stability. Then, we derive the analytical expression of the Allan variance of a jumping clock. We find that the analytical Allan variance does not depend on the actual shape of the jumps amplitude distribution, but only on its first and second moments, and its final form is the same as for a clock with a random walk of frequency and a frequency drift. We conclude that the Allan variance cannot distinguish between a compound Poisson process and a Wiener process, hence it may not be sufficient to correctly identify the fundamental noise processes affecting a clock. The result is general and applicable to any oscillator, whose frequency is affected by a jump process with the described statistics.
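
    The effect can be illustrated numerically. This is an independent sketch, not the paper's derivation, and all parameter values are made up: simulate a frequency record affected only by Poisson-timed Gaussian jumps and compute its overlapping Allan variance, which then grows with averaging time in the same way as for a random walk of frequency.

```python
# Independent illustration: a clock frequency affected only by
# Poisson-timed Gaussian jumps, and its overlapping Allan variance.
import random

def jumping_clock_freq(n, dt, rate, jump_sigma, seed=1):
    """Fractional-frequency samples of a clock whose frequency jumps at
    Poisson rate `rate` with N(0, jump_sigma) amplitudes; for rate*dt << 1
    a Bernoulli draw per step is a fair stand-in for the Poisson count."""
    random.seed(seed)
    freq, y = 0.0, []
    for _ in range(n):
        if random.random() < rate * dt:
            freq += random.gauss(0.0, jump_sigma)
        y.append(freq)
    return y

def allan_variance(y, m):
    """Overlapping Allan variance of frequency data y at averaging factor m."""
    avg = [sum(y[i:i + m]) / m for i in range(len(y) - m + 1)]
    diffs = [(avg[i + m] - avg[i]) ** 2 for i in range(len(avg) - m)]
    return sum(diffs) / (2 * len(diffs))

y = jumping_clock_freq(20000, 1.0, 0.01, 1e-3)
print([allan_variance(y, m) for m in (1, 4, 16)])  # grows roughly linearly with m
```

    The roughly linear growth with averaging factor is the random-walk-FM signature the paper derives analytically, which is why the Allan variance alone cannot separate the two processes.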

  17. Mean-Variance Portfolio Selection with a Fixed Flow of Investment in ...

    African Journals Online (AJOL)

    We consider a mean-variance portfolio selection problem for a fixed flow of investment in a continuous time framework. We consider a market structure that is characterized by a cash account, an indexed bond and a stock. We obtain the expected optimal terminal wealth for the investor. We also obtain a closed-form ...

  18. Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling

    Directory of Open Access Journals (Sweden)

    Ze Yu

    2016-07-01

    Full Text Available Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is the implementation of time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath.

  19. Ulnar variance as a predictor of persistent instability following Galeazzi fracture-dislocations.

    Science.gov (United States)

    Takemoto, Richelle; Sugi, Michelle; Immerman, Igor; Tejwani, Nirmal; Egol, Kenneth A

    2014-03-01

    We investigated the radiographic parameters that may predict distal radial ulnar joint (DRUJ) instability in surgically treated radial shaft fractures. In our clinical experience, there are no previously reported radiographic parameters that are universally predictive of DRUJ instability following radial shaft fracture. Fifty consecutive patients, ages 20-79 years, with unilateral radial shaft fractures and possible associated DRUJ injury were retrospectively identified over a 5-year period. Distance from radial carpal joint (RCJ) to fracture proportional to radial shaft length, ulnar variance, and ulnar styloid fractures were correlated with DRUJ instability after surgical treatment. Twenty patients had persistent DRUJ incongruence/instability following fracture fixation. As a proportion of radial length, the distance from the RCJ to the fracture line did not significantly differ between those with persistent DRUJ instability and those without (p = 0.34). The average initial ulnar variance was 5.5 mm (range 2-12 mm, SD = 3.2) in patients with DRUJ instability and 3.8 mm (range 0-11 mm, SD = 3.5) in patients without. Only 4/20 patients (20%) with DRUJ instability had normal ulnar variance (-2 to +2 mm) versus 15/30 (50%) patients without (p = 0.041). In the setting of a radial shaft fracture, ulnar variance greater or less than 2 mm was associated with a greater likelihood of DRUJ incongruence/instability following fracture fixation.

  20. A Mean-Variance Explanation of FDI Flows to Developing Countries

    DEFF Research Database (Denmark)

    Sunesen, Eva Rytter

    country to another. This will have implications for the way investors evaluate the return and risk of investing abroad. This paper utilises a simple mean-variance optimisation framework where global and regional factors capture the interdependence between countries. The model implies that FDI is driven...

  1. An evaluation of how downscaled climate data represents historical precipitation characteristics beyond the means and variances

    CSIR Research Space (South Africa)

    Kusangaya, S

    2016-09-01

    Full Text Available represented the underlying historical precipitation characteristics beyond the means and variances. Using the uMngeni Catchment in KwaZulu-Natal, South Africa as a case study, the occurrence of rainfall, rainfall threshold events and wet dry sequence...

  2. Do exchange rates follow random walks? A variance ratio test of the ...

    African Journals Online (AJOL)

    The random-walk hypothesis in foreign-exchange rates market is one of the most researched areas, particularly in developed economies. However, emerging markets in sub-Saharan Africa have received little attention in this regard. This study applies Lo and MacKinlay's (1988) conventional variance ratio test and Wright's ...
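
    The statistic behind this kind of study can be sketched briefly. This is a minimal, hedged illustration of the Lo and MacKinlay (1988) variance-ratio idea, not the study's implementation: under a random walk, the variance of q-period log returns is q times the one-period variance, so VR(q) should be close to 1. Only the plain ratio is shown; the heteroskedasticity-robust test statistic is not reproduced.

```python
# Minimal variance-ratio sketch: VR(q) near 1 is consistent with a random
# walk; persistent VR > 1 suggests positive autocorrelation, VR < 1 mean
# reversion. Not the robust test statistic of the original paper.
import math

def variance_ratio(prices, q):
    r = [math.log(prices[i + 1] / prices[i]) for i in range(len(prices) - 1)]
    n = len(r)
    mu = sum(r) / n
    var1 = sum((x - mu) ** 2 for x in r) / n
    rq = [sum(r[i:i + q]) for i in range(n - q + 1)]   # overlapping q-period returns
    varq = sum((x - q * mu) ** 2 for x in rq) / len(rq)
    return varq / (q * var1)
```

    In practice the ratio is standardised and compared with its asymptotic distribution to decide whether the deviation from 1 is significant.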

  3. The stability of spectroscopic instruments : a unified Allan variance computation scheme

    NARCIS (Netherlands)

    Ossenkopf, V.

    Context. The Allan variance is a standard technique to characterise the stability of spectroscopic instruments used in astronomical observations. The period for switching between source and reference measurement is often derived from the Allan minimum time. However, various methods are applied to

  4. A bootstrap test for comparing two variances: simulation of size and power in small samples.

    Science.gov (United States)

    Sun, Jiajing; Chernick, Michael R; LaBudde, Robert A

    2011-11-01

    An F statistic was proposed by Good and Chernick (1993), in an unpublished paper, to test the hypothesis of the equality of variances from two independent groups using the bootstrap; see Hall and Padmanabhan (1997) for a published reference in which Good and Chernick (1993) is discussed. We look at various forms of bootstrap tests that use the F statistic to see whether any or all of them maintain the nominal size of the test over a variety of population distributions when the sample size is small. Chernick and LaBudde (2010) and Schenker (1985) showed that bootstrap confidence intervals for variances tend to provide considerably less coverage than their theoretical asymptotic coverage for skewed population distributions such as a chi-squared with 10 degrees of freedom or less or a log-normal distribution. The same difficulties may also be expected when looking at the ratio of two variances. Since bootstrap tests are related to constructing confidence intervals for the ratio of variances, we simulated the performance of these tests when the population distributions are gamma(2,3), uniform(0,1), Student's t distribution with 10 degrees of freedom (df), normal(0,1), and log-normal(0,1), similar to those used in Chernick and LaBudde (2010). We find, surprisingly, that the results for the size of the tests are valid (reasonably close to the asymptotic value) for all the various bootstrap tests. Hence we also conducted a power comparison, and we find that bootstrap tests appear to have reasonable power for testing equality of variances.
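
    One common way to bootstrap such a test can be sketched as follows. The exact statistic of Good and Chernick (1993) is not reproduced here; the centre-and-pool resampling scheme shown is one standard choice and an assumption on our part.

```python
# Hedged sketch of a bootstrap test for H0: equal variances of two groups.
# Each sample is centred so the pooled values share a common null
# population, then the two-sided variance ratio is bootstrapped.
import random

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def bootstrap_variance_test(x, y, n_boot=2000, seed=0):
    """Two-sided bootstrap p-value for equality of variances of x and y."""
    random.seed(seed)
    f_obs = max(var(x) / var(y), var(y) / var(x))
    # centre each sample so pooling does not mix the group means into H0
    pooled = [v - sum(x) / len(x) for v in x] + [v - sum(y) / len(y) for v in y]
    hits = 0
    for _ in range(n_boot):
        bx = [random.choice(pooled) for _ in x]
        by = [random.choice(pooled) for _ in y]
        f = max(var(bx) / var(by), var(by) / var(bx))
        if f >= f_obs:   # bootstrap ratio at least as extreme as observed
            hits += 1
    return hits / n_boot
```

    As the abstract notes for skewed populations, the behaviour of such resampling schemes in small samples is exactly what has to be checked by simulation.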

  5. Cusping, transport and variance of solutions to generalized Fokker-Planck equations

    Science.gov (United States)

    Carnaffan, Sean; Kawai, Reiichiro

    2017-06-01

    We study properties of solutions to generalized Fokker-Planck equations through the lens of the probability density functions of anomalous diffusion processes. In particular, we examine solutions in terms of their cusping, travelling wave behaviours, and variance, within the framework of stochastic representations of generalized Fokker-Planck equations. We give our analysis in the cases of anomalous diffusion driven by the inverses of the stable, tempered stable and gamma subordinators, demonstrating the impact of changing the distribution of waiting times in the underlying anomalous diffusion model. We also analyse the cases where the underlying anomalous diffusion contains a Lévy jump component in the parent process, and when a diffusion process is time changed by an uninverted Lévy subordinator. On the whole, we present a combination of four criteria which serve as a theoretical basis for model selection, statistical inference and predictions for physical experiments on anomalously diffusing systems. We discuss possible applications in physical experiments, including, with reference to specific examples, the potential for model misclassification and how combinations of our four criteria may be used to overcome this issue.
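The variance criterion mentioned above can be probed numerically. The sketch below is illustrative only: a continuous-time random walk (CTRW) with Pareto waiting times stands in for the inverse-subordinator construction, and all parameters are assumptions. It contrasts sub-linear mean-squared-displacement (MSD) growth under heavy-tailed waits with linear growth under exponential waits.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrw_msd(wait_sampler, t_grid, n_paths=2000, n_steps=4000):
    """Mean-squared displacement of a continuous-time random walk.

    Each path takes unit-variance Gaussian jumps separated by random
    waiting times; positions are read off on t_grid by counting the
    jumps completed by each grid time."""
    msd = np.zeros_like(t_grid, dtype=float)
    for _ in range(n_paths):
        arrival = np.cumsum(wait_sampler(n_steps))
        pos = np.cumsum(rng.normal(size=n_steps))
        k = np.searchsorted(arrival, t_grid, side="right")  # jumps done by t
        x = np.where(k > 0, pos[np.clip(k - 1, 0, n_steps - 1)], 0.0)
        msd += x ** 2
    return msd / n_paths

t = np.array([5.0, 50.0])
# Heavy-tailed (Pareto-type, tail index 0.7) waits -> subdiffusion:
# MSD grows slower than linearly in t.
heavy = ctrw_msd(lambda n: rng.pareto(0.7, n) + 1.0, t)
# Exponential waits -> ordinary diffusion: MSD ~ t.
light = ctrw_msd(lambda n: rng.exponential(1.0, n), t)
```

Comparing `heavy[1]/heavy[0]` with `light[1]/light[0]` over a tenfold time span exhibits the slower variance growth that the paper uses as one of its model-selection criteria.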

  6. Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, Kiyoshi; Yamamoto, Toshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] (eds.)

    1998-03-01

    The 'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile a 'Guideline of Monte Carlo Calculation' which will be a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)

  7. On the origins of signal variance in FMRI of the human midbrain at high field.

    Directory of Open Access Journals (Sweden)

    Robert L Barry

    Functional Magnetic Resonance Imaging (fMRI) in the midbrain at 7 Tesla suffers from unexpectedly low temporal signal-to-noise ratio (TSNR) compared to other brain regions. Various methodologies were used in this study to quantitatively identify causes of the noise and signal differences in midbrain fMRI data. The influence of physiological noise sources was examined using RETROICOR, phase regression analysis, and power spectral analyses of contributions in the respiratory and cardiac frequency ranges. The impact of between-shot phase shifts in 3-D multi-shot sequences was tested using a one-dimensional (1-D) phase navigator approach. Additionally, the effects of shared noise influences between regions that were temporally, but not functionally, correlated with the midbrain (adjacent white matter and anterior cerebellum) were investigated via analyses with regressors of 'no interest'. These attempts to reduce noise did not improve the overall TSNR in the midbrain. In addition, the steady state signal and noise were measured in the midbrain and the visual cortex for resting state data. We observed comparable steady state signals from both the midbrain and the cortex. However, the noise was 2-3 times higher in the midbrain relative to the cortex, confirming that the low TSNR in the midbrain was not due to low signal but rather a result of large signal variance. These temporal variations did not behave as known physiological or other noise sources, and were not mitigated by conventional strategies. Upon further investigation, resting state functional connectivity analysis in the midbrain showed strong intrinsic fluctuations between homologous midbrain regions. These data suggest that the low TSNR in the midbrain may originate from larger signal fluctuations arising from functional connectivity compared to cortex, rather than simply reflecting physiological noise.
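The TSNR comparison at the heart of this study is simple to reproduce on synthetic data. The numbers below (voxel counts, a common steady-state signal of 100, and a threefold noise difference) are illustrative assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

def tsnr(timeseries, axis=-1):
    """Temporal signal-to-noise ratio: mean over time / std over time."""
    return timeseries.mean(axis=axis) / timeseries.std(axis=axis, ddof=1)

# Toy data: identical steady-state signal, but "midbrain" voxels carry
# 3x larger temporal fluctuations than "cortex" voxels.
n_vox, n_t = 500, 200
cortex = 100 + rng.normal(0, 2, (n_vox, n_t))
midbrain = 100 + rng.normal(0, 6, (n_vox, n_t))

tsnr_cortex = tsnr(cortex).mean()
tsnr_midbrain = tsnr(midbrain).mean()
ratio = tsnr_cortex / tsnr_midbrain   # ~3: equal signal, 3x variance penalty
```

This mirrors the paper's observation: a comparable steady-state signal with 2-3x larger temporal variance yields proportionally lower TSNR.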

  8. Fluctuations of charge variance and interaction time for dissipative processes in 27Al + 27Al collision

    International Nuclear Information System (INIS)

    Berceanu, I.; Andronic, A.; Duma, M.

    1999-01-01

    The systematic studies of dissipative processes in light systems were completed with experiments dedicated to the measurement of the excitation functions in the 19F + 27Al and 27Al + 27Al systems, in order to obtain deeper insight into the DNS configuration and its time evolution. The excitation function for the 19F + 27Al system evidenced fluctuations larger than the statistical errors. Large Z and angular cross-correlation coefficients supported their non-statistical nature. The energy dependence of second-order observables, namely the second moment of the charge distribution and the product ω·τ (ω being the angular velocity of the DNS and τ its mean lifetime) extracted from the angular distributions, was studied for the 19F + 27Al case. In this contribution we report the preliminary results of similar studies performed for the 27Al + 27Al case. The variance of the charge distribution was obtained by fitting the experimental charge distribution with a Gaussian centered on Z = 13, and the product ω·τ was extracted from the angular distributions. The results for the 19F + 27Al case are confirmed by a preliminary analysis of the data for the 27Al + 27Al system. The charge variance and ω·τ excitation functions for the Z = 11 fragment are represented together with the excitation function of the cross section. One has to mention that the data for the 27Al + 27Al system were not corrected for particle evaporation processes. The effect of the evaporation corrections on the excitation function was studied using a Monte Carlo simulation. The α particle evaporation was also included and the evaluation of the particle separation energies was made using experimental masses of the fragments. The excitation functions for the 27Al + 27Al system for primary and secondary fragments were simulated. No structure due to particle evaporation was observed. The correlated fluctuations in σ_Z and ω·τ excitation functions support a stochastic exchange of nucleons as the main mechanism for

  9. Cup anemometer response to the wind turbulence-measurement of the horizontal wind variance

    Directory of Open Access Journals (Sweden)

    S. Yahaya

    2004-11-01

    This paper presents some dynamic characteristics of an opto-electronic cup anemometer model in relation to its response to wind turbulence. It is based on experimental data of the natural wind turbulence measured both by an ultrasonic anemometer and two samples of the mentioned cup anemometer. The distance constants of the latter devices measured in a wind tunnel are in good agreement with those determined by the spectral analysis method proposed in this study. In addition, the study shows that the linear compensation of the cup anemometer response, beyond the cutoff frequency, is limited to a given frequency, characteristic of the device. Beyond this frequency, the compensation effectiveness relies mainly on the wind characteristics, particularly the direction variability and the horizontal turbulence intensity. Finally, this study demonstrates the potential of fast cup anemometers to measure some turbulence parameters (like wind variance) with errors of the same magnitude as those deriving from the mean speed measurements. This result proves that fast cup anemometers can be used to assess some turbulence parameters, especially for long-term measurements in severe climate conditions (icing, snowing or sandy storm weathers).

  10. Explaining variance in self-directed learning readiness of first year students in health professional programs

    Directory of Open Access Journals (Sweden)

    Craig E. Slater

    2017-11-01

    Background: Self-directed learning (SDL) is expected of health science graduates; it is thus a learning outcome in many pre-certification programs. Previous research identified age, gender, discipline and prior education as associated with variations in students' self-directed learning readiness (SDLR). Studies in other fields also propose personality as influential. Method: This study investigated relationships between SDLR and age, gender, discipline, previous education, and personality traits. The Self-Directed Learning Readiness Scale and the 50-item 'big five' personality trait inventory were administered to 584 first-year undergraduate students (n = 312 female) enrolled in a first-session undergraduate interprofessional health sciences subject. Results: Students were from health promotion, health services management, therapeutic recreation, sports and exercise science, occupational therapy, physiotherapy, and podiatry. Four hundred and seven responses (n = 230 females) were complete. SDLR was significantly higher in females and students in occupational therapy and physiotherapy. SDLR increased with age and higher levels of previous education. It was also significantly associated with 'big five' personality trait scores. Regression analysis revealed 52.9% of variance was accounted for by personality factors, discipline and prior experience of tertiary education. Conclusion: Demographic, discipline and personality factors are associated with SDLR in the first year of study. Teachers need to be alert to individual student variation in SDLR.

  11. Explaining variance in self-directed learning readiness of first year students in health professional programs.

    Science.gov (United States)

    Slater, Craig E; Cusick, Anne; Louie, Jimmy C Y

    2017-11-13

    Self-directed learning (SDL) is expected of health science graduates; it is thus a learning outcome in many pre-certification programs. Previous research identified age, gender, discipline and prior education as associated with variations in students' self-directed learning readiness (SDLR). Studies in other fields also propose personality as influential. This study investigated relationships between SDLR and age, gender, discipline, previous education, and personality traits. The Self-Directed Learning Readiness Scale and the 50-item 'big five' personality trait inventory were administered to 584 first-year undergraduate students (n = 312 female) enrolled in a first-session undergraduate interprofessional health sciences subject. Students were from health promotion, health services management, therapeutic recreation, sports and exercise science, occupational therapy, physiotherapy, and podiatry. Four hundred and seven responses (n = 230 females) were complete. SDLR was significantly higher in females and students in occupational therapy and physiotherapy. SDLR increased with age and higher levels of previous education. It was also significantly associated with 'big five' personality trait scores. Regression analysis revealed 52.9% of variance was accounted for by personality factors, discipline and prior experience of tertiary education. Demographic, discipline and personality factors are associated with SDLR in the first year of study. Teachers need to be alert to individual student variation in SDLR.

  12. Estimation of stable boundary-layer height using variance processing of backscatter lidar data

    Science.gov (United States)

    Saeed, Umar; Rocadenbosch, Francesc

    2017-04-01

    Stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze and fog and the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal Stable Boundary-Layer Height (SBLH) estimation by using variance processing and attenuated backscatter lidar measurements, its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum-variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of ITaRS project (GA 289923), H2020 programme under ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under TEC2015-63832-P project, and from the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
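The idea of reading the boundary-layer height off a variance profile can be sketched on synthetic data. Everything below is an illustrative stand-in: the scene (an aerosol layer at 300 m whose intensity fluctuates in time) and the simple variance-drop rule are assumptions, far simpler than the paper's local-minima estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic attenuated-backscatter scene: a stratified aerosol layer
# below 300 m with time-varying intensity, and a quiet layer above.
heights = np.arange(30, 1500, 30.0)            # range gates [m]
n_profiles = 120                               # time dimension
sblh_true = 300.0

layer = np.exp(-heights / 400.0)               # mean backscatter shape
signal = np.tile(layer, (n_profiles, 1))
fluct = rng.normal(1.0, 0.3, (n_profiles, 1))  # aerosol variability in time
signal[:, heights < sblh_true] *= fluct        # ...only below the SBLH
signal += rng.normal(0, 0.005, signal.shape)   # instrument noise

# Temporal variance profile; take the SBLH estimate where the variance
# drops most sharply with height (the aerosol/clean-air interface).
var_profile = signal.var(axis=0, ddof=1)
drop = np.diff(var_profile)
sblh_est = heights[np.argmin(drop) + 1]        # ~300 m
```

The temporal variance is large wherever aerosol load fluctuates and collapses to the noise floor above the stable layer, so the sharpest drop (or, in the paper's formulation, a local minimum) marks the SBLH.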

  13. Student, teacher, and classroom predictors of between-teacher variance of students' teacher-rated behavior.

    Science.gov (United States)

    Splett, Joni W; Smith-Millman, Marissa; Raborn, Anthony; Brann, Kristy L; Flaspohler, Paul D; Maras, Melissa A

    2018-03-08

    The current study examined between-teacher variance in teacher ratings of student behavioral and emotional risk to identify student, teacher and classroom characteristics that predict such differences and can be considered in future research and practice. Data were taken from seven elementary schools in one school district implementing universal screening, including 1,241 students rated by 68 teachers. Students were mostly African American (68.5%) with equal gender (female 50.1%) and grade-level distributions. Teachers, mostly White (76.5%) and female (89.7%), completed both a background survey regarding their professional experiences and demographic characteristics and the Behavior Assessment System for Children (Second Edition) Behavioral and Emotional Screening System-Teacher Form for all students in their class, rating an average of 17.69 students each. Extant student data were provided by the district. Analyses followed multilevel linear model stepwise model-building procedures. We detected a significant amount of variance in teachers' ratings of students' behavioral and emotional risk at both student and teacher/classroom levels, with student predictors explaining about 39% of student-level variance and teacher/classroom predictors explaining about 20% of between-teacher differences. The final model fit the data (Akaike information criterion = 8,687.709; pseudo-R2 = 0.544) significantly better than the null model (Akaike information criterion = 9,457.160). Significant predictors included student gender, race/ethnicity, academic performance and disciplinary incidents, teacher gender, student-teacher gender interaction, teacher professional development in behavior screening, and classroom academic performance. Future research and practice should interpret teacher-rated universal screening of students' behavioral and emotional risk with consideration of the between-teacher variance unrelated to student behavior detected.
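The core quantity here, the share of rating variance sitting between teachers rather than between students, is an intraclass correlation. A minimal sketch on synthetic ratings (the group sizes, effect sizes, and one-way random-effects decomposition below are illustrative assumptions, not the study's multilevel model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy screening data: 40 teachers each rate 18 students; ratings carry a
# teacher-level effect (sd=2) on top of student-level variation (sd=4).
n_teachers, n_students = 40, 18
teacher_effect = rng.normal(0, 2.0, (n_teachers, 1))
ratings = 50 + teacher_effect + rng.normal(0, 4.0, (n_teachers, n_students))

# One-way random-effects ANOVA decomposition.
grand = ratings.mean()
group_means = ratings.mean(axis=1, keepdims=True)
msb = n_students * ((group_means - grand) ** 2).sum() / (n_teachers - 1)
msw = ((ratings - group_means) ** 2).sum() / (n_teachers * (n_students - 1))

var_between = max((msb - msw) / n_students, 0.0)  # teacher-level variance
icc = var_between / (var_between + msw)           # between-teacher share (~0.2)
```

With the chosen variances (4 vs 16), the true between-teacher share is 0.2; a multilevel model, as used in the study, additionally lets covariates explain parts of each variance component.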

  14. Optimisation of 12 MeV electron beam simulation using variance reduction technique

    Science.gov (United States)

    Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul

    2017-05-01

    Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) in MC was implemented to speed up this duration. This work focused on optimisation of the VRT parameters, which refer to electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with the non-VRT parameter. The validated MC model simulation was repeated by applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2, and 5 MeV using a 20 × 10^7 particle history. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle history analysis, which ranged from 7.5 × 10^7 to 20 × 10^7. In this study, with a 5 MeV electron cut-off and 10 × 10^7 particle history, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation duration while preserving its accuracy.
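A standard way to judge whether a variance reduction technique actually pays off is the Monte Carlo figure of merit, FOM = 1/(R^2 T), where R is the tally's relative error and T the computation time. The sketch below applies it to the speedup reported above; treating the two runs as having equal 1% error is an illustrative assumption.

```python
def figure_of_merit(rel_error, cpu_time):
    """Monte Carlo figure of merit: FOM = 1 / (R^2 * T).

    A VRT pays off only if it raises the FOM, i.e. the variance
    reduction more than compensates any per-history overhead."""
    return 1.0 / (rel_error ** 2 * cpu_time)

# Illustrative numbers: the VRT run is 4x faster at an assumed equal
# 1% relative error (arbitrary time units).
fom_no_vrt = figure_of_merit(0.01, 4.0)   # baseline
fom_vrt = figure_of_merit(0.01, 1.0)      # 5 MeV range rejection
speedup = fom_vrt / fom_no_vrt            # 4.0
```

Because FOM is roughly constant as histories accumulate, this single ratio summarises the efficiency gain independently of how long either run continued.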

  15. Optimisation of 12 MeV electron beam simulation using variance reduction technique

    International Nuclear Information System (INIS)

    Jayamani, J; Aziz, M Z Abdul; Termizi, N A S Mohd; Kamarulzaman, F N Mohd

    2017-01-01

    Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) in MC was implemented to speed up this duration. This work focused on optimisation of the VRT parameters, which refer to electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with the non-VRT parameter. The validated MC model simulation was repeated by applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2, and 5 MeV using a 20 × 10^7 particle history. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle history analysis, which ranged from 7.5 × 10^7 to 20 × 10^7. In this study, with a 5 MeV electron cut-off and 10 × 10^7 particle history, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation duration while preserving its accuracy. (paper)

  16. Stud identity among female-born youth of color: joint conceptualizations of gender variance and same-sex sexuality.

    Science.gov (United States)

    Kuper, Laura E; Wright, Laurel; Mustanski, Brian

    2014-01-01

    Little is known about the experiences of individuals who may fall under the umbrella of "transgender" but do not transition medically and/or socially. The impact of the increasingly widespread use of the term "transgender" itself also remains unclear. The authors present narratives from four female-born youth of color who report a history of identifying as a "stud." Through analysis of their processes of identity signification, the authors demonstrate how stud identity fuses aspects of gender and sexuality while providing an alternate way of making meaning of gender variance. As such, this identity has important implications for research and organizing centered on an LGBT-based identity framework.

  17. How to assess intra- and inter-observer agreement with quantitative PET using variance component analysis

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen

    2016-01-01

    BACKGROUND: Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects deviation of groups of measurement from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or the Bland...... relation to Bland-Altman plots. Here, we present this approach for assessment of intra- and inter-observer variation with PET/CT exemplified with data from two clinical studies. METHODS: In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed...... (THG) in study 2. RESULTS: In study 1, we found a RC of 2.46 equalling half the width of the Bland-Altman limits of agreement. In study 2, the RC for identical conditions (same scanner, patient, time point, and observer) was 2392; allowing for different scanners increased the RC to 2543. Inter
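The repeatability coefficient (RC) reported in this record can be computed directly from paired repeated measurements. A minimal sketch on synthetic data; the formula RC = 1.96·sqrt(2)·s_w follows the Bland-Altman convention, and the toy observer readings are assumptions rather than the study's variance-component model:

```python
import numpy as np

rng = np.random.default_rng(5)

def repeatability_coefficient(m1, m2):
    """Repeatability coefficient from paired repeated measurements.

    RC = 1.96 * sqrt(2) * s_w, where s_w^2 is the within-subject
    variance estimated from the paired differences; ~95% of repeat
    differences are expected to fall within +/- RC."""
    d = np.asarray(m1) - np.asarray(m2)
    s_w2 = np.mean(d ** 2) / 2.0      # within-subject variance (no-bias case)
    return 1.96 * np.sqrt(2.0 * s_w2)

# Toy readings by two observers on 30 scans, observer noise sd = 0.5.
truth = rng.uniform(5, 15, 30)
obs1 = truth + rng.normal(0, 0.5, 30)
obs2 = truth + rng.normal(0, 0.5, 30)
rc = repeatability_coefficient(obs1, obs2)   # ~1.96*sqrt(2)*0.5 ~ 1.4
```

A full variance-component analysis, as in the record, generalises this by attributing within-subject variance to scanner, time point, and observer simultaneously.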

  18. On estimation of the noise variance in high-dimensional linear models

    OpenAIRE

    Golubev, Yuri; Krymova, Ekaterina

    2017-01-01

    We consider the problem of recovering the unknown noise variance in the linear regression model. To estimate the nuisance (a vector of regression coefficients) we use a family of spectral regularisers of the maximum likelihood estimator. The noise estimation is based on the adaptive normalisation of the squared error. We derive the upper bound for the concentration of the proposed method around the ideal estimator (the case of zero nuisance).
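In the classical n > p regime, the natural noise-variance estimator normalises the squared residual by the residual degrees of freedom; the adaptive spectral normalisation discussed above refines this idea when p approaches n. A minimal baseline sketch (synthetic data; all dimensions and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

def noise_variance_ols(X, y):
    """Classical noise-variance estimate: residual sum of squares
    divided by the residual degrees of freedom n - p."""
    n, p = X.shape
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta_hat) ** 2)
    return rss / (n - p)

n, p, sigma = 300, 100, 1.5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)               # the nuisance vector
y = X @ beta + rng.normal(0, sigma, n)
sigma2_hat = noise_variance_ols(X, y)   # close to sigma^2 = 2.25
```

When a spectral regulariser (e.g. ridge) replaces least squares, the denominator n - p must be replaced by an effective degrees-of-freedom correction, which is essentially the adaptive normalisation the paper studies.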

  19. Matrix attachment regions (MARs) enhance transformation frequencies and reduce variance of transgene expression in barley

    DEFF Research Database (Denmark)

    Petersen, K.; Leah, R.; Knudsen, S.

    2002-01-01

    Nuclear matrix attachment regions (MARs) are defined as genomic DNA sequences, located at the physical boundaries of chromatin loops. They are suggested to play a role in the cis unfolding and folding of the chromatin fibre associated with the regulation of gene transcription. Inclusion of MARs i....... The presence of P1-MAR sequences increased the mean activity and reduced the variance in expression of a co-integrated reporter gene in barley consistent with the proposed model of MAR activity....

  20. A method for blind automatic evaluation of noise variance in images based on bootstrap and myriad operations

    Science.gov (United States)

    Lukin, Vladimir V.; Abramov, Sergey K.; Vozel, Benoit; Chehdi, Kacem

    2005-10-01

    Multichannel (multispectral) remote sensing (MRS) is widely used for various applications nowadays. However, original images are commonly corrupted by noise and other distortions. This prevents reliable retrieval of useful information from remote sensing data. Because of this, image pre-filtering and/or reconstruction are typical stages of multichannel image processing. The majority of modern efficient methods for image pre-processing require the availability of a priori information concerning the noise type and its statistical characteristics. Thus, there is a great need for automatic blind methods for determination of the noise type and its characteristics. However, almost all such methods fail to perform appropriately well if an image under consideration contains a large percentage of texture regions, details and edges. In this paper we demonstrate that by applying the bootstrap it is possible to obtain rather accurate estimates of the noise variance that can be used either as final or preliminary ones. Different quantiles (order statistics) are used as initial estimates of the mode location for the distribution of local noise variance estimates, and then the bootstrap is applied for their joint analysis. To further improve the accuracy of noise variance estimation, it is proposed, under a certain condition, to apply a myriad operation with tunable parameter k set in accordance with the preliminary estimate obtained by the bootstrap. Numerical simulation results confirm the applicability of the proposed approach and produce data allowing evaluation of the method's accuracy.
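The quantile-plus-bootstrap idea above can be sketched compactly: compute local variance estimates on small blocks, take a low quantile as a robust proxy for the mode (texture inflates only the upper tail), and bootstrap that quantile. The block size, quantile level, and synthetic image are illustrative assumptions; the paper's myriad-operation refinement is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def blind_noise_variance(img, block=8, q=0.2, n_boot=200):
    """Blind noise-variance estimate from local block variances."""
    h, w = img.shape
    blocks = (img[:h - h % block, :w - w % block]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2).reshape(-1, block * block))
    local_var = blocks.var(axis=1, ddof=1)
    # Bootstrap the low quantile of the local-variance distribution.
    boots = [np.quantile(rng.choice(local_var, len(local_var)), q)
             for _ in range(n_boot)]
    return float(np.mean(boots))

# Synthetic image: flat left half, "textured" right half, plus white
# noise with true variance 4.
img = np.zeros((128, 128))
img[:, 64:] += rng.normal(0, 10, (128, 64)).cumsum(axis=0) / 5
img += rng.normal(0, 2.0, img.shape)
var_hat = blind_noise_variance(img)   # near 4 despite the texture
```

A plain mean of the local variances would be badly inflated by the textured half; the low quantile is what makes the estimate usable on detailed scenes.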

  1. Regional heterogeneity and gene flow maintain variance in a quantitative trait within populations of lodgepole pine

    Science.gov (United States)

    Yeaman, Sam; Jarvis, Andy

    2006-01-01

    Genetic variation is of fundamental importance to biological evolution, yet we still know very little about how it is maintained in nature. Because many species inhabit heterogeneous environments and have pronounced local adaptations, gene flow between differently adapted populations may be a persistent source of genetic variation within populations. If this migration–selection balance is biologically important then there should be strong correlations between genetic variance within populations and the amount of heterogeneity in the environment surrounding them. Here, we use data from a long-term study of 142 populations of lodgepole pine (Pinus contorta) to compare levels of genetic variation in growth response with measures of climatic heterogeneity in the surrounding region. We find that regional heterogeneity explains at least 20% of the variation in genetic variance, suggesting that gene flow and heterogeneous selection may play an important role in maintaining the high levels of genetic variation found within natural populations. PMID:16769628

  2. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
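For reference, the traditional LS-SVM that the robust variant modifies reduces to one linear system in the dual. The sketch below implements that standard baseline (not the paper's robust objective); the RBF kernel, hyperparameters, and test function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Standard LS-SVM regression: solve the dual linear system
        [ 0   1^T         ] [b]   [0]
        [ 1   K + I/gamma ] [a] = [y]
    with an RBF kernel K; returns a prediction function."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, a = sol[0], sol[1:]
    def predict(Xq):
        d2q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2q / (2 * sigma ** 2)) @ a + b
    return predict

X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 60)
predict = lssvm_fit(X, y)
err = np.abs(predict(X) - np.sin(X[:, 0])).mean()   # small in-sample error
```

The robust variant keeps this structure but replaces the plain squared-error term with one penalising both the mean and the variance of the modeling error, effectively reweighting large-error samples downward.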

  3. Use of variance techniques to measure dry air-surface exchange rates

    Science.gov (United States)

    Wesely, M. L.

    1988-07-01

    The variances of fluctuations of scalar quantities can be measured and interpreted to yield indirect estimates of their vertical fluxes in the atmospheric surface layer. Strong correlations among scalar fluctuations indicate a similarity of transfer mechanisms, which is utilized in some of the variance techniques. The ratios of the standard deviations of two scalar quantities, for example, can be used to estimate the flux of one if the flux of the other is measured, without knowledge of atmospheric stability. This is akin to a modified Bowen ratio approach. Other methods such as the normalized standard-deviation technique and the correlation-coefficient technique can be utilized effectively if atmospheric stability is evaluated and certain semi-empirical functions are known. In these cases, iterative calculations involving measured variances of fluctuations of temperature and vertical wind velocity can be used in place of direct flux measurements. For a chemical sensor whose output is contaminated by non-atmospheric noise, covariances with fluctuations of scalar quantities measured with a very good signal-to-noise ratio can be used to extract the needed standard deviation. Field measurements have shown that many of these approaches are successful for gases such as ozone and sulfur dioxide, as well as for temperature and water vapor, and could be extended to other trace substances. In humid areas, it appears that water vapor fluctuations often have a higher degree of correlation to fluctuations of other trace gases than do temperature fluctuations; this makes water vapor a more reliable companion or “reference” scalar. These techniques provide some reliable research approaches but, for routine or operational measurement, they are limited by the need for fast-response sensors. Also, all variance approaches require some independent means to estimate the direction of the flux.
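The ratio-of-standard-deviations technique described above is easy to demonstrate on synthetic surface-layer fluctuations. The series below (vertical wind w', water vapour q' as the well-measured "reference" scalar, trace gas c') and their correlation structure are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic fluctuations sharing one transport mechanism (w'), so the
# scalars are strongly correlated, as the variance technique requires.
n = 5000
w = rng.normal(0, 0.3, n)                    # vertical wind w'
q = 2.0 * w + rng.normal(0, 0.2, n)          # water vapour q' (reference)
c = -1.5 * w + rng.normal(0, 0.15, n)        # trace gas c', downward flux

flux_q = np.mean(w * q)                      # reference flux, assumed measured
sign_c = np.sign(np.corrcoef(c, q)[0, 1])    # flux direction from correlation
# Modified Bowen-ratio / variance estimate of the unknown trace-gas flux:
flux_c_est = sign_c * flux_q * np.std(c) / np.std(q)
flux_c_true = np.mean(w * c)                 # what eddy covariance would give
```

No stability information enters: only the measured reference flux, the two standard deviations, and the sign of the correlation, which is exactly the appeal of the method noted in the abstract.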

  4. Increasing the genetic variance of rice protein through mutation breeding techniques

    International Nuclear Information System (INIS)

    Ismachin, M.

    1975-01-01

    Recommended rice variety in Indonesia, Pelita I/1 was treated with gamma rays at the doses of 20 krad, 30 krad, and 40 krad. The seeds were also treated with EMS 1%. In M 2 generation, the protein content of seeds from the visible mutants and from the normal looking plants were analyzed by DBC method. No significant increase in the genetic variance was found on the samples treated with 20 krad gamma, and on the normal looking plants treated by EMS 1%. The mean value of the treated samples were mostly significant decrease compared with the mean value of the protein distribution in untreated samples (control). Since significant increase in genetic variance was also found in M 2 normal looking plants - treated with gamma at the doses of 30 krad and 40 krad -selection of protein among these materials could be more valuable. (author)

  5. Automated Clutch of AMT Vehicle Based on Adaptive Generalized Minimum Variance Controller

    Directory of Open Access Journals (Sweden)

    Ze Li

    2014-11-01

    Full Text Available Due to the influence of non-linear dynamic characteristic of clutch, external disturbance and parameter variation, the automated clutch is hard to control precisely during the engaging process of the automated clutch of automatic mechanical transmission vehicle. In this paper, adaptive generalized minimum variance controller is applied to the automated clutch which is driven by a brushless DC motor. The simulation results showed that the proposed controller is effective and robust to the parametric variation and external disturbance.

  6. Sample correlations of infinite variance time series models: an empirical and theoretical study

    Directory of Open Access Journals (Sweden)

    Jason Cohen

    1998-01-01

    When the elements of a stationary ergodic time series have finite variance, the sample correlation function converges (with probability 1) to the theoretical correlation function. What happens in the case where the variance is infinite? In certain cases, the sample correlation function converges in probability to a constant, but not always. If within a class of heavy-tailed time series the sample correlation functions do not converge to a constant, then more care must be taken in making inferences and in model selection on the basis of sample autocorrelations. We experimented with simulating various heavy-tailed stationary sequences in an attempt to understand what causes the sample correlation function to converge or not to converge to a constant. In two new cases, namely the sum of two independent moving averages and a random permutation scheme, we are able to provide theoretical explanations for a random limit of the sample autocorrelation function as the sample grows.
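One of the convergent cases is easy to simulate: for a linear MA(1) process with infinite-variance innovations, the sample ACF at lag 1 still settles near theta/(1 + theta^2), the value it would take with finite variance. The tail index, theta, and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

def sample_acf(x, lag):
    """Sample autocorrelation at a given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# MA(1) driven by symmetric heavy-tailed innovations with tail index 1.5
# (finite mean, infinite variance).
theta = 0.5
z = rng.pareto(1.5, 200_001) * rng.choice([-1.0, 1.0], 200_001)
x = z[1:] + theta * z[:-1]

rho1 = sample_acf(x, 1)   # near theta/(1+theta^2) = 0.4
rho5 = sample_acf(x, 5)   # population ACF is 0 beyond lag 1 for MA(1)
```

Intuitively, a single dominant innovation z_k contributes to both x_k and x_{k+1} in the ratio 1 : theta, so even tail-dominated samples push the lag-1 statistic toward the same limit; the non-convergent examples in the paper (sums of independent moving averages, random permutations) break exactly this linear structure.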

  7. Parameter estimation in 3D affine and similarity transformation: implementation of variance component estimation

    Science.gov (United States)

    Amiri-Simkooei, A. R.

    2018-01-01

    Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to cases in which the transformation parameters are large, and no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated within standard least-squares theory with constraints, the covariance matrix of the transformation parameters can be provided directly. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.

  8. The contribution of the mitochondrial genome to sex-specific fitness variance.

    Science.gov (United States)

    Smith, Shane R T; Connallon, Tim

    2017-05-01

    Maternal inheritance of mitochondrial DNA (mtDNA) facilitates the evolutionary accumulation of mutations with sex-biased fitness effects. Whereas maternal inheritance closely aligns mtDNA evolution with natural selection in females, it makes it indifferent to evolutionary changes that exclusively benefit males. The constrained response of mtDNA to selection in males can lead to asymmetries in the relative contributions of mitochondrial genes to female versus male fitness variation. Here, we examine the impact of genetic drift and the distribution of fitness effects (DFE) among mutations-including the correlation of mutant fitness effects between the sexes-on mitochondrial genetic variation for fitness. We show how drift, genetic correlations, and skewness of the DFE determine the relative contributions of mitochondrial genes to male versus female fitness variance. When mutant fitness effects are weakly correlated between the sexes, and the effective population size is large, mitochondrial genes should contribute much more to male than to female fitness variance. In contrast, high fitness correlations and small population sizes tend to equalize the contributions of mitochondrial genes to female versus male variance. We discuss implications of these results for the evolution of mitochondrial genome diversity and the genetic architecture of female and male fitness. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  9. Estimating the variance and integral scale of the transmissivity field using head residual increments

    Science.gov (United States)

    Zheng, Lingyun; Silliman, S.E.

    2000-01-01

    A modification of previously published solutions regarding the spatial variation of hydraulic heads is discussed, whereby the semivariogram of increments of head residuals (termed head residual increments, HRIs) is related to the variance and integral scale of the transmissivity field. A first-order solution is developed for the case of a transmissivity field which is isotropic and whose second-order behavior can be characterized by an exponential covariance structure. The estimates of the variance σ²Y and the integral scale λ of the log-transmissivity field are then obtained by fitting a theoretical semivariogram for the HRI to its sample semivariogram. This approach is applied to head data sampled from a series of two-dimensional, simulated aquifers with isotropic, exponential covariance structures and varying degrees of heterogeneity (σ²Y = 0.25, 0.5, 1.0, 2.0, and 5.0). The results show that this method provided reliable estimates for both λ and σ²Y in aquifers with values of σ²Y up to 2.0, but the errors in those estimates were higher for σ²Y equal to 5.0. It is also demonstrated, through numerical experiments and theoretical arguments, that the head residual increments will provide a sample semivariogram with a lower variance than will the use of head residuals without calculation of increments.
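The fitting step (matching a theoretical exponential semivariogram to a sample semivariogram) can be sketched with non-linear least squares. The lag values, the noise level, and the "true" sill and length scale below are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_semivariogram(h, sill, length):
    """Exponential model: gamma(h) = sill * (1 - exp(-h / length))."""
    return sill * (1.0 - np.exp(-h / length))

# Hypothetical sample semivariogram of head-residual increments:
# lag distances (in model cells) and estimated gamma values with noise.
lags = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
true_sill, true_length = 1.0, 5.0
rng = np.random.default_rng(1)
gamma_hat = exp_semivariogram(lags, true_sill, true_length) \
    + rng.normal(0.0, 0.02, lags.size)

# Fit the theoretical model to the sample values; the fitted parameters
# play the role of the estimated variance and integral scale.
(sill_fit, length_fit), _ = curve_fit(exp_semivariogram, lags, gamma_hat,
                                      p0=[0.5, 1.0])
```

In the paper's setting the fitted sill and length would be mapped back to σ²Y and λ through the first-order HRI relationship rather than read off directly.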

  10. Evolution of sociality by natural selection on variances in reproductive fitness: evidence from a social bee

    OpenAIRE

    Stevens, Mark I; Hogendoorn, Katja; Schwarz, Michael P

    2007-01-01

    Abstract Background The Central Limit Theorem (CLT) is a statistical principle stating that, as the number of repeated samples from any population increases, the variance among sample means will decrease and the means will become more normally distributed. It has been conjectured that the CLT has the potential to provide benefits for group living in some animals, via greater predictability in food acquisition, if the number of foraging bouts increases with group size. The potential existence of ...
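The CLT effect invoked in the Background can be illustrated numerically. This is a minimal sketch with a made-up, skewed "food gain per bout" distribution; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
# Skewed per-bout food gain for a hypothetical forager population.
population = rng.exponential(scale=2.0, size=100_000)

def variance_of_sample_means(n_bouts, n_repeats=2000):
    """Variance across repeated sample means of n_bouts draws each."""
    means = rng.choice(population, size=(n_repeats, n_bouts)).mean(axis=1)
    return means.var()

v_small_group = variance_of_sample_means(5)    # few foraging bouts
v_large_group = variance_of_sample_means(50)   # many foraging bouts
# The variance of the mean scales like sigma^2 / n, so groups pooling more
# bouts face a more predictable average food return.
```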

  11. Genetic selection for increased mean and reduced variance of twinning rate in Belclare ewes.

    Science.gov (United States)

    Cottle, D J; Gilmour, A R; Pabiou, T; Amer, P R; Fahey, A G

    2016-04-01

    It is sometimes possible to breed for more uniform individuals by selecting animals with a greater tendency to be less variable, that is, those with a smaller environmental variance. This approach has been applied to reproduction traits in various animal species. We have evaluated fecundity in the Irish Belclare sheep breed by analysing flocks with differing average litter size (number of lambs per ewe per year, NLB) and have estimated the genetic variance in the environmental variance of lambing traits using double hierarchical generalized linear models (DHGLM). The data set comprised 9470 litter size records from 4407 ewes collected in 56 flocks. The percentages of pedigreed lambing ewes with singles, twins and triplets were 30, 54 and 14%, respectively, in 2013 and have been relatively constant for the last 15 years. The variance of NLB increases with the mean in these data; the correlation of mean and standard deviation across sires is 0.50. The breeding goal is to increase the mean NLB without unduly increasing the incidence of triplets and higher litter sizes. The heritability estimates for lambing traits were: NLB, 0.09; triplet occurrence (TRI), 0.07; and twin occurrence (TWN), 0.02. The highest and lowest twinning flocks differed by 23% (75% versus 52%) in the proportion of ewes lambing twins. Fitting bivariate sire models to NLB and the residual from the NLB model using a DHGLM revealed a strong genetic correlation (0.88 ± 0.07) between the sire effect for the magnitude of the residual (VE) and the sire effects for NLB, confirming the general observation that increased average litter size is associated with increased variability in litter size. We propose a threshold model that may help breeders with low litter size increase the percentage of twin bearers without unduly increasing the percentage of ewes bearing triplets in Belclare sheep. © 2015 Blackwell Verlag GmbH.

  12. Using discrete choice experiments to understand preferences for quality of life. Variance-scale heterogeneity matters.

    Science.gov (United States)

    Flynn, Terry Nicholas; Louviere, Jordan J; Peters, Tim J; Coast, Joanna

    2010-06-01

    Health services researchers are increasingly using discrete choice experiments (DCEs) to model a latent variable, be it health, health-related quality of life or utility. Unfortunately it is not widely recognised that failure to model variance heterogeneity correctly leads to bias in the point estimates. This paper compares variance heterogeneity latent class models with traditional multinomial logistic (MNL) regression models. Using the ICECAP-O quality of life instrument, which was designed to provide a set of preference-based general quality of life tariffs for the UK population aged 65+, it demonstrates that there is both mean and variance heterogeneity in preferences for quality of life, which covariate-adjusted MNL is incapable of separating. Two policy-relevant mean groups were found: one group that particularly disliked impairments to independence was dominated by females living alone (typically widows). Males who live alone (often widowers) did not display a preference for independence, but instead showed a strong aversion to social isolation, as did older people (of either sex) who lived with a spouse. Approximately 6-10% of respondents can be classified into a third group that often misunderstood the task. Having a qualification of any type and higher quality of life was associated with smaller random component variances. This illustrates how better understanding of random utility theory enables richer inferences to be drawn from discrete choice experiments. The methods have relevance for all health studies using discrete choice tasks to make inferences about a latent scale, particularly QALY valuation exercises that use DCEs, best-worst scaling and ranking tasks. Copyright 2010 Elsevier Ltd. All rights reserved.

  13. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept as measure of the gain that can be obtained from a specific cross accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion compared to selection based on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, like selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviation compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
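The usefulness criterion itself is simple to compute once a cross's progeny mean and genetic variance have been predicted. This hypothetical sketch ranks made-up crosses; the names, values, and the selection intensity i ≈ 2.665 (the standard-normal intensity for selecting roughly the top 1%) are illustrative, not from the paper.

```python
import numpy as np

def usefulness(mean_gebv, progeny_sd, selection_intensity=2.665):
    """Usefulness criterion UC = mu + i * sigma for a cross.

    i = 2.665 corresponds to selecting about the top 1% of progeny.
    """
    return mean_gebv + selection_intensity * progeny_sd

# Hypothetical crosses: (mean GEBV, predicted progeny genetic SD).
crosses = {"AxB": (10.0, 1.0), "CxD": (9.0, 2.0), "ExF": (10.5, 0.2)}
ranked = sorted(crosses, key=lambda c: usefulness(*crosses[c]), reverse=True)
# Note that a ranking on mean GEBV alone would put ExF first, whereas the
# usefulness criterion rewards CxD for its larger progeny variance.
```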

  14. What do differences between multi-voxel and univariate analysis mean? How subject-, voxel-, and trial-level variance impact fMRI analysis.

    Science.gov (United States)

    Davis, Tyler; LaRocque, Karen F; Mumford, Jeanette A; Norman, Kenneth A; Wagner, Anthony D; Poldrack, Russell A

    2014-08-15

    Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Stochastic Fractional Programming Approach to a Mean and Variance Model of a Transportation Problem

    Directory of Open Access Journals (Sweden)

    V. Charles

    2011-01-01

    Full Text Available In this paper, we propose a stochastic programming model that considers a ratio of two nonlinear functions and probabilistic constraints. Earlier work proposed only an expected-value model, without accounting for variability; conversely, the variance model addressed variability without considering its counterpart, the expected value. Further, the expected-value model optimizes the ratio of two linear cost functions, whereas the variance model optimizes the ratio of two non-linear functions; the stochastic nature of the numerator and denominator, together with consideration of both expectation and variability, leads to a non-linear fractional program. In this paper, a transportation model based on a stochastic fractional programming (SFP) approach is proposed, which strikes a balance between the previous models available in the literature.

  16. Occlusion removal method of partially occluded object using variance in computational integral imaging

    Science.gov (United States)

    Lee, Byung-Gook; Kang, Ho-Hyun; Kim, Eun-Soo

    2010-06-01

    Computational integral imaging is a promising technique for recognizing partially occluded 3D objects. From elemental images (EIs) of a partially occluded 3D object, the plane image of the object of interest is reconstructed, at the location where the object was originally located, using a computational integral imaging reconstruction (CIIR) algorithm. However, the occlusion degrades the resolution of the reconstructed image, because its defocused, blurred image is superimposed at the same time. To overcome this problem, we propose a novel occlusion removal method for partially occluded 3D objects in computational integral imaging. In the proposed method, we use the variance of the ray intensity distribution emitted from the EIs: a series of variance plane images computed from the EIs is used to estimate the area and distance of the occlusion, since the intensity variance of the focused plane image is lowest at the occlusion location. On the basis of the extracted information, the occlusion is simply eliminated from the EIs. The plane images are then reconstructed with the CIIR algorithm from the occlusion-removed EIs, yielding an improved high-resolution plane image. Experiments were carried out to show the feasibility of the proposed scheme, and their results are presented.
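The variance-based localization step can be sketched with synthetic variance plane images. The array sizes, the occluder's depth index, and the thresholds below are all hypothetical stand-ins for what the CIIR pipeline would actually produce.

```python
import numpy as np

rng = np.random.default_rng(0)
n_depths, H, W = 10, 32, 32
# Hypothetical "variance plane images": var_maps[d, y, x] holds the variance
# of the ray intensities contributing to pixel (y, x) when the EIs are
# refocused at depth d. Out-of-focus content keeps a high ray variance,
# while a focused opaque occluder yields a very low one.
var_maps = rng.uniform(0.2, 0.4, size=(n_depths, H, W))
var_maps[3, 8:16, 8:16] = rng.uniform(0.0, 0.02, size=(8, 8))  # occluder at depth 3

# Estimate the occlusion distance as the depth whose variance image attains
# the lowest value, and its area as the low-variance region at that depth.
est_depth = int(var_maps.min(axis=(1, 2)).argmin())
occlusion_mask = var_maps[est_depth] < 0.05
```

In the actual method, the pixels flagged by `occlusion_mask` would be removed from the EIs before rerunning CIIR.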

  17. On the Design of Attitude-Heading Reference Systems Using the Allan Variance.

    Science.gov (United States)

    Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis

    2016-04-01

    The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has since been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results over short time intervals, but their accuracy rapidly degrades over longer intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering are not trivial tasks, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling, up to the final integration in the sensor-fusion scheme, is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
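The core computation can be sketched in a few lines. This is a minimal non-overlapping Allan variance applied to a synthetic white-noise gyroscope signal; the sample rate, noise level, and averaging times are illustrative, not from the paper.

```python
import numpy as np

def allan_variance(rate, fs, taus):
    """Non-overlapping Allan variance of a rate signal sampled at fs Hz.

    For each averaging time tau, the signal is averaged in consecutive
    bins of m = tau * fs samples, and the Allan variance is half the mean
    squared difference of successive bin averages.
    """
    rate = np.asarray(rate, dtype=float)
    avar = []
    for tau in taus:
        m = int(round(tau * fs))
        n_bins = rate.size // m
        bins = rate[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        avar.append(0.5 * np.mean(np.diff(bins) ** 2))
    return np.array(avar)

rng = np.random.default_rng(0)
fs = 100.0                                    # sample rate [Hz]
gyro = rng.normal(0.0, 0.1, size=200_000)     # white-noise rate signal
taus = [0.1, 1.0, 10.0]
adev = np.sqrt(allan_variance(gyro, fs, taus))
# For pure white noise the Allan deviation falls as 1/sqrt(tau) -- the
# slope used to read off angle random walk on a log-log Allan plot.
```

Characterizing a real sensor would use long static recordings and many more tau values, with different noise terms identified from the slopes of the resulting curve.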

  18. Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations

    International Nuclear Information System (INIS)

    Dumonteil, E.; Malvagi, F.

    2012-01-01

    The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in radiation protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed, the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWRs or large nuclear cores, equilibrium is never found, or at least may take a long time to reach, and the variance estimation allowed by the CLT is under-evaluated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimation. The two methods are based on Fourier spectral decomposition and on the lag-k autocorrelation calculation. A theoretical model of the autocorrelation function, based on Gauss-Markov stochastic processes, is also presented. Tests are performed with Tripoli4 on a PWR pin cell. (authors)
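The lag-k autocorrelation diagnosis can be sketched on synthetic cycle estimates. The AR(1) stand-in for generation-to-generation coupling and the simple effective-sample-size inflation below are illustrative heuristics, not the paper's Gauss-Markov treatment.

```python
import numpy as np

def lag_autocorr(x, k):
    """Sample lag-k autocorrelation."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc)

def corrected_variance_of_mean(cycles, max_lag=50):
    """Naive variance of the mean, inflated for positive cycle correlations."""
    n = len(cycles)
    naive = np.var(cycles, ddof=1) / n
    rho_sum = sum(max(lag_autocorr(cycles, k), 0.0)
                  for k in range(1, max_lag + 1))
    return naive * (1.0 + 2.0 * rho_sum)

rng = np.random.default_rng(0)
phi, n = 0.8, 5000                  # AR(1) coupling between successive cycles
eps = rng.normal(size=n)
cycles = np.empty(n)
cycles[0] = eps[0]
for t in range(1, n):
    cycles[t] = phi * cycles[t - 1] + eps[t]

v_naive = np.var(cycles, ddof=1) / n        # what an i.i.d. assumption reports
v_corrected = corrected_variance_of_mean(cycles)
# For AR(1) the true inflation factor is about (1 + phi) / (1 - phi) = 9,
# so the naive CLT-based error bar is badly under-evaluated here.
```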

  19. Relative turbulent transport efficiency and flux-variance relationships of temperature and water vapor

    Science.gov (United States)

    Hsieh, C. I.

    2016-12-01

    This study investigated the relative transport efficiency and the flux-variance relationships of temperature and water vapor, and examined the performance of the flux-variance method for predicting sensible heat (H) and water vapor (LE) fluxes, using eddy-covariance flux measurements from three different ecosystems: grassland, paddy rice field, and forest. The H and LE estimates were found to be in good agreement with the measurements over the three fields. The prediction accuracy of LE could be improved by around 15% if the predictions were obtained by the flux-variance method in conjunction with measured sensible heat fluxes. Moreover, the paddy rice field was found to be a special case in which water vapor follows the flux-variance relation better than heat does. The flux budget equations of heat and water vapor were applied to explain this phenomenon. Our results also showed that heat and water vapor were transported with the same efficiency above the grassland and the rice paddy. For the forest, heat was transported 20% more efficiently than evapotranspiration.

  20. On the mean and variance of the writhe of random polygons.

    Science.gov (United States)

    Portillo, J; Diao, Y; Scharein, R; Arsuaga, J; Vazquez, M

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could serve as a model of circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero, then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an "ideal" conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the polygon.

  1. Using regression heteroscedasticity to model trends in the mean and variance of floods

    Science.gov (United States)

    Hecht, Jory; Vogel, Richard

    2015-04-01

    Changes in the frequency of extreme floods have been observed and anticipated in many hydrological settings in response to numerous drivers of environmental change, including climate, land cover, and infrastructure. To help decision-makers design flood control infrastructure in settings with non-stationary hydrological regimes, a parsimonious approach for detecting and modeling trends in extreme floods is needed. An approach using ordinary least squares (OLS) to fit a heteroscedastic regression model can accommodate nonstationarity in both the mean and variance of flood series while simultaneously offering a means of (i) analytically evaluating type I and type II trend detection errors, (ii) analytically generating expressions of uncertainty, such as confidence and prediction intervals, (iii) providing updated estimates of the frequency of floods exceeding the flood of record, (iv) accommodating a wide range of non-linear functions through ladder of powers transformations, and (v) communicating hydrological changes in a single graphical image. Previous research has shown that the two-parameter lognormal distribution can adequately model the annual maximum flood distribution of both stationary and non-stationary hydrological regimes in many regions of the United States. A simple logarithmic transformation of annual maximum flood series enables an OLS heteroscedastic regression modeling approach to be especially suitable for creating a non-stationary flood frequency distribution with parameters that are conditional upon time or physically meaningful covariates. While heteroscedasticity is often viewed as an impediment, we document how detecting and modeling heteroscedasticity presents an opportunity for characterizing both the conditional mean and variance of annual maximum floods. We introduce an approach through which variance trend models can be analytically derived from the behavior of residuals of the conditional mean flood model. Through case studies of
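The two-stage idea (an OLS conditional-mean model for log annual maxima, with a variance trend derived from the behavior of the residuals) can be sketched on synthetic data. The trend magnitudes, record length, and lognormal parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2020)
t = years - years.mean()
# Synthetic annual-maximum floods: lognormal with trends in both the
# conditional mean and the conditional standard deviation of log Q.
mu = 5.0 + 0.01 * t
sigma = np.exp(-1.0 + 0.01 * t)
log_q = rng.normal(mu, sigma)

# (i) Conditional-mean model: OLS of log Q on time.
X = np.column_stack([np.ones_like(t), t])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)
resid = log_q - X @ beta

# (ii) Variance model derived from the residuals: regress log(resid^2) on
# time; the fitted slope estimates twice the trend in log sigma, exposing
# the heteroscedasticity as a signal rather than a nuisance.
gamma, *_ = np.linalg.lstsq(X, np.log(resid ** 2), rcond=None)
sigma_trend = gamma[1] / 2.0
```

With both fits in hand, a time-conditional two-parameter lognormal flood frequency distribution follows directly from mu(t) and sigma(t).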

  2. An R package "VariABEL" for genome-wide searching of potentially interacting loci by testing genotypic variance heterogeneity

    Directory of Open Access Journals (Sweden)

    Struchalin Maksim V

    2012-01-01

    Full Text Available Abstract Background Hundreds of new loci have been discovered by genome-wide association studies of human traits. These studies have mostly focused on associations between a single locus and a trait. Interactions between genes, and between genes and environmental factors, are of interest because they can improve our understanding of the genetic background underlying complex traits. Genome-wide testing of complex genetic models is a computationally demanding task. Moreover, testing such models leads to multiple-comparison problems that reduce the probability of new findings. Given that the genetic model underlying a complex trait can include hundreds of genes and environmental factors, testing these models in genome-wide association studies presents substantial difficulties. We, and Paré and colleagues (2010), developed a method to overcome such difficulties. The method is based on the fact that loci involved in interactions can show genotypic variance heterogeneity of a trait. Genome-wide testing of such heterogeneity can serve as a fast scanning approach that points to the interacting genetic variants. Results In this work we present a new method, SVLM, allowing variance heterogeneity analysis of imputed genetic variation. The type I error and power of this test are investigated and contrasted with those of Levene's test. We also present an R package, VariABEL, implementing existing and newly developed tests. Conclusions Variance heterogeneity analysis is a promising method for the detection of potentially interacting loci. The new method and software package developed in this work will facilitate such analysis in a genome-wide context.
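The core per-variant scan can be sketched with Levene's test (Brown-Forsythe variant) on trait values grouped by genotype. This is a hedged illustration in Python rather than the package's R implementation, and the simulated genotype coding and variance effect are made up.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
n = 3000
genotype = rng.integers(0, 3, size=n)        # 0/1/2 copies of the minor allele
# Hypothetical trait: the variant changes the trait *variance* (as an
# unmodelled interaction would) without shifting the trait mean.
trait = rng.normal(0.0, 1.0 + 0.4 * genotype)

groups = [trait[genotype == g] for g in (0, 1, 2)]
stat, p = levene(*groups, center="median")   # Brown-Forsythe variant
# A small p flags this locus as a candidate for interaction follow-up,
# even though a standard mean-association test would find nothing.
```

A genome-wide scan repeats this per variant; SVLM additionally handles imputed dosages, which a plain group-based Levene test cannot.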

  3. Population structure and morphometric variance of the Apis mellifera scutellata group of honeybees in Africa

    Directory of Open Access Journals (Sweden)

    Sarah Radloff

    2000-06-01

    Full Text Available The honeybee populations of Africa classified as Apis mellifera scutellata Lepeletier were analysed morphometrically using multivariate statistical techniques. The collection consisted of nearly 15,000 worker honeybees from 825 individual colonies at 193 localities in east Africa, extending from South Africa to Ethiopia. Factor analysis established one primary cluster, designated as A. m. scutellata. Morphocluster formation and inclusivity (correct classification) are highly sensitive to sampling distance intervals. Within the A. m. scutellata region are larger bees associated with the high altitudes of mountain systems, which are traditionally classified as A. m. monticola Smith, but it is evident that these bees do not form a uniform group. Variance characteristics of the morphometric measurements show domains of significantly different local populations. These high-variance populations mostly occur at the transitional edges of major climatic and vegetational zones, and sometimes coincide with more localised discontinuities in temperature. It is also now evident that those A. m. scutellata introduced nearly fifty years ago into the Neotropics were a particularly homogeneous sample, which exhibited all the traits expected in a founder effect or bottleneck population.

  4. Reduction of bias and variance for evaluation of computer-aided diagnostic schemes

    International Nuclear Information System (INIS)

    Li Qiang; Doi Kunio

    2006-01-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes
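One of the typical pitfalls (performing feature selection on the full dataset before leave-one-out evaluation) can be demonstrated with a small simulation on pure noise, where the true accuracy is 50%. The classifier, sample sizes, and feature counts below are arbitrary choices for illustration, not those of the paper's Monte Carlo experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k_top = 60, 500, 10
X = rng.normal(size=(n, p))                # pure-noise "image features"
y = rng.integers(0, 2, size=n)             # random labels: chance level is 0.5

def nearest_mean_loo(Xs, y):
    """Leave-one-out accuracy of a nearest-class-mean classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        m0 = Xs[mask & (y == 0)].mean(axis=0)
        m1 = Xs[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(Xs[i] - m1) < np.linalg.norm(Xs[i] - m0))
        correct += pred == y[i]
    return correct / len(y)

# Pitfall: pick the k most class-correlated features using ALL samples,
# including each later-held-out case, before running leave-one-out.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
biased_acc = nearest_mean_loo(X[:, np.argsort(corr)[-k_top:]], y)
# biased_acc lands well above the 0.5 chance level despite there being no
# signal; redoing the feature selection inside each fold removes the bias.
```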

  5. Antibiotic exposure and interpersonal variance mask the effect of ivacaftor on respiratory microbiota composition.

    Science.gov (United States)

    Peleg, Anton Y; Choo, Jocelyn M; Langan, Katherine M; Edgeworth, Deirdre; Keating, Dominic; Wilson, John; Rogers, Geraint B; Kotsimbos, Tom

    2018-01-01

    G551D is a class III mutation of the cystic fibrosis transmembrane regulator (CFTR) that results in impaired chloride channel function in cystic fibrosis (CF). Ivacaftor, a CFTR-potentiating agent, improves sweat chloride, weight, lung function, and pulmonary exacerbation rate in CF patients with G551D mutations, but its effect on the airway microbiome remains poorly characterised. Twenty CF patients with at least one G551D mutation from a single centre were recruited to a 4-month double-blind, placebo-controlled, crossover study of ivacaftor with 28 days of active treatment. Sputum microbiota composition was assessed by 16S rRNA gene amplicon sequencing and quantitative PCR at five key time points, along with regular clinical review, respiratory function assessment, and peripheral blood testing. No significant difference in microbiota composition was observed in subjects following ivacaftor treatment or placebo (PERMANOVA P=0.95, square root ECV=-4.94, 9479 permutations). Variance in microbiota composition was significantly greater between subjects than within subjects over time; an analysis of within-subject microbiota similarity was therefore performed. Again, the change in microbiota composition was not significantly greater during treatment with ivacaftor compared to placebo (Wilcoxon test, P=0.51). A significant change in microbiota composition was, however, associated with any change in antibiotic exposure, regardless of whether ivacaftor or placebo was administered (P=0.006). In a small subgroup analysis of subjects whose antibiotic exposure did not change within the study period, a significant reduction in total bacterial load was observed during treatment with ivacaftor (P=0.004, two-tailed paired Student's t-test). The short-term impact of ivacaftor therapy on sputum microbiota composition in patients with G551D mutations is modest compared to that resulting from antibiotic exposure, and may be masked by changes in antibiotic treatment regimen. Copyright © 2017 European Cystic Fibrosis

  6. Quantitative milk genomics: estimation of variance components and prediction of fatty acids in bovine milk

    DEFF Research Database (Denmark)

    Krag, Kristian

    The composition of bovine milk fat, used for human consumption, is far from the recommendations for human fat nutrition. The aim of this PhD was to describe the variance components and prediction probabilities of individual fatty acids (FA) in bovine milk, and to evaluate the possibilities...... be estimated from SNP markers, with a performance similar to traditional pedigree approaches. The heritability and correlation estimates indicate that the composition of saturated FA and unsaturated FA can be altered independently, through selection and regulation of feeding regimes. For the prediction FA...

  7. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars Peter; Janss, Luc; Madsen, Per

    2012-01-01

    traits such as mammary disease traits in dairy cattle. METHODS: Data on progeny means of six traits related to mastitis resistance in dairy cattle (general mastitis resistance and five pathogen-specific mastitis resistance traits) were analyzed using a bivariate Bayesian SNP-based genomic model......, per chromosome, and in regions of 100 SNP on a chromosome. RESULTS: Genomic proportions of the total variance differed between traits. Genomic correlations were lower than pedigree-based genetic correlations and they were highest between general mastitis and pathogen-specific traits because...

  8. In Search of a Theory: The Interpretative Challenge of Empirical Findings on Cultural Variance in Mindreading

    Directory of Open Access Journals (Sweden)

    Gut Arkadiusz

    2016-12-01

    Full Text Available In this paper, we present a battery of empirical findings on the relationship between cultural context and theory of mind that show great variance in the onset and character of mindreading in different cultures; discuss problems that those findings cause for the largely-nativistic outlook on mindreading dominating in the literature; and point to an alternative framework that appears to better accommodate the evident cross-cultural variance in mindreading. We first outline the theoretical frameworks that dominate in mindreading research, then present the relevant empirical findings, and finally we come back to the theoretical approaches in a discussion of their explanatory potential in the face of the data presented. The theoretical frameworks discussed are the two-systems approach; performance-based approach also known as modularity-nativist approach; and the social-communicative theory also known as the systems, relational-systems, dynamic systems and developmental systems theory. The former two, which both fall within the wider modular-computational paradigm, run into a challenge with the cross-cultural data presented, and the latter - the systemic framework - seems to offer an explanatorily potent alternative. The empirical data cited in this paper comes from research on cross-cultural differences in folk psychology and theory-of-mind development; the influence of parenting practices on the development of theory of mind; the development and character of theory of mind in deaf populations; and neuroimaging research of cultural differences in mindreading.

  9. Use of an excess variance approach for the certification of reference materials by interlaboratory comparison

    International Nuclear Information System (INIS)

    Crozet, M.; Rigaux, C.; Roudil, D.; Tuffery, B.; Ruas, A.; Desenfant, M.

    2014-01-01

    In the nuclear field, the accuracy and comparability of analytical results are crucial to ensure correct accountancy, good process control and safe operational conditions. All of these require reliable measurements based on reference materials whose certified values must be obtained by robust metrological approaches according to the requirements of ISO guides 34 and 35. The data processing of the characterization step is one of the key steps of a reference material production process. Among several methods, the use of interlaboratory comparison results for reference material certification is very common. The DerSimonian and Laird excess variance approach, described and implemented in this paper, is a simple and efficient method for the data processing of interlaboratory comparison results for reference material certification. By taking into account not only the laboratory uncertainties but also the spread of the individual results in the calculation of the weighted mean, this approach minimizes the risk of obtaining biased certified values when one or several laboratories either underestimate their measurement uncertainties or do not identify all measurement biases. This statistical method has been applied to a new CETAMA plutonium reference material certified by interlaboratory comparison and has been compared to the classical weighted mean approach described in ISO Guide 35. This paper shows the benefits of using an 'excess variance' approach for the certification of reference material by interlaboratory comparison. (authors)
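
    The DerSimonian and Laird computation the record describes can be sketched in a few lines. The laboratory values and uncertainties below are made up for illustration, not the CETAMA data:

```python
import math

def dl_weighted_mean(values, uncertainties):
    """DerSimonian-Laird 'excess variance' weighted mean.

    values: laboratory means; uncertainties: their standard uncertainties.
    Returns (random-effects weighted mean, excess variance tau^2).
    """
    v = [u * u for u in uncertainties]          # within-lab variances
    w = [1.0 / vi for vi in v]                  # fixed-effect weights
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, values)) / sw
    # Cochran's Q: weighted spread of lab results about the fixed-effect mean
    q = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, values))
    k = len(values)
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)          # excess (between-lab) variance
    # random-effects weights add the excess variance to each lab variance
    wstar = [1.0 / (vi + tau2) for vi in v]
    mean = sum(wi * xi for wi, xi in zip(wstar, values)) / sum(wstar)
    return mean, tau2

# Three hypothetical labs whose spread exceeds their stated uncertainties
mean, tau2 = dl_weighted_mean([10.0, 10.2, 9.8], [0.1, 0.1, 0.1])
print(mean, tau2)
```

When the labs scatter more than their quoted uncertainties allow, tau2 comes out positive and inflates each lab's weight denominator, which is exactly the mechanism that protects the certified value from over-confident laboratories.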

  10. Sex chromosome linked genetic variance and the evolution of sexual dimorphism of quantitative traits.

    Science.gov (United States)

    Husby, Arild; Schielzeth, Holger; Forstmeier, Wolfgang; Gustafsson, Lars; Qvarnström, Anna

    2013-03-01

    Theory predicts that sex chromosome linkage should reduce intersexual genetic correlations, thereby allowing the evolution of sexual dimorphism. Empirical evidence for sex linkage has come largely from crosses and few studies have examined how sexual dimorphism and sex linkage are related within outbred populations. Here, we use data on an array of different traits measured on over 10,000 individuals from two pedigreed populations of birds (collared flycatcher and zebra finch) to estimate the amount of sex-linked genetic variance (h(2)Z). Of 17 traits examined, eight showed a nonzero h(2)Z estimate but only four were significantly different from zero (wing patch size and tarsus length in collared flycatchers, wing length and beak color in zebra finches). We further tested how sexual dimorphism and the mode of selection operating on the trait relate to the proportion of sex-linked genetic variance. Sexually selected traits did not show higher h(2)Z than morphological traits and there was only a weak positive relationship between h(2)Z and sexual dimorphism. However, given the relative scarcity of empirical studies, it is premature to make conclusions about the role of sex chromosome linkage in the evolution of sexual dimorphism. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.

  11. The Relation of Hand and Arm Configuration Variances while Tracking Geometric Figures in Parkinson's Disease: Aspects for Rehabilitation

    Science.gov (United States)

    Keresztenyi, Zoltan; Cesari, Paola; Fazekas, Gabor; Laczko, Jozsef

    2009-01-01

    Variances of drawing arm movements between patients with Parkinson's disease and healthy controls were compared. The aim was to determine whether differences in joint synergies or individual joint rotations affect the endpoint (hand position) variance. Joint and endpoint coordinates were measured while participants performed drawing tasks.…

  12. Enhancement of high-energy distribution tail in Monte Carlo semiconductor simulations using a Variance Reduction Scheme

    Directory of Open Access Journals (Sweden)

    Vincenza Di Stefano

    2009-11-01

    Full Text Available The Multicomb variance reduction technique has been introduced in the Direct Monte Carlo Simulation for submicrometric semiconductor devices. The method has been implemented in bulk silicon. The simulations show that the statistical variance of hot electrons is reduced with some computational cost. The method is efficient and easy to implement in existing device simulators.

  13. The variance of sodium current fluctuations at the node of Ranvier.

    Science.gov (United States)

    Sigworth, F J

    1980-10-01

    1. Single myelinated nerve fibres 12-17 μm in diameter from Rana temporaria and Rana pipiens were voltage clamped at 2-5 degrees C. Potassium currents were blocked by internal Cs(+) and external tetraethylammonium ion. Series resistance compensation was employed.2. Sets of 80-512 identical, 20 ms depolarizations were applied, with the pulses repeated at intervals of 300-600 ms. The resulting membrane current records, filtered at 5 kHz, showed record-to-record variations of the current on the order of 1%. From each set of records the time course of the mean current and the time course of the variance were calculated.3. The variance was assumed to arise primarily from two independent sources of current fluctuations: the stochastic gating of sodium channels and the thermal noise background in the voltage clamp. Measurement of the passive properties of the nerve preparation allowed the thermal noise variance to be estimated, and these estimates accounted for the variance observed in the presence of tetrodotoxin and at the reversal potential.4. After the variance sigma(2) was corrected for the contribution from the background, its relationship to the mean current I could be fitted by the function sigma(2) = iI-I(2)/N expected for N independent channels having one non-zero conductance level. The single channel currents i corresponded to a single-channel chord conductance gamma = 6.4 +/- 0.9 pS (S.D.; n = 14). No significant difference in gamma was observed between the two species of frogs. The size of the total population of channels ranged from 20,000 to 46,000.5. The voltage dependence of i corresponded closely to the form of the instantaneous current-voltage relationship of the sodium conductance, except at the smallest depolarizations. The small values of i at small depolarizations may have resulted from the filtering of high-frequency components of the fluctuations.6. 
It is concluded that sodium channels have only two primary levels of conductance, corresponding to
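
    The parabolic fit sigma(2) = iI - I(2)/N used in point 4 above is easy to reproduce on synthetic data. The parameter values here are illustrative choices in the range the abstract reports, not the paper's measurements:

```python
import numpy as np

# Nonstationary noise analysis: sigma^2 = i*I - I^2/N for N independent
# channels with a single non-zero conductance level and unitary current i.
i_true, N_true = 1.5, 20000.0                  # assumed values (pA, channels)
I = np.linspace(100.0, 15000.0, 50)            # mean current (pA)
var = i_true * I - I**2 / N_true               # noiseless variance (pA^2)

# Linear least squares in the basis [I, -I^2]: coefficients are (i, 1/N)
A = np.column_stack([I, -I**2])
coef, *_ = np.linalg.lstsq(A, var, rcond=None)
i_fit, inv_N = coef
print(i_fit, 1.0 / inv_N)
```

Because the model is linear in i and 1/N, no nonlinear optimiser is needed; on real records one would first subtract the thermal-noise background variance, as the abstract describes.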

  14. Thermal infrared sounding observations of lower atmospheric variances at Mars and their implications for gravity wave activity: a preliminary examination

    Science.gov (United States)

    Heavens, N. G.

    2017-12-01

    It has been recognized for over two decades that the mesoscale statistical variance observed by Earth-observing satellites at temperature-sensitive frequencies above the instrumental noise floor is a measure of gravity wave activity. These types of observations, made by a variety of satellite instruments, have been an important validation tool for gravity wave parameterizations in global and mesoscale models. At Mars, the importance of topographic and non-topographic sources of gravity waves for the general circulation is now widely recognized and the target of recent modeling efforts. However, despite several ingenious studies, gravity wave activity near hypothetical lower atmospheric sources has been poorly and unsystematically characterized, partly because of the difficulty of separating the gravity wave activity from baroclinic wave activity and the thermal tides. Here will be presented a preliminary analysis of calibrated radiance variance at 15.4 microns (635-665 cm-1) from nadir, off-nadir, and limb observations by the Mars Climate Sounder on board Mars Reconnaissance Orbiter. The overarching methodology follows Wu and Waters (1996, 1997). Nadir, off-nadir, and lowest detector limb observations should sample variability with vertical weighting functions centered high in the lower atmosphere (20-30 km altitude) with a full width at half maximum (FWHM) of 20 km, but be sensitive to gravity waves with different horizontal wavelengths and slightly different vertical wavelengths. This work is supported by NASA's Mars Data Analysis Program (NNX14AM32G). References Wu, D.L. and J.W. Waters, 1996, Satellite observations of atmospheric variances: A possible indication of gravity waves, GRL, 23, 3631-3634. Wu D.L. and J.W. Waters, 1997, Observations of Gravity Waves with the UARS Microwave Limb Sounder. In: Hamilton K. (eds) Gravity Wave Processes. NATO ASI Series (Series I: Environmental Change), vol 50. Springer, Berlin, Heidelberg.

  15. The modified Mann-Kendall test: on the performance of three variance correction approaches

    OpenAIRE

    Blain,Gabriel Constantino

    2013-01-01

    The Mann-Kendall test has been used to detect climate trends in several parts of the Globe. Three variance correction approaches (MKD, MKDD and MKRD) have been proposed to remove the influence of serial correlation on this trend test. Thus, the main goal of this study was to evaluate the probability of occurrence of types I and II errors associated with these three approaches. The results obtained by means of Monte Carlo simulations and from a case of study allowed us to draw the following c...
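
    For context, the classical Mann-Kendall test underlying the three corrected variants can be sketched as follows. The correction approaches the record compares (MKD, MKDD, MKRD) rescale Var(S) by a factor estimated from the series' serial correlation; this sketch implements only the uncorrected test, for a series with no tied values:

```python
import math

def mann_kendall(x):
    """Classical Mann-Kendall trend test (no serial-correlation correction).

    Returns (S, Var(S), Z) for a series without ties. A positive,
    significant Z (e.g. |Z| > 1.96 at the 5% level) indicates a trend.
    """
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum((xj > xi) - (xj < xi)
            for i, xi in enumerate(x) for xj in x[i + 1:])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0    # no-ties variance formula
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)          # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, var_s, z

print(mann_kendall([1, 2, 3, 4, 5]))
```

Serial correlation inflates the effective Var(S), which is why applying this uncorrected form to autocorrelated climate series raises the type I error rate the study investigates.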

  16. Daily Goals Formulation and Enhanced Visualization of Mechanical Ventilation Variance Improves Mechanical Ventilation Score.

    Science.gov (United States)

    Walsh, Brian K; Smallwood, Craig; Rettig, Jordan; Kacmarek, Robert M; Thompson, John; Arnold, John H

    2017-03-01

    The systematic implementation of evidence-based practice through the use of guidelines, checklists, and protocols mitigates the risks associated with mechanical ventilation, yet variation in practice remains prevalent. Recent advances in software and hardware have allowed for the development and deployment of an enhanced visualization tool that identifies mechanical ventilation goal variance. Our aim was to assess the utility of daily goal establishment and a computer-aided visualization of variance. This study was composed of 3 phases: a retrospective observational phase (baseline) followed by 2 prospective sequential interventions. Phase I intervention comprised daily goal establishment of mechanical ventilation. Phase II intervention was the setting and monitoring of daily goals of mechanical ventilation with a web-based data visualization system (T3). A single score of mechanical ventilation was developed to evaluate the outcome. The baseline phase evaluated 130 subjects, phase I enrolled 31 subjects, and phase II enrolled 36 subjects. There were no differences in demographic characteristics between cohorts. A total of 171 verbalizations of goals of mechanical ventilation were completed in phase I. The use of T3 increased by 87% from phase I. Mechanical ventilation score improved by 8.4% in phase I and 11.3% in phase II from baseline ( P = .032). The largest effect was in the low risk V T category, with a 40.3% improvement from baseline in phase I, which was maintained at 39% improvement from baseline in phase II ( P = .01). Mechanical ventilation score was 9% higher on average in those who survived. Daily goal formation and computer-enhanced visualization of mechanical ventilation variance were associated with an improvement in goal attainment by evidence of an improved mechanical ventilation score. Further research is needed to determine whether improvements in mechanical ventilation score through a targeted, process-oriented intervention will lead to

  17. GARCH based artificial neural networks in forecasting conditional variance of stock returns

    Directory of Open Access Journals (Sweden)

    Josip Arnerić

    2014-12-01

    Full Text Available Portfolio managers, option traders and market makers are all interested in volatility forecasting in order to get higher profits or less risky positions. Based on the fact that volatility is time varying in high frequency data and that periods of high volatility tend to cluster, the most popular models in modelling volatility are GARCH type models because they can account for excess kurtosis and asymmetric effects of financial time series. A standard GARCH(1,1) model usually indicates high persistence in the conditional variance, which may originate from structural changes. The first objective of this paper is to develop a parsimonious neural networks (NN) model, which can capture the nonlinear relationship between past return innovations and conditional variance. Therefore, the goal is to develop a neural network with an appropriate recurrent connection in the context of nonlinear ARMA models, i.e., the Jordan neural network (JNN). The second objective of this paper is to determine if the JNN outperforms the standard GARCH model. Out-of-sample forecasts of the JNN and the GARCH model will be compared to determine their predictive accuracy. The data set consists of returns of the CROBEX index daily closing prices obtained from the Zagreb Stock Exchange. The results indicate that the selected JNN(1,1,1) model has superior performance compared to the standard GARCH(1,1) model. The contribution of this paper can be seen in determining the appropriate NN that is comparable to the standard GARCH(1,1) model and its application in forecasting the conditional variance of stock returns. Moreover, from the econometric perspective, NN models are used as a semi-parametric method that combines the flexibility of nonparametric methods and the interpretability of parameters of parametric methods.
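
    The GARCH(1,1) conditional variance recursion the paper benchmarks against is short enough to write out directly. The parameter values and return innovations below are arbitrary illustrations, chosen so that alpha + beta = 0.95 shows the "high persistence" the abstract mentions:

```python
# GARCH(1,1) conditional variance recursion:
#   sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]
omega, alpha, beta = 0.1, 0.05, 0.90           # assumed; alpha + beta < 1
returns = [0.5, -1.2, 0.3, 2.1, -0.4]          # toy return innovations

# Initialise at the unconditional variance omega / (1 - alpha - beta)
sigma2 = [omega / (1 - alpha - beta)]
for eps in returns[:-1]:
    sigma2.append(omega + alpha * eps**2 + beta * sigma2[-1])

print(sigma2)
```

Each large squared innovation raises the next period's variance, and with beta near one the effect decays slowly, producing the volatility clustering that both the GARCH model and the Jordan neural network aim to capture.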

  18. Effect of sequence variants on variance in glucose levels predicts type 2 diabetes risk and accounts for heritability.

    Science.gov (United States)

    Ivarsdottir, Erna V; Steinthorsdottir, Valgerdur; Daneshpour, Maryam S; Thorleifsson, Gudmar; Sulem, Patrick; Holm, Hilma; Sigurdsson, Snaevar; Hreidarsson, Astradur B; Sigurdsson, Gunnar; Bjarnason, Ragnar; Thorsson, Arni V; Benediktsson, Rafn; Eyjolfsson, Gudmundur; Sigurdardottir, Olof; Olafsson, Isleifur; Zeinali, Sirous; Azizi, Fereidoun; Thorsteinsdottir, Unnur; Gudbjartsson, Daniel F; Stefansson, Kari

    2017-09-01

    Sequence variants that affect mean fasting glucose levels do not necessarily affect risk for type 2 diabetes (T2D). We assessed the effects of 36 reported glucose-associated sequence variants on between- and within-subject variance in fasting glucose levels in 69,142 Icelanders. The variant in TCF7L2 that increases fasting glucose levels increases between-subject variance (5.7% per allele, P = 4.2 × 10^-10), whereas variants in GCK and G6PC2 that increase fasting glucose levels decrease between-subject variance (7.5% per allele, P = 4.9 × 10^-11 and 7.3% per allele, P = 7.5 × 10^-18, respectively). Variants that increase mean and between-subject variance in fasting glucose levels tend to increase T2D risk, whereas those that increase the mean but reduce variance do not (r^2 = 0.61). The variants that increase between-subject variance increase fasting glucose heritability estimates. Intuitively, our results show that increasing the mean and variance of glucose levels is more likely to cause pathologically high glucose levels than an increase in the mean offset by a decrease in variance.

  19. Restriction of Variance Interaction Effects and Their Importance for International Business Research

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Nielsen, Bo Bernhard

    2015-01-01

    A recent Journal of International Business Studies editorial on interaction effects within and across levels highlighted the importance of and difficulty associated with justifying and reporting of such interaction effects. The purpose of this editorial is to describe a type of interaction...... hypothesis that is very common in international business (IB) research: the restricted variance (RV) hypothesis. Specifically, we describe the nature of an RV interaction and its evidentiary requirements. We also offer several IB examples involving interactions that could have been supported with RV...

  20. The Achilles Heel of Normal Determinations via Minimum Variance Techniques: Worldline Dependencies

    Science.gov (United States)

    Ma, Z.; Scudder, J. D.; Omidi, N.

    2002-12-01

    Time series of data collected across current layers are usually organized by divining coordinate transformations (as from minimum variance) that permit a geometrical interpretation of the data collected. Almost without exception the current layer geometry is inferred by supposing that the current-carrying layer is locally planar. Only after this geometry is ``determined'' can the various quantities predicted by theory be calculated, the precision of ``measured'' reconnection rates evaluated, and the quantitative support for or against component reconnection be assessed. This paper defines worldline traversals across fully resolved Hall two-fluid models of reconnecting current sheets (with varying sizes of guide fields) and across a 2-D hybrid solution of a supercritical shock layer. Along each worldline various variance techniques are used to infer current sheet normals based on the data observed along that worldline alone. We then contrast these inferred normals with those known from the overview of the fully resolved spatial pictures of the layer. Absolute errors of 20 degrees in the normal are quite commonplace, but errors of 40-90 degrees are also implied, especially for worldlines that make more and more oblique angles to the true current sheet normal. These mistaken ``inferences'' are traceable to the degree to which the data collected sample 2-D variations within these layers. While it is not surprising that these variance techniques give incorrect normals in the presence of layers that possess 2-D variations, it is illuminating that such large errors need not be signalled by the traditional error formulae for the error cones on normals that have been previously used to estimate the errors of normal choices. Frequently the absolute errors, which depend on the worldline path, can be 10 times the random error that formulae would predict based on eigenvalues of the covariance matrix. A given time series cannot be associated in any a priori way with a specific worldline
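
    The minimum variance technique being critiqued above can be sketched in a few lines: the inferred normal is the eigenvector of the field covariance matrix with the smallest eigenvalue. The synthetic series below is ideal (purely 1-D variation), which is exactly the situation in which the technique works; the abstract's point is that real worldlines sampling 2-D structure violate this assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic field time series with large variance along x, moderate along y,
# and small along z, so minimum variance analysis should recover z (±) as
# the boundary normal.
n = 1000
B = np.column_stack([3.0 * rng.standard_normal(n),
                     1.0 * rng.standard_normal(n),
                     0.1 * rng.standard_normal(n)])

# Covariance matrix of the field components and its eigen-decomposition;
# eigh returns eigenvalues in ascending order.
M = np.cov(B, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(M)
normal = eigvecs[:, 0]                         # minimum-variance direction
print(normal)
```

The traditional error cone on this normal is built from the eigenvalue separation of M, which is the very formula the paper shows can understate the true error by a factor of ten when the layer is not planar.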

  1. Modelling Changes in the Unconditional Variance of Long Stock Return Series

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecast accuracy of the new model over the GJR-GARCH model at all......In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long return series. For the purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta (2011...

  2. Modelling changes in the unconditional variance of long stock return series

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    2014-01-01

    that the apparent long memory property in volatility may be interpreted as changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecasting accuracy of the new model over the GJR-GARCH model at all horizons for eight......In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long daily return series. For this purpose we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta...

  3. Effect of captivity on genetic variance for five traits in the large milkweed bug (Oncopeltus fasciatus).

    Science.gov (United States)

    Rodríguez-Clark, K M

    2004-07-01

    Understanding the changes in genetic variance which may occur as populations move from nature into captivity has been considered important when populations in captivity are used as models of wild ones. However, the inherent significance of these changes has not previously been appreciated in a conservation context: are the methods aimed at founding captive populations with gene diversity representative of natural populations likely also to capture representative quantitative genetic variation? Here, I investigate changes in heritability and a less traditional measure, evolvability, between nature and captivity for the large milkweed bug, Oncopeltus fasciatus, to address this question. Founders were collected from a 100-km transect across the north-eastern US, and five traits (wing colour, pronotum colour, wing length, early fecundity and later fecundity) were recorded for founders and for their offspring during two generations in captivity. Analyses reveal significant heritable variation for some life history and morphological traits in both environments, with comparable absolute levels of evolvability across all traits (0-30%). Randomization tests show that while changes in heritability and total phenotypic variance were highly variable, additive genetic variance and evolvability remained stable across the environmental transition in the three morphological traits (changing 1-2% or less), while they declined significantly in the two life-history traits (5-8%). Although it is unclear whether the declines were due to selection or gene-by-environment interactions (or both), such declines do not appear inevitable: captive populations with small numbers of founders may contain substantial amounts of the evolvability found in nature, at least for some traits.

  4. Investigation of Allan variance for determining noise spectral forms with application to microwave radiometry

    Science.gov (United States)

    Stanley, William D.

    1994-01-01

    An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The primary means is by discrete-time processing, and the study focused primarily on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transformation (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.
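
    A minimal discrete-time Allan variance computation, of the kind the study applies to radiometric noise, can be sketched as follows. The white-noise input is synthetic; the slope of AVAR versus averaging factor is what separates the noise components:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapped Allan variance of y at averaging factor m.

    AVAR(m) = 0.5 * mean((ybar[k+1] - ybar[k])^2) over adjacent m-sample
    block averages. For white noise AVAR falls as 1/m, flicker noise is
    roughly flat in m, and random-walk noise grows as m, which is how the
    three fluctuation components can be separated.
    """
    y = np.asarray(y, dtype=float)
    nblocks = len(y) // m
    ybar = y[: nblocks * m].reshape(nblocks, m).mean(axis=1)
    d = np.diff(ybar)
    return 0.5 * np.mean(d * d)

rng = np.random.default_rng(1)
white = rng.standard_normal(100_000)           # unit-variance white noise
# For white noise, AVAR(m) ≈ var / m:
print(allan_variance(white, 1), allan_variance(white, 100))
```

Plotting AVAR(m) on log-log axes and reading off the slopes (-1, 0, +1) is the standard way to attribute the fluctuation power to the white, flicker, and random-walk components.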

  5. Normalised Degree Variance

    OpenAIRE

    Smith, Keith; Escudero, Javier

    2018-01-01

    Finding graph indices which are unbiased to network size is of high importance both within a given field and across fields for enhancing the comparability over the cornucopia of modern network science studies as well as in subnetwork comparisons of the same network. The degree variance is an important metric for characterising graph heterogeneity and hub dominance; however, this clearly depends on the largest and smallest degrees of the graph, which depend on network size. Here, we provide an ...
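
    The raw degree variance the record starts from is straightforward to compute; the paper's size-unbiased normalisation is not reproduced here. The edge list below is a toy example:

```python
# Degree variance of a small undirected graph: the heterogeneity metric
# discussed in the record, before any normalisation for network size.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]       # toy star-like graph

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

degs = list(degree.values())
mean = sum(degs) / len(degs)
var = sum((d - mean) ** 2 for d in degs) / len(degs)
print(var)
```

A hub-dominated graph (one high-degree node, many low-degree nodes) maximises this quantity for a given size, which is why the unnormalised value cannot be compared across networks of different sizes.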

  6. Intraoperative detection of 18F-FDG-avid tissue sites using the increased probe counting efficiency of the K-alpha probe design and variance-based statistical analysis with the three-sigma criteria

    International Nuclear Information System (INIS)

    Povoski, Stephen P; Chapman, Gregg J; Murrey, Douglas A; Lee, Robert; Martin, Edward W; Hall, Nathan C

    2013-01-01

    Intraoperative detection of 18F-FDG-avid tissue sites during 18F-FDG-directed surgery can be very challenging when utilizing gamma detection probes that rely on a fixed target-to-background (T/B) ratio (ratiometric threshold) for determination of probe positivity. The purpose of our study was to evaluate the counting efficiency and the success rate of in situ intraoperative detection of 18F-FDG-avid tissue sites (using the three-sigma statistical threshold criteria method and the ratiometric threshold criteria method) for three different gamma detection probe systems. Of 58 patients undergoing 18F-FDG-directed surgery for known or suspected malignancy using gamma detection probes, we identified nine 18F-FDG-avid tissue sites (from amongst seven patients) that were seen on same-day preoperative diagnostic PET/CT imaging, and for which each 18F-FDG-avid tissue site underwent attempted in situ intraoperative detection concurrently using three gamma detection probe systems (K-alpha probe, and two commercially-available PET-probe systems), and then were subsequently surgically excised. The mean relative probe counting efficiency ratio was 6.9 (± 4.4, range 2.2–15.4) for the K-alpha probe, as compared to 1.5 (± 0.3, range 1.0–2.1) and 1.0 (± 0, range 1.0–1.0), respectively, for two commercially-available PET-probe systems (P < 0.001). Successful in situ intraoperative detection of 18F-FDG-avid tissue sites was more frequently accomplished with each of the three gamma detection probes tested by using the three-sigma statistical threshold criteria method than by using the ratiometric threshold criteria method, specifically with the three-sigma statistical threshold criteria method being significantly better than the ratiometric threshold criteria method for determining probe positivity for the K-alpha probe (P = 0.05). Our results suggest that the improved probe counting efficiency of the K-alpha probe design used in conjunction with the three

  7. Intraoperative detection of ¹⁸F-FDG-avid tissue sites using the increased probe counting efficiency of the K-alpha probe design and variance-based statistical analysis with the three-sigma criteria.

    Science.gov (United States)

    Povoski, Stephen P; Chapman, Gregg J; Murrey, Douglas A; Lee, Robert; Martin, Edward W; Hall, Nathan C

    2013-03-04

    Intraoperative detection of (18)F-FDG-avid tissue sites during (18)F-FDG-directed surgery can be very challenging when utilizing gamma detection probes that rely on a fixed target-to-background (T/B) ratio (ratiometric threshold) for determination of probe positivity. The purpose of our study was to evaluate the counting efficiency and the success rate of in situ intraoperative detection of (18)F-FDG-avid tissue sites (using the three-sigma statistical threshold criteria method and the ratiometric threshold criteria method) for three different gamma detection probe systems. Of 58 patients undergoing (18)F-FDG-directed surgery for known or suspected malignancy using gamma detection probes, we identified nine (18)F-FDG-avid tissue sites (from amongst seven patients) that were seen on same-day preoperative diagnostic PET/CT imaging, and for which each (18)F-FDG-avid tissue site underwent attempted in situ intraoperative detection concurrently using three gamma detection probe systems (K-alpha probe, and two commercially-available PET-probe systems), and then were subsequently surgically excised. The mean relative probe counting efficiency ratio was 6.9 (± 4.4, range 2.2-15.4) for the K-alpha probe, as compared to 1.5 (± 0.3, range 1.0-2.1) and 1.0 (± 0, range 1.0-1.0), respectively, for two commercially-available PET-probe systems (P < 0.001). Successful in situ intraoperative detection of (18)F-FDG-avid tissue sites was more frequently accomplished with each of the three gamma detection probes tested by using the three-sigma statistical threshold criteria method than by using the ratiometric threshold criteria method, specifically with the three-sigma statistical threshold criteria method being significantly better than the ratiometric threshold criteria method for determining probe positivity for the K-alpha probe (P = 0.05). Our results suggest that the improved probe counting efficiency of the K-alpha probe design used in conjunction with the three-sigma statistical

  8. Heterogeneous network epidemics: real-time growth, variance and extinction of infection.

    Science.gov (United States)

    Ball, Frank; House, Thomas

    2017-09-01

    Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multitype branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction by time t that are numerically fast compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution-in particular we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments of the degree distribution. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.
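
    The record's key quantitative claim is that the mean prevalence depends on the first two moments of the degree distribution and the variance in prevalence on the first three. Those moments are quick to check for a toy distribution (the probabilities below are arbitrary assumptions):

```python
# First three moments of a toy degree distribution P(degree = k).
degree_dist = {1: 0.3, 2: 0.4, 3: 0.2, 4: 0.1}

m1 = sum(k * p for k, p in degree_dist.items())        # mean degree
m2 = sum(k**2 * p for k, p in degree_dist.items())     # second moment
m3 = sum(k**3 * p for k, p in degree_dist.items())     # third moment
variance = m2 - m1**2                                  # degree variance
print(m1, m2, m3, variance)
```

In configuration-model epidemic theory the ratio of these moments also controls quantities such as the mean excess degree (m2/m1 - 1), which governs early epidemic growth.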

  9. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited number of a priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not permit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance, and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which implicitly and conceptually underlies any survey performed. (author)
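
    The survey-to-local variance ratio used here as a quality measure can be estimated from replicated sampling with a standard one-way analysis of variance. The sketch below is our illustration with invented numbers: it assumes balanced 5-fold sampling per site and separates the between-site "signal" from the within-site "noise".

```python
from statistics import mean

def signal_to_noise(samples_by_site):
    """One-way ANOVA decomposition of a biomonitoring survey:
    'local' variance = pooled within-site variance (MS_within);
    'survey' variance = between-site variance component, estimated as
    (MS_between - MS_within) / n for balanced n-fold sampling.
    Returns the survey-to-local variance ratio (signal-to-noise)."""
    k = len(samples_by_site)                 # number of sites
    n = len(samples_by_site[0])              # replicates per site (balanced)
    site_means = [mean(s) for s in samples_by_site]
    grand = mean(site_means)
    ms_within = sum(sum((x - m) ** 2 for x in s)
                    for s, m in zip(samples_by_site, site_means)) / (k * (n - 1))
    ms_between = n * sum((m - grand) ** 2 for m in site_means) / (k - 1)
    var_survey = max((ms_between - ms_within) / n, 0.0)
    return var_survey / ms_within

sites = [[10, 11, 9, 10, 10],    # site A: tight local spread
         [20, 19, 21, 20, 20],   # site B
         [30, 31, 29, 30, 30]]   # site C: large between-site signal
print(signal_to_noise(sites))
```

    A high ratio means site-to-site differences dominate local scatter; when local variance is of the same order as the survey variance, the ratio collapses toward zero and, as the abstract argues, the survey no longer supports sound judgement.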

  10. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    Science.gov (United States)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  11. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

    The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vv, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform...... probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution...... to the larger malignant nuclei. Finally, the variance in the volume distribution of nuclear volume is estimated by shape-independent estimates of the volume-weighted second moment of the nuclear volume, vv2, using both a manual and a computer-assisted approach. The working procedure for the description of 3-D...

  12. The effect of PSF spatial-variance and nonlinear transducer geometry on motion estimation from echocardiography

    Science.gov (United States)

    Tavakoli, Vahid; Amini, Amir A.

    2011-03-01

    Two-dimensional echocardiography continues to be the most widely used modality for the assessment of cardiac function due to its effectiveness, ease of use, and low cost. Echocardiographic images are derived from the mechanical interaction between the ultrasound field and the contractile heart tissue. Previously, in [6], based on B-mode echocardiographic simulations, we showed that motion estimation errors are significantly higher in shift-varying simulations when compared to shift-invariant simulations. In order to ascertain the effect of the spatial variance of the ultrasonic field point spread function (PSF) and the transducer geometry on motion estimation, in the current paper, several simple canonical cardiac motions, such as translation in the axial and horizontal directions and out-of-plane motion, were simulated and the motion estimation errors were calculated. For axial motions, the greatest angular errors occurred within the lateral regions of the image, irrespective of the motion estimation technique that was adopted. We hypothesize that the transducer geometry and the PSF spatial-variance were the underlying sources of error for the motion estimation methods. No similar conclusions could be made regarding motion estimation errors for azimuthal and out-of-plane ultrasound simulations.

  13. Stable Control of Firing Rate Mean and Variance by Dual Homeostatic Mechanisms.

    Science.gov (United States)

    Cannon, Jonathan; Miller, Paul

    2017-12-01

    Homeostatic processes that provide negative feedback to regulate neuronal firing rates are essential for normal brain function. Indeed, multiple parameters of individual neurons, including the scale of afferent synapse strengths and the densities of specific ion channels, have been observed to change on homeostatic time scales to oppose the effects of chronic changes in synaptic input. This raises the question of whether these processes are controlled by a single slow feedback variable or multiple slow variables. A single homeostatic process providing negative feedback to a neuron's firing rate naturally maintains a stable homeostatic equilibrium with a characteristic mean firing rate; but the conditions under which multiple slow feedbacks produce a stable homeostatic equilibrium have not yet been explored. Here we study a highly general model of homeostatic firing rate control in which two slow variables provide negative feedback to drive a firing rate toward two different target rates. Using dynamical systems techniques, we show that such a control system can be used to stably maintain a neuron's characteristic firing rate mean and variance in the face of perturbations, and we derive conditions under which this happens. We also derive expressions that clarify the relationship between the homeostatic firing rate targets and the resulting stable firing rate mean and variance. We provide specific examples of neuronal systems that can be effectively regulated by dual homeostasis. One of these examples is a recurrent excitatory network, which a dual feedback system can robustly tune to serve as an integrator.

  14. A note on the misuses of the variance test in meteorological studies

    Science.gov (United States)

    Hazra, Arnab; Bhattacharya, Sourabh; Banik, Pabitra; Bhattacharya, Sabyasachi

    2017-12-01

    Stochastic modeling of rainfall data is an important area in meteorology. The gamma distribution is a widely used probability model for non-zero rainfall. Typically the choice of the distribution for such meteorological studies is based on two goodness-of-fit tests: Pearson's Chi-square test and the Kolmogorov-Smirnov test. Inspired by the index of dispersion introduced by Fisher (Statistical methods for research workers. Hafner Publishing Company Inc., New York, 1925), Mooley (Mon Weather Rev 101:160-176, 1973) proposed the variance test as a goodness-of-fit measure in this context, and a number of researchers have implemented it since then. We show that the asymptotic distribution of the test statistic for the variance test is generally not comparable to any central Chi-square distribution and hence the test is erroneous. We also describe a method for checking the validity of the asymptotic distribution for a class of distributions. We implement the erroneous test on simulated as well as real datasets and demonstrate how it leads to wrong conclusions.
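
    A quick simulation (our illustration, not taken from the paper) shows why the variance test is problematic: for gamma-distributed data, Fisher's index of dispersion is not distributed as a central Chi-square with n - 1 degrees of freedom unless the variance-to-mean ratio happens to equal one.

```python
import random
from statistics import mean, variance

def index_of_dispersion(sample):
    # Fisher's index of dispersion: (n - 1) * s^2 / xbar. For Poisson-like
    # data (variance roughly equal to the mean) this is approximately
    # chi-square with n - 1 df; for a general gamma distribution it is not.
    n = len(sample)
    return (n - 1) * variance(sample) / mean(sample)

random.seed(1)
n, shape, scale = 50, 2.0, 5.0          # gamma: variance / mean = scale = 5
sims = [index_of_dispersion([random.gammavariate(shape, scale)
                             for _ in range(n)])
        for _ in range(2000)]
# The chi-square(n - 1) reference distribution has mean 49; the simulated
# statistic is instead centred near (n - 1) * (variance / mean).
print(mean(sims))
```

    Comparing the simulated statistic against chi-square(49) critical values would reject essentially every sample from a perfectly gamma-distributed population, which is the kind of wrong conclusion the abstract warns about.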

  15. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    Science.gov (United States)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  16. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)

  17. Model determination in a case of heterogeneity of variance using sampling techniques.

    Science.gov (United States)

    Varona, L; Moreno, C; Garcia-Cortes, L A; Altarriba, J

    1997-01-12

    A sampling-based model determination procedure is described for a case of heterogeneity of variance. The procedure makes use of the predictive distribution of each datum given the rest of the data and the structure of the assumed model. The computation of these predictive distributions is carried out using a Gibbs sampling procedure. The criterion used to compare models is the mean square error between the expectations of the predictive distributions and the real data. The procedure has been applied to a data set of weight at 210 days in the Spanish Pirenaica beef cattle breed. Three proposed models were compared: (a) Single Trait Animal Model; (b) Heterogeneous Variance Animal Model; and (c) Multiple Trait Animal Model. After applying the procedure, the best-fitting model was the Heterogeneous Variance Animal Model. This result is probably due to a compromise between the complexity of the model and the amount of available information. The estimated heritabilities under the preferred model were 0.489 ± 0.076 for males and 0.331 ± 0.082 for females.

  18. Detecting parent of origin and dominant QTL in a two-generation commercial poultry pedigree using variance component methodology

    Directory of Open Access Journals (Sweden)

    Haley Christopher S

    2009-01-01

    Introduction Variance component QTL methodology was used to analyse three candidate regions on chicken chromosomes 1, 4 and 5 for dominant and parent-of-origin QTL effects. Data were available for bodyweight and conformation score measured at 40 days from a two-generation commercial broiler dam line. One hundred dams were nested in 46 sires with phenotypes and genotypes on 2708 offspring. Linear models were constructed to simultaneously estimate fixed, polygenic and QTL effects. Different genetic models were compared using likelihood ratio test statistics derived from the comparison of full with reduced or null models. Empirical thresholds were derived by permutation analysis. Results Dominant QTL were found for bodyweight on chicken chromosome 4 and for bodyweight and conformation score on chicken chromosome 5. Suggestive evidence for a maternally expressed QTL for bodyweight and conformation score was found on chromosome 1 in a region corresponding to orthologous imprinted regions in the human and mouse. Conclusion Initial results suggest that variance component analysis can be applied within commercial populations for the direct detection of segregating dominant and parent-of-origin effects.
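
    The empirical-threshold step can be sketched generically. In this illustration (ours, not the authors' code), `statistic` stands in for the full-versus-reduced likelihood-ratio statistic, and the toy 0/1 genotype data and squared mean-difference statistic are invented for the example.

```python
import random
from statistics import mean

def permutation_threshold(genotypes, phenotypes, statistic,
                          n_perm=1000, alpha=0.05):
    """Empirical significance threshold by permutation analysis: shuffle
    phenotypes relative to genotypes (breaking any true association),
    recompute the test statistic each time, and take the (1 - alpha)
    quantile of the resulting null distribution."""
    perm = list(phenotypes)
    null = []
    for _ in range(n_perm):
        random.shuffle(perm)
        null.append(statistic(genotypes, perm))
    null.sort()
    return null[int((1 - alpha) * n_perm) - 1]

# Toy example: two genotype classes with a real shift in phenotype.
random.seed(2)
geno = [0] * 50 + [1] * 50
pheno = ([random.gauss(0.0, 1.0) for _ in range(50)] +
         [random.gauss(1.0, 1.0) for _ in range(50)])

def sq_mean_diff(g, p):
    a = [x for x, gg in zip(p, g) if gg == 1]
    b = [x for x, gg in zip(p, g) if gg == 0]
    return (mean(a) - mean(b)) ** 2

observed = sq_mean_diff(geno, pheno)
threshold = permutation_threshold(geno, pheno, sq_mean_diff)
print(observed > threshold)   # a real effect usually clears the threshold
```

    Because the threshold is computed from the data's own permutation null distribution, it controls the test-wise error rate without relying on the asymptotic distribution of the likelihood-ratio statistic.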

  19. Spatio-temporal variance and meteorological drivers of the urban heat island in a European city

    Science.gov (United States)

    Arnds, Daniela; Böhner, Jürgen; Bechtel, Benjamin

    2017-04-01

    Urban areas are especially vulnerable to high temperatures, which will intensify in the future due to climate change. Therefore, both good knowledge about the local urban climate as well as simple and robust methods for its projection are needed. This study has analysed the spatio-temporal variance of the mean nocturnal urban heat island (UHI) of Hamburg, with observations from 40 stations from different suppliers. The UHI showed a radial gradient with about 2 K in the centre mostly corresponding to the urban densities. Temporarily, it has a strong seasonal cycle with the highest values between April and September and an inter-annual variability of approximately 0.5 K. Further, synoptic meteorological drivers of the UHI were analysed, which generally is most pronounced under calm and cloud-free conditions. Considered were meteorological parameters such as relative humidity, wind speed, cloud cover and objective weather types. For the stations with the highest UHI intensities, up to 68.7 % of the variance could be explained by seasonal empirical models and even up to 76.6 % by monthly models.

  20. Increased genetic variance of BMI with a higher prevalence of obesity.

    Directory of Open Access Journals (Sweden)

    Benjamin Rokholm

    Full Text Available BACKGROUND AND OBJECTIVES: There is no doubt that the dramatic worldwide increase in obesity prevalence is due to changes in environmental factors. However, twin studies suggest that genetic differences are responsible for the major part of the variation in body mass index (BMI and other measures of body fatness within populations. Several recent studies suggest that the genetic effects on adiposity may be stronger when combined with presumed risk factors for obesity. We tested the hypothesis that a higher prevalence of obesity and overweight and a higher BMI mean is associated with a larger genetic variation in BMI. METHODS: The data consisted of self-reported height and weight from two Danish twin surveys in 1994 and 2002. A total of 15,017 monozygotic and dizygotic twin pairs were divided into subgroups by year of birth (from 1931 through 1982 and sex. The genetic and environmental variance components of BMI were calculated for each subgroup using the classical twin design. Likewise, the prevalence of obesity, prevalence of overweight and the mean of the BMI distribution was calculated for each subgroup and tested as explanatory variables in a random effects meta-regression model with the square root of the additive genetic variance (equal to the standard deviation as the dependent variable. RESULTS: The size of additive genetic variation was positively and significantly associated with obesity prevalence (p = 0.001 and the mean of the BMI distribution (p = 0.015. The association with prevalence of overweight was positive but not statistically significant (p = 0.177. CONCLUSION: The results suggest that the genetic variation in BMI increases as the prevalence of obesity, prevalence of overweight and the BMI mean increases. The findings suggest that the genes related to body fatness are expressed more aggressively under the influence of an obesity-promoting environment.