WorldWideScience

Sample records for variance analysis statewide

  1. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it; thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.

  2. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  3. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.

  4. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
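
    The three-step recipe in this abstract maps directly onto SciPy's F distributions. A minimal sketch for a single F-approximated statistic, assuming the degrees of freedom and the noncentrality parameter have already been derived from the effect size via the MANOVA approximations described in the paper (the numbers below are hypothetical):

    ```python
    from scipy.stats import f, ncf

    def f_test_power(df1, df2, lam, alpha=0.05):
        """Power of an F-approximated test with noncentrality parameter lam."""
        f_crit = f.ppf(1 - alpha, df1, df2)      # step 1: critical F value
        return ncf.sf(f_crit, df1, df2, lam)     # step 3: P(F > f_crit) under the noncentral F

    # Hypothetical MANOVA-derived degrees of freedom and noncentrality (step 2)
    print(f_test_power(df1=6, df2=114, lam=12.0))
    ```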

  5. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  6. Gene set analysis using variance component tests.

    Science.gov (United States)

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint alteration of numerous genes. Genes often coordinate as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects that assumes a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-squared approximation. We show using simulations that the type I error is protected under different choices of working covariance matrix and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which the correlation among genes in a gene set is ignored. Using both simulated data and a published diabetes dataset, we show that our test outperforms two commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework that directly models the interdependence of the expression values in a gene set using a working covariance; TEGS outperforms GSEA and the global test in both simulations and a diabetes microarray dataset.
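
    The following sketch illustrates the general idea of a quadratic, variance-component-style gene set statistic with a working covariance and a permutation p-value. It is not the authors' TEGS implementation; the statistic, the identity working covariance, and the toy data are all illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gene_set_stat(x, Y, W):
        """Quadratic score-type statistic q = s' W^{-1} s, where s holds the
        per-gene covariances between (centered) exposure x and expression Y."""
        s = Y.T @ x
        return float(s @ np.linalg.solve(W, s))

    def permutation_pvalue(x, Y, W, n_perm=1000):
        x = x - x.mean()
        Yc = Y - Y.mean(axis=0)
        q_obs = gene_set_stat(x, Yc, W)
        q_null = [gene_set_stat(rng.permutation(x), Yc, W) for _ in range(n_perm)]
        return (1 + sum(q >= q_obs for q in q_null)) / (n_perm + 1)

    # Toy data: 40 samples, 10 genes, a weak exposure effect on every gene
    n, p = 40, 10
    x = rng.integers(0, 2, n).astype(float)
    Y = rng.normal(size=(n, p)) + 0.5 * x[:, None]
    print(permutation_pvalue(x, Y, W=np.eye(p)))   # identity = working independence
    ```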

  7. Variance analysis refines overhead cost control.

    Science.gov (United States)

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.

  8. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  9. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  10. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  11. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Estimates of poverty and inequality are often based on a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales, and which allows one to derive variance estimates for the results of the sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.

  12. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  13. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances

    Science.gov (United States)

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2014-01-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene–gene and gene–environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity. PMID:23921533
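
    A minimal sketch of the per-study building block and one simple way to combine studies. Fisher's method stands in here for the paper's combination rule (which avoids exchanging individual-level data); the genotype coding and the simulated cohorts are hypothetical:

    ```python
    import numpy as np
    from scipy.stats import levene, combine_pvalues

    rng = np.random.default_rng(1)

    def study_levene_pvalue(genotype, trait):
        """Levene's test for equal trait variance across genotype groups 0/1/2."""
        groups = [trait[genotype == g] for g in np.unique(genotype)]
        return levene(*groups, center='median').pvalue

    # Two hypothetical cohorts with mild variance heterogeneity at a SNP
    pvals = []
    for n in (3000, 2900):
        g = rng.integers(0, 3, n)
        y = rng.normal(scale=1.0 + 0.05 * g, size=n)
        pvals.append(study_levene_pvalue(g, y))

    stat, p_meta = combine_pvalues(pvals, method='fisher')
    print(p_meta)
    ```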

  14. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  15. Cost-effectiveness analysis of a statewide media campaign to promote adolescent physical activity.

    Science.gov (United States)

    Peterson, Michael; Chandlee, Margaret; Abraham, Avron

    2008-10-01

    A cost-effectiveness analysis of a statewide social marketing campaign was performed using a statewide surveillance survey distributed to 6th through 12th graders, media production and placement costs, and 2000 census data. Exposure to all three advertisements had the highest impact on both intent and behavior with 65.6% of the respondents considering becoming more active and 58.3% reporting becoming more active. Average cost of the entire campaign was $4.01 per person to see an ad, $7.35 per person to consider being more active, and $8.87 per person to actually become more active, with billboards yielding the most positive cost-effectiveness. Findings highlight market research as an essential part of social marketing campaigns and the importance of using multiple marketing modalities to enhance cost-effectiveness and impact.
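
    The reported figures are simple ratios of campaign cost to the number of people reaching each stage. A toy illustration; the inputs below are hypothetical values back-solved so the ratios match the abstract's $4.01/$7.35/$8.87, since the actual cost and population figures are not given here:

    ```python
    # Hypothetical inputs, back-solved to reproduce the reported ratios
    campaign_cost = 500_000.0        # total media production + placement ($)
    population = 124_700             # adolescents in the target market
    stages = {
        "saw an ad": 1.0,            # proportion exposed to an advertisement
        "considered being more active": 0.5455,
        "became more active": 0.452,
    }

    for label, prop in stages.items():
        cost = campaign_cost / (population * prop)
        print(f"cost per person who {label}: ${cost:.2f}")
    ```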

  16. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the relevant role played by a modelling approach for MBRs that takes biological and physical processes into account simultaneously.
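
    The paper uses Extended-FAST; as a simpler illustration of the same variance-based idea, the sketch below estimates first-order Sobol' indices with a pick-freeze Monte Carlo estimator on the Ishigami benchmark (a stand-in for the MBR model, which is not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def ishigami(x, a=7.0, b=0.1):
        """Standard variance-based SA benchmark (stand-in for the MBR model)."""
        return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

    def first_order_sobol(model, d, n=100_000):
        """Pick-freeze estimator of S_i = V[E(Y|X_i)] / V(Y)."""
        A = rng.uniform(-np.pi, np.pi, (n, d))
        B = rng.uniform(-np.pi, np.pi, (n, d))
        yA, yB = model(A), model(B)
        S = np.empty(d)
        for i in range(d):
            ABi = B.copy()
            ABi[:, i] = A[:, i]                  # "freeze" column i from A
            S[i] = np.mean(yA * (model(ABi) - yB)) / yA.var()
        return S

    print(first_order_sobol(ishigami, 3))        # analytic: ~0.314, 0.442, 0.0
    ```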

  17. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.

  18. A guide to SPSS for analysis of variance

    CERN Document Server

    Levine, Gustav

    2013-01-01

    This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce

  19. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index. It is used for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating the model output variance due to reduced ranges of the model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
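
    A rough sketch of the underlying single-sample idea: with one set of Monte Carlo samples, restricting an input to part of its range and recomputing the output mean and variance yields ratio-function-style regional sensitivities. The definitions below are simplified illustrations, not the paper's revised functions:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # One set of samples, as in the paper's single-sample Monte Carlo procedure
    n = 200_000
    X = rng.uniform(-np.pi, np.pi, (n, 3))
    Y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1])**2 + 0.1 * X[:, 2]**4 * np.sin(X[:, 0])

    def regional_ratios(X, Y, i, keep=0.5):
        """Mean shift and variance ratio of Y when input i is restricted to the
        central `keep` fraction of its range (illustrative definitions only)."""
        lo, hi = np.quantile(X[:, i], [(1 - keep) / 2, (1 + keep) / 2])
        mask = (X[:, i] >= lo) & (X[:, i] <= hi)
        return Y[mask].mean() - Y.mean(), Y[mask].var() / Y.var()

    for i in range(3):
        dm, vr = regional_ratios(X, Y, i)
        print(f"x{i+1}: mean shift {dm:+.3f}, variance ratio {vr:.3f}")
    ```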

  20. Batch variation between branchial cell cultures: An analysis of variance

    DEFF Research Database (Denmark)

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

    We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed ... and introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when ... the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results, when we do not know a priori that something went wrong. The ANOVA is a very useful
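
    A sketch of the central move, treating batch as one factor of an expanded factorial ANOVA, assuming a hypothetical balanced layout and using statsmodels (the factor names and effect sizes are invented for illustration):

    ```python
    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)

    # Hypothetical balanced layout: 2 batches x 2 media x 2 sides, 3 inserts per cell
    rows = []
    for b, m, s in itertools.product([0, 1], repeat=3):
        for _ in range(3):
            y = 10 + 2.0 * b + 1.5 * m * s + rng.normal(scale=0.5)  # batch effect + interaction
            rows.append(dict(batch=b, medium=m, side=s, incorporation=y))
    df = pd.DataFrame(rows)

    # Batch enters as one factor of the expanded three-way ANOVA
    model = smf.ols('incorporation ~ C(batch) * C(medium) * C(side)', data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))
    ```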

  21. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well-established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called 'law of propagation of uncertainties' have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities, and if these quantities are assumed to be statistically independent, sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
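
    The linear-model point can be made concrete in a few lines: with independent inputs, the squared terms of the law of propagation of uncertainties are exactly the first-order variance contributions, so the sensitivity ranking is already implied. A minimal sketch with hypothetical coefficients and uncertainties:

    ```python
    import numpy as np

    # Linear measurement model y = c1*x1 + c2*x2 + c3*x3 (coefficients hypothetical)
    c = np.array([2.0, -1.0, 0.5])
    u = np.array([0.10, 0.20, 0.40])      # standard uncertainties of the inputs

    # Law of propagation of uncertainties: u_y^2 = sum_i (c_i * u_i)^2
    terms = (c * u)**2
    u_y = np.sqrt(terms.sum())

    # For a linear model with independent inputs these terms ARE the
    # first-order variance contributions, so S_i = terms_i / u_y^2
    S = terms / terms.sum()
    print(f"u_y = {u_y:.3f}, first-order indices = {np.round(S, 3)}")
    ```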

  22. Advanced methods of analysis variance on scenarios of nuclear prospective

    International Nuclear Information System (INIS)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-01-01

    Traditional variance propagation techniques are not very reliable when uncertainties reach relative values of around 100%; in such cases, less conventional methods are used instead, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
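
    A minimal sketch of the Monte Carlo alternative with Beta-distributed inputs, as named in the abstract; the prospective model and its parameters are hypothetical:

    ```python
    import numpy as np
    from scipy.stats import beta

    rng = np.random.default_rng(5)

    # Hypothetical prospective quantity: product of two uncertain factors.
    # Inputs with ~100% relative uncertainty are poorly served by linear
    # variance propagation, hence sampling instead.
    x1 = beta(2, 5).rvs(100_000, random_state=rng)   # skewed input on [0, 1]
    x2 = beta(2, 2).rvs(100_000, random_state=rng)
    y = x1 * x2

    print(f"mean {y.mean():.4f}, sd {y.std():.4f}, "
          f"95% interval [{np.quantile(y, 0.025):.4f}, {np.quantile(y, 0.975):.4f}]")
    ```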

  23. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.

    2009-01-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications.

  24. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  25. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  26. A new variance stabilizing transformation for gene expression data analysis.

    Science.gov (United States)

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
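
    For reference, a common form of the generalized logarithm mentioned above is glog(x) = log((x + sqrt(x^2 + c^2))/2). A small sketch, assuming this form and a hand-picked constant c (the paper's strategy for selecting the family member is not reproduced):

    ```python
    import numpy as np

    def glog(x, c):
        """Generalized logarithm: ~log2(x) for x >> c, roughly linear near zero,
        and defined for all real x (unlike the plain logarithm)."""
        return np.log2((x + np.sqrt(x**2 + c**2)) / 2.0)

    rng = np.random.default_rng(6)

    # Simulated intensities whose standard deviation grows with the mean
    mu = np.sort(rng.uniform(10, 10_000, size=1000))
    x = rng.normal(loc=mu, scale=0.2 * mu + 50)

    # After transformation, spread around the trend is roughly mean-independent
    resid = glog(x, c=250) - glog(mu, c=250)
    print(np.std(resid[:500]), np.std(resid[500:]))  # similar low/high-mean spreads
    ```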

  27. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)

  28. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
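
    A minimal sketch of the computation, assuming a hypothetical 1-D transect of incidence data; NumPy's sliding windows make the VMWA a two-liner per window size:

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    rng = np.random.default_rng(7)

    # Hypothetical 1-D transect: diseased patches of ~8 contiguous quadrats
    n = 512
    patches = np.repeat(rng.random(n // 8) < 0.3, 8).astype(float)
    data = patches + rng.normal(scale=0.2, size=n)

    # Variance of moving window averages as a function of window size;
    # the shape of this curve is what the VMWA method reads cluster size from
    for w in (2, 4, 8, 16, 32):
        vmwa = sliding_window_view(data, w).mean(axis=1).var()
        print(f"window {w:2d}: VMWA = {vmwa:.4f}")
    ```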

  29. Mean-Variance Analysis in a Multiperiod Setting

    OpenAIRE

    Frauendorfer, Karl; Siede, Heiko

    1997-01-01

    Similar to the classical Markowitz approach it is possible to apply a mean-variance criterion to a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show equivalent structural properties as...

  30. Preliminary concept for statewide intercity bus and rail transit system: priority corridor ranking and analysis.

    Science.gov (United States)

    2009-03-01

    This product summarizes the preliminary concept and priority corridors for development of a potential statewide intercity bus and rail network. The concept is based upon the results of Tasks 1 through 5 of Texas Department of Transportation Proje...

  31. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  32. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
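
    For the balanced one-factor random-effects layout assumed in this note, the classical expected-mean-squares estimators can be sketched as follows (patient counts, fraction numbers, and the true components are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Balanced one-factor layout: m patients, k fractions each (hypothetical)
    m, k = 20, 5
    true_sys, true_rand = 2.0, 3.0                       # mm
    patient_mean = rng.normal(scale=true_sys, size=m)
    err = patient_mean[:, None] + rng.normal(scale=true_rand, size=(m, k))

    msb = k * ((err.mean(axis=1) - err.mean())**2).sum() / (m - 1)  # between patients
    msw = ((err - err.mean(axis=1, keepdims=True))**2).sum() / (m * (k - 1))

    sigma_rand = np.sqrt(msw)
    sigma_sys = np.sqrt(max((msb - msw) / k, 0.0))       # E[MSB] = k*sig_sys^2 + sig_rand^2
    print(f"systematic ~ {sigma_sys:.2f} mm, random ~ {sigma_rand:.2f} mm")
    ```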

  33. Variance decomposition-based sensitivity analysis via neural networks

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo

    2003-01-01

    This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, could be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.

  34. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    Science.gov (United States)

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  35. Analysis of Gastric Lavage Reported to a Statewide Poison Control System.

    Science.gov (United States)

    Donkor, Jimmy; Armenian, Patil; Hartman, Isaac N; Vohra, Rais

    2016-10-01

    As decontamination trends have evolved, gastric lavage (GL) has become a rare procedure. The current information regarding use, outcomes, and complications of GL could help refine indications for this invasive procedure. We sought to determine case type, location, and complications of GL cases reported to a statewide poison control system. This is a retrospective review of the California Poison Control System (CPCS) records from 2009 to 2012. Specific substances ingested, results and complications of GL, referring hospital ZIP codes, and outcomes were examined. Nine hundred twenty-three patients who underwent GL were included in the final analysis, ranging in age from 9 months to 88 years. There were 381 single and 540 multiple substance ingestions, with pill fragment return in 27%. Five hundred thirty-six GLs were performed with CPCS recommendation, while 387 were performed without. Complications were reported for 20 cases. There were 5 deaths, all after multiple ingestions. Among survivors, 37% were released from the emergency department, 13% were admitted to hospital wards, and 48% were admitted to intensive care units. The most commonly ingested substances were nontricyclic antidepressant psychotropics (n = 313), benzodiazepines (n = 233), acetaminophen (n = 191), nonsteroidal anti-inflammatory drugs (n = 107), diphenhydramine (n = 70), tricyclic antidepressants (n = 45), aspirin (n = 45), lithium (n = 36), and antifreeze (n = 10). The geographic distribution was clustered near regions of high population density, with a few exceptions. Toxic agents for which GL was performed reflected a broad spectrum of potential hazards, some of which are not life-threatening or have effective treatments. Continuing emergency physician and poison center staff education is required to assist in patient selection.

  36. A Large-Scale Analysis of Variance in Written Language.

    Science.gov (United States)

    Johns, Brendan T; Jamieson, Randall K

    2018-01-22

    The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words based on the statistical redundancies contained within natural language (e.g., Griffiths, Steyvers, & Tenenbaum; Jones & Mewhort; Landauer & Dumais; Mikolov, Sutskever, Chen, Corrado, & Dean). The models treat knowledge as an interaction of processing mechanisms and the structure of language experience. But language experience is often treated agnostically. We report a distributional semantic analysis that shows written language in fiction books varies appreciably between books from different genres, books from the same genre, and even books written by the same author. Given that current theories assume that word knowledge reflects an interaction between processing mechanisms and the language environment, the analysis shows the need for the field to engage in a more deliberate consideration and curation of the corpora used in computational studies of natural language processing.

  37. Analysis of force variance for a continuous miner drum using the Design of Experiments method

    Energy Technology Data Exchange (ETDEWEB)

    S. Somanchi; V.J. Kecojevic; C.J. Bise [Pennsylvania State University, University Park, PA (United States)

    2006-06-15

    Continuous miners (CMs) are excavating machines designed to extract a variety of minerals by underground mining. The variance in force experienced by the cutting drum is a very important aspect that must be considered during drum design. A uniform variance essentially means that an equal load is applied on the individual cutting bits and this, in turn, enables better cutting action, greater efficiency, and longer bit and machine life. There are certain input parameters used in the drum design whose exact relationships with force variance are not clearly understood. This paper determines (1) the factors that have a significant effect on the force variance of the drum and (2) the values that can be assigned to these factors to minimize the force variance. A computer program, Continuous Miner Drum (CMD), was developed in collaboration with Kennametal, Inc. to facilitate the mechanical design of CM drums. CMD also facilitated data collection for determining significant factors affecting force variance. Six input parameters, including centre pitch, outer pitch, balance angle, shift angle, set angle and relative angle, were tested at two levels. Trials were configured using the Design of Experiments (DoE) method, where a 2^6 full-factorial experimental design was selected to investigate the effect of these factors on force variance. Results from the analysis show that all parameters except balance angle, as well as their interactions, significantly affect the force variance.
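
    A sketch of the 2^6 full-factorial workflow: generate the coded design, evaluate a response for each run, and estimate main effects as the difference of means at the high and low levels. The response function below is a hypothetical stand-in for CMD's force-variance output:

    ```python
    import itertools
    import numpy as np

    factors = ["centre_pitch", "outer_pitch", "balance_angle",
               "shift_angle", "set_angle", "relative_angle"]

    # 2^6 full-factorial design in coded units (-1, +1): 64 runs
    design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

    def force_variance(run):
        """Hypothetical response standing in for the CMD force-variance output."""
        cp, op, ba, sh, se, ra = run
        return 10 + 2.0 * cp + 1.2 * op + 0.8 * sh * se + 0.01 * ba

    y = np.array([force_variance(run) for run in design])

    # Main effect of each factor = mean(y at +1) - mean(y at -1)
    for name, col in zip(factors, design.T):
        effect = y[col == 1].mean() - y[col == -1].mean()
        print(f"{name:15s} main effect: {effect:+.3f}")
    ```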

  38. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean.

  39. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and to make clear how those distribution parameters influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol's variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the various orders of variance contributions into expectations via a kernel function, the proposed main and total sensitivity indices can be seen as a by-product of Sobol's variance-based sensitivity analysis, requiring no additional output evaluations. Since Sobol's variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative-based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method.

  40. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data.

  41. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Han, Beom Seok; Kim, Chang Hyo

    2003-01-01

    The Monte Carlo (MC) power method based on a fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and the fission reaction rate estimates. Because of the biases, the apparent variances of keff and the fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of keff and the fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods instead of the apparent variances provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of the individual pins of the system.

  42. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an "analysis of variance with components of variance estimation". This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution, violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, "Studentizing" or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean.

  43. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    Science.gov (United States)

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.

  44. Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.

    Science.gov (United States)

    Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong

    2018-03-01

    Clinical pathways (CPs) are popular healthcare management tools to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. The flexible semantics of the business process model and notation (BPMN) language has been shown to be useful for the modeling and analysis of complex protocols. However, in practical cases one may want to exploit the fact that CPs often have the form of task-time matrices. This paper presents a new method for parsing complex BPMN models and aligning traces to the models heuristically. A case study on variance analysis is undertaken, where a CP from practice and two large sets of patient data from an electronic medical record (EMR) database are used. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data is feasible, whereas that was not the case for the existing analysis techniques. We also provide meaningful insights for further improvement.

  45. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression.

  46. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
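
    The computation the article describes needs only the group sizes, means, and standard deviations. A sketch of the standard summary-statistics route (the three-group data below are hypothetical):

    ```python
    import numpy as np
    from scipy.stats import f

    def anova_from_summary(n, mean, sd):
        """One-way ANOVA from per-group sizes, means, and standard deviations."""
        n, mean, sd = map(np.asarray, (n, mean, sd))
        grand = (n * mean).sum() / n.sum()
        ss_between = (n * (mean - grand)**2).sum()
        ss_within = ((n - 1) * sd**2).sum()
        df_b, df_w = len(n) - 1, n.sum() - len(n)
        F = (ss_between / df_b) / (ss_within / df_w)
        return F, f.sf(F, df_b, df_w)

    # Hypothetical three-group summary data
    F, p = anova_from_summary(n=[12, 15, 11], mean=[5.1, 6.0, 4.7], sd=[1.2, 1.0, 1.4])
    print(f"F = {F:.3f}, p = {p:.4f}")
    ```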

  47. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    Science.gov (United States)

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
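
    A minimal sketch of the re-expression step using SciPy's maximum-likelihood Box-Cox; the paper instead infers the transformation parameter within a Bayesian genetic model, so this only illustrates the idea of analysing a skewed trait in a transformed scale (the trait data are hypothetical):

    ```python
    import numpy as np
    from scipy.stats import boxcox, skew

    rng = np.random.default_rng(9)

    # Hypothetical positively skewed trait
    y = rng.lognormal(mean=2.2, sigma=0.35, size=2000)

    y_t, lam = boxcox(y)                 # maximum-likelihood Box-Cox parameter
    print(f"lambda = {lam:.2f}; skewness before = {skew(y):.2f}, after = {skew(y_t):.2f}")
    ```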

  48. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for maxima streamflow. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers must strictly adhere to rules from extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
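
    A sketch of the annual-maxima branch of such an analysis: fit a GEV distribution to yearly maxima, read off return levels, and use a parametric bootstrap as one simple view of the fitting variance. Data and sample sizes are hypothetical; note SciPy's shape convention c = -ξ:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(10)

    # Hypothetical record of 60 annual streamflow maxima (m^3/s)
    maxima = genextreme.rvs(-0.1, loc=300, scale=80, size=60, random_state=rng)

    c, loc, scale = genextreme.fit(maxima)
    for T in (2, 20, 100):
        level = genextreme.ppf(1 - 1 / T, c, loc, scale)   # T-year return level
        print(f"{T:3d}-year event: {level:7.1f} m^3/s")

    # Parametric bootstrap: refit resampled records to see the spread
    boot = []
    for _ in range(200):
        resample = genextreme.rvs(c, loc=loc, scale=scale, size=60, random_state=rng)
        boot.append(genextreme.ppf(0.99, *genextreme.fit(resample)))
    print(f"sd of the 100-year level: {np.std(boot):.1f} m^3/s")
    ```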

  49. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    International Nuclear Information System (INIS)

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2012-01-01

    This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can use multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory.

  10. Variance of a potential of mean force obtained using the weighted histogram analysis method.

    Science.gov (United States)

    Cukier, Robert I

    2013-11-27

    A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.
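
    A much-simplified analogue of the variance question: for a PMF estimated from a single unbiased equilibrium histogram, the delta method gives Var[-kT ln p̂] ≈ (kT)²(1 − p)/(Np) per bin. The sketch below is a toy under that assumption, not Cukier's WHAM variance expression.

```python
import numpy as np

kT = 1.0                               # energies in units of kT
rng = np.random.default_rng(2)
samples = rng.normal(0.0, 1.0, 5000)   # stand-in equilibrium RC samples

edges = np.linspace(-3, 3, 31)
counts, _ = np.histogram(samples, bins=edges)
N = counts.sum()
p = counts / N

mask = counts > 0
pmf = -kT * np.log(p[mask])                     # F(x) = -kT ln p(x)
# Delta method: Var[-kT ln p_hat] ~ (kT)^2 (1 - p) / (N p) per multinomial bin
pmf_var = kT**2 * (1 - p[mask]) / (N * p[mask])
centers = 0.5 * (edges[:-1] + edges[1:])[mask]
for x, f, v in list(zip(centers, pmf, pmf_var))[:5]:
    print(f"x={x:+.2f}  F={f:5.2f} kT  sd={np.sqrt(v):.3f} kT")
```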

  11. Analysis of Gene Expression Variance in Schizophrenia Using Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Anna A. Igolkina

    2018-06-01

    Full Text Available Schizophrenia (SCZ) is a psychiatric disorder of unknown etiology. There is evidence suggesting that aberrations in neurodevelopment are a significant attribute of schizophrenia pathogenesis and progression. To identify biologically relevant molecular abnormalities affecting neurodevelopment in SCZ, we used cultured neural progenitor cells derived from olfactory neuroepithelium (CNON) cells. Here, we tested the hypothesis that variance in gene expression differs between individuals from SCZ and control groups. In CNON cells, variance in gene expression was significantly higher in SCZ samples in comparison with control samples. Variance in gene expression was enriched in five molecular pathways: serine biosynthesis, PI3K-Akt, MAPK, neurotrophin and focal adhesion. More than 14% of variance in disease status was explained within the logistic regression model (C-value = 0.70) by predictors accounting for gene expression in 69 genes from these five pathways. Structural equation modeling (SEM) was applied to explore how the structure of these five pathways was altered between SCZ patients and controls. Four out of five pathways showed differences in the estimated relationships among genes: between KRAS and NF1, and KRAS and SOS1 in the MAPK pathway; between PSPH and SHMT2 in serine biosynthesis; between AKT3 and TSC2 in the PI3K-Akt signaling pathway; and between CRK and RAPGEF1 in the focal adhesion pathway. Our analysis provides evidence that variance in gene expression is an important characteristic of SCZ, and SEM is a promising method for uncovering altered relationships between specific genes thus suggesting affected gene regulation associated with the disease. We identified altered gene-gene interactions in pathways enriched for genes with increased variance in expression in SCZ. These pathways and loci were previously implicated in SCZ, providing further support for the hypothesis that gene expression variance plays an important role in the etiology
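
    Testing for unequal gene-expression variance between groups can be sketched with a gene-wise Brown-Forsythe (median-centered Levene) test; the paper's SEM analysis is not reproduced here, and the data below are synthetic.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(3)
n_genes, n_scz, n_ctl = 200, 30, 30
# Toy expression matrices; the SCZ group gets inflated variance for some genes
ctl = rng.normal(0, 1.0, (n_genes, n_ctl))
scz = rng.normal(0, 1.0, (n_genes, n_scz))
scz[:50] *= 1.8                       # first 50 genes: higher variance in SCZ

pvals = np.array([levene(scz[g], ctl[g], center="median").pvalue
                  for g in range(n_genes)])
print(f"genes with unequal variance at p<0.05: {(pvals < 0.05).sum()}")
```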

  12. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then proposed on that basis. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which guarantees the convergence condition of the space-partition approach well. Furthermore, a new interpretation of the idea of partitioning is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
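
    The space-partition idea can be sketched as follows: one set of model runs is sorted along each input and split into successive non-overlapping intervals, and the variance of the interval means estimates Var(E[Y|Xi]). The snippet uses the Ishigami test function as a standard benchmark; the paper's exact estimator and convergence safeguards are not reproduced.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Standard sensitivity-analysis test function on [-pi, pi]^3."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(4)
n, n_bins = 100_000, 50
X = rng.uniform(-np.pi, np.pi, (n, 3))
Y = ishigami(X)
var_y = Y.var()

for i in range(3):
    order = np.argsort(X[:, i])                 # partition along input i
    chunks = np.array_split(Y[order], n_bins)   # successive non-overlapping intervals
    cond_means = np.array([c.mean() for c in chunks])
    S_i = cond_means.var() / var_y              # Var(E[Y|X_i]) / Var(Y)
    print(f"S_{i+1} ~ {S_i:.3f}")
```

    For the Ishigami function the analytical first-order indices are approximately 0.31, 0.44 and 0, which the binned estimates should approach as the sample grows.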

  13. Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)

    Science.gov (United States)

    Steyn, H. S., Jr.; Ellis, S. M.

    2009-01-01

    When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
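
    For the univariate case referred to above, eta-squared is simply the between-group sum of squares divided by the total sum of squares; a small worked example with synthetic data follows.

```python
import numpy as np

rng = np.random.default_rng(5)
groups = [rng.normal(mu, 1.0, 40) for mu in (0.0, 0.4, 0.8)]

grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand)**2 for g in groups)
ss_total = sum(((g - grand)**2).sum() for g in groups)
eta_sq = ss_between / ss_total    # proportion of variance due to group membership
print(f"eta-squared = {eta_sq:.3f}")
```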

  14. Use of hypotheses for analysis of variance Models: Challenging the current practice

    NARCIS (Netherlands)

    van Wesel, F.; Boeije, H.R.; Hoijtink, H

    2013-01-01

    In social science research, hypotheses about group means are commonly tested using analysis of variance. Although such hypotheses should be formulated as specifically as possible to test social science theory, they are often defined in general terms. In this article we use two studies to explore the current

  15. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1) 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model in providing useful insights. (author)
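
    The Markowitz step underlying the paper's variant can be sketched with the closed-form minimum-variance mix w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The covariance numbers below are invented, and the authors' load-factor decomposition is not reproduced.

```python
import numpy as np

# Hypothetical annualized cost covariance for coal, gas, and wind generation
fuels = ["coal", "gas", "wind"]
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.00],
                [0.00, 0.00, 0.02]])

ones = np.ones(len(fuels))
w = np.linalg.solve(cov, ones)
w /= ones @ w                       # minimum-variance weights, summing to 1
port_sd = np.sqrt(w @ cov @ w)
for f, wi in zip(fuels, w):
    print(f"{f:>4}: {wi:.2%}")
print(f"portfolio cost sd: {port_sd:.3f}")
```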

  16. WASP (Write a Scientific Paper) using Excel 9: Analysis of variance.

    Science.gov (United States)

    Grech, Victor

    2018-06-01

    Analysis of variance (ANOVA) may be required by researchers as an inferential statistical test when more than two means require comparison. This paper explains how to perform ANOVA in Microsoft Excel. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Analysis of rhythmic variance - ANORVA. A new simple method for detecting rhythms in biological time series

    Directory of Open Access Journals (Sweden)

    Peter Celec

    2004-01-01

    Full Text Available Cyclic variations of variables are ubiquitous in biomedical science. A number of methods for detecting rhythms have been developed, but they are often difficult to interpret. A simple procedure for detecting cyclic variations in biological time series and quantification of their probability is presented here. Analysis of rhythmic variance (ANORVA) is based on the premise that the variance in groups of data from rhythmic variables is low when a time distance of one period exists between the data entries. A detailed stepwise calculation is presented, including data entry and preparation, variance calculation, and difference testing. An example for the application of the procedure is provided, and a real dataset of the number of papers published per day in January 2003 using selected keywords is compared to randomized datasets. Randomized datasets show no cyclic variations. The number of papers published daily, however, shows a clear and significant (p<0.03) circaseptan (period of 7 days) rhythm, probably of social origin
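
    A guess at the core computation, based only on the premise stated above: for a candidate period, entries one period apart are grouped and their pooled variance is compared against permuted series. This is a sketch, not Celec's published procedure.

```python
import numpy as np

def phase_variance(x, period):
    """Pooled variance of groups whose members lie one period apart."""
    groups = [x[phase::period] for phase in range(period)]
    return np.mean([g.var() for g in groups if len(g) > 1])

rng = np.random.default_rng(6)
t = np.arange(84)
x = 10 + 3*np.sin(2*np.pi*t/7) + rng.normal(0, 1, t.size)   # circaseptan signal

obs = phase_variance(x, 7)
null = np.array([phase_variance(rng.permutation(x), 7) for _ in range(999)])
p = (np.sum(null <= obs) + 1) / (null.size + 1)   # permutation p-value
print(f"phase variance {obs:.2f} vs null mean {null.mean():.2f}, p = {p:.3f}")
```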

  18. Sensitivity analysis using contribution to sample variance plot: Application to a water hammer model

    International Nuclear Information System (INIS)

    Tarantola, S.; Kopustinskas, V.; Bolado-Lavin, R.; Kaliatka, A.; Ušpuras, E.; Vaišnoras, M.

    2012-01-01

    This paper presents “contribution to sample variance plot”, a natural extension of the “contribution to the sample mean plot”, which is a graphical tool for global sensitivity analysis originally proposed by Sinclair. These graphical tools have a great potential to display graphically sensitivity information given a generic input sample and its related model realizations. The contribution to the sample variance can be obtained at no extra computational cost, i.e. from the same points used for deriving the contribution to the sample mean and/or scatter-plots. The proposed approach effectively instructs the analyst on how to achieve a targeted reduction of the variance, by operating on the extremes of the input parameters' ranges. The approach is tested against a known benchmark for sensitivity studies, the Ishigami test function, and a numerical model simulating the behaviour of a water hammer effect in a piping system.
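
    One plausible construction of such a plot: sort the runs by an input and trace the cumulative share of Σ(y − ȳ)² against the input quantile; a curve hugging the diagonal indicates no influence. The normalization details of the original tool may differ.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
n = 2000
x = rng.uniform(0, 1, (n, 2))
y = x[:, 0]**2 + 0.1 * rng.normal(size=n)     # y mostly driven by x1

fig, ax = plt.subplots()
for i, label in enumerate(["x1", "x2"]):
    order = np.argsort(x[:, i])               # sort runs by this input
    contrib = (y[order] - y.mean())**2        # each point's variance contribution
    ax.plot(np.linspace(0, 1, n), np.cumsum(contrib) / contrib.sum(), label=label)
ax.plot([0, 1], [0, 1], "k--", lw=0.5)        # no-influence reference line
ax.set_xlabel("input quantile"); ax.set_ylabel("cumulative share of variance")
ax.legend(); plt.show()
```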

  19. Less Is More: Results of a Statewide Analysis of the Impact of Blood Transfusion on Coronary Artery Bypass Grafting Outcomes.

    Science.gov (United States)

    Crawford, Todd C; Magruder, J Trent; Fraser, Charles; Suarez-Pierre, Alejandro; Alejo, Diane; Bobbitt, Jennifer; Fonner, Clifford E; Canner, Joseph K; Horvath, Keith; Wehberg, Kurt; Taylor, Bradley; Kwon, Christopher; Whitman, Glenn J; Conte, John V; Salenger, Rawn

    2018-01-01

    Debate persists over the association between blood transfusions, especially those considered discretionary, and outcomes after cardiac operations. Using data from the Maryland Cardiac Surgery Quality Initiative, we sought to determine whether outcomes differed among coronary artery bypass grafting (CABG) patients receiving 1 U of red blood cells (RBCs) vs none. We used a statewide database to review patients who underwent isolated CABG from July 1, 2011, to June 30, 2016, across 10 Maryland cardiac surgery centers. We included patients who received 1 U or fewer of RBCs from the time of the operation through discharge. Propensity scoring, using 20 variables to control for treatment effect, was performed among patients who did and did not receive a transfusion. These two groups were matched 1:1 to assess for differences in our primary outcomes: operative death, prolonged postoperative length of stay (>14 days), and a composite postoperative respiratory complication of pneumonia or reintubation, or both. Of 10,877 patients who underwent CABG, 6,124 (56%) received no RBCs (group 1) during their operative hospitalization, and 981 (9.0%) received 1 U of RBCs (group 2), including 345 of 981 patients (35%) who received a transfusion intraoperatively. Propensity score matching generated 937 well-matched pairs. Compared with group 2, propensity-matched analysis revealed significantly greater 30-day survival in group 1 (99% vs 98%, p = 0.02) and reduced incidence of prolonged length of stay (3.7% vs 4.0%, p < 0.01). Our collaborative statewide analysis demonstrated that even 1 unit of blood was associated with significantly worse survival and longer length of stay after CABG. Multiinstitutional quality initiatives may seek to address discretionary transfusions and possess the potential to improve patient outcomes. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  20. Toward a more robust variance-based global sensitivity analysis of model outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.

  1. Principal variance component analysis of crop composition data: a case study on herbicide-tolerant cotton.

    Science.gov (United States)

    Harrison, Jay M; Howard, Delia; Malven, Marianne; Halls, Steven C; Culler, Angela H; Harrigan, George G; Wolfinger, Russell D

    2013-07-03

    Compositional studies on genetically modified (GM) and non-GM crops have consistently demonstrated that their respective levels of key nutrients and antinutrients are remarkably similar and that other factors such as germplasm and environment contribute more to compositional variability than transgenic breeding. We propose that graphical and statistical approaches that can provide meaningful evaluations of the relative impact of different factors to compositional variability may offer advantages over traditional frequentist testing. A case study on the novel application of principal variance component analysis (PVCA) in a compositional assessment of herbicide-tolerant GM cotton is presented. Results of the traditional analysis of variance approach confirmed the compositional equivalence of the GM and non-GM cotton. The multivariate approach of PVCA provided further information on the impact of location and germplasm on compositional variability relative to GM.

  2. Study on Analysis of Variance on the indigenous wild and cultivated rice species of Manipur Valley

    Science.gov (United States)

    Medhabati, K.; Rohinikumar, M.; Rajiv Das, K.; Henary, Ch.; Dikash, Th.

    2012-10-01

    The analysis of variance revealed considerable variation among the cultivars and the wild species for yield and other quantitative characters in both the years of investigation. The highly significant differences among the cultivars in year-wise and pooled analyses of variance for all the 12 characters reveal that there is enough genetic variability for all the characters studied. The existence of genetic variability is of paramount importance for starting a judicious plant breeding programme. Since introduced high-yielding rice cultivars usually do not perform well, improvement of indigenous cultivars is a clear choice for increasing rice production. The genetic variability of 37 rice germplasms in 12 agronomic characters estimated in the present study can be used in breeding programmes

  3. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    Science.gov (United States)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

    Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor, consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
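
    The per-slice statistic is straightforward once a core is represented as a 3-D array of Hounsfield Units; the toy volume below stands in for actual scan data.

```python
import numpy as np

rng = np.random.default_rng(8)
nz, ny, nx = 100, 64, 64
volume = rng.normal(1450, 15, (nz, ny, nx))       # undisturbed matrix, in HU
volume[40:45] += rng.normal(0, 120, (5, ny, nx))  # a bioturbated interval

slice_var = volume.reshape(nz, -1).var(axis=1)    # variance of each flat slice
for z in np.argsort(slice_var)[-3:]:
    print(f"slice {z}: variance {slice_var[z]:.0f} (likely reworked)")
```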

  4. Analysis of Molecular Variance Inferred from Metric Distances among DNA Haplotypes: Application to Human Mitochondrial DNA Restriction Data

    OpenAIRE

    Excoffier, L.; Smouse, P. E.; Quattro, J. M.

    1992-01-01

    We present here a framework for the study of molecular variation within a single species. Information on DNA haplotype divergence is incorporated into an analysis of variance format, derived from a matrix of squared-distances among all pairs of haplotypes. This analysis of molecular variance (AMOVA) produces estimates of variance components and F-statistic analogs, designated here as φ-statistics, reflecting the correlation of haplotypic diversity at different levels of hierarchical subdivisi...

  5. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    Science.gov (United States)

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain manageable expressions for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of split panel design given a budget, and transform it into a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  6. The Importance of Variance in Statistical Analysis: Don't Throw Out the Baby with the Bathwater.

    Science.gov (United States)

    Peet, Martha W.

    This paper analyzes what happens to the effect size of a given dataset when the variance is removed by categorization for the purpose of applying "OVA" methods (analysis of variance, analysis of covariance). The dataset is from a classic study by Holzinger and Swineford (1939) in which more than 20 ability tests were administered to 301…

  7. Statewide Suicide Prevention Council

    Science.gov (United States)


  8. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    Science.gov (United States)

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
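
    The textbook biserial estimator (the point-biserial rescaled by √(pq)/φ(z)) can be sketched as below; this shows the coefficient itself, not the sampling-variance estimators compared in the paper, and the simulated data are illustrative.

```python
import numpy as np
from scipy.stats import norm

def biserial(y, d):
    """Textbook biserial estimate from continuous y and dichotomous d (0/1)."""
    p = d.mean()
    q = 1 - p
    h = norm.pdf(norm.ppf(p))          # normal density at the assumed cut point
    return (y[d == 1].mean() - y[d == 0].mean()) * p * q / (y.std() * h)

rng = np.random.default_rng(9)
latent = rng.normal(size=2000)
# y correlates 0.5 with the latent variable that is later dichotomized
y = 0.5 * latent + rng.normal(scale=np.sqrt(1 - 0.25), size=2000)
d = (latent > norm.ppf(0.3)).astype(int)   # cut at the 30th percentile
print(f"biserial r ~ {biserial(y, d):.3f} (target 0.5)")
```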

  9. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    Full Text Available To solve the problem that dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirement of the fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller with the inputs of the first and second derivatives of the FOG signal is designed to estimate the window length of the DAVAR. Then the Allan variances of the signals within the time-variant window are calculated to obtain the DAVAR of the FOG signal and describe the dynamic characteristic of the time-varying FOG signal. Additionally, a performance evaluation index of the algorithm based on a radar chart is proposed. Experimental results show that, compared with DAVAR methods using different fixed window lengths, the proposed method identifies the change of the FOG signal with time effectively and improves the performance evaluation index by at least 30%.

  10. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the

  11. Improved analysis of all-sky meteor radar measurements of gravity wave variances and momentum fluxes

    Directory of Open Access Journals (Sweden)

    V. F. Andrioli

    2013-05-01

    Full Text Available The advantages of using a composite day analysis for all-sky interferometric meteor radars when measuring mean winds and tides are widely known. On the other hand, problems arise if this technique is applied to Hocking's (2005) gravity wave analysis for all-sky meteor radars. In this paper we describe how a simple change in the procedure makes it possible to use a composite day in Hocking's analysis. Also, we explain how a modified composite day can be constructed to test its ability to measure gravity wave momentum fluxes. Test results for specified mean, tidal, and gravity wave fields, including tidal amplitudes and gravity wave momentum fluxes varying strongly with altitude and/or time, suggest that the modified composite day allows characterization of monthly mean profiles of the gravity wave momentum fluxes, with good accuracy at least at the altitudes where the meteor counts are large (from 89 to 92.5 km. In the present work we also show that the variances measured with Hocking's method are often contaminated by the tidal fields and suggest a method of empirical correction derived from a simple simulation model. The results presented here greatly increase our confidence because they show that our technique is able to remove the tide-induced false variances from Hocking's analysis.

  12. [Analysis of variance of repeated data measured by water maze with SPSS].

    Science.gov (United States)

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce a method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated measures designs. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there are correlations among the repeatedly measured data. If any (P<0.05), corrected univariate results or the multivariate ANOVA results should be used. The SPSS statistical package is available to fulfil this process.

  13. RepExplore: addressing technical replicate variance in proteomics and metabolomics data analysis.

    Science.gov (United States)

    Glaab, Enrico; Schneider, Reinhard

    2015-07-01

    High-throughput omics datasets often contain technical replicates included to account for technical sources of noise in the measurement process. Although summarizing these replicate measurements by using robust averages may help to reduce the influence of noise on downstream data analysis, the information on the variance across the replicate measurements is lost in the averaging process and therefore typically disregarded in subsequent statistical analyses. We introduce RepExplore, a web-service dedicated to exploiting the information captured in the technical replicate variance to provide more reliable and informative differential expression and abundance statistics for omics datasets. The software builds on previously published statistical methods, which have been applied successfully to biomedical omics data but are difficult to use without prior experience in programming or scripting. RepExplore facilitates the analysis by providing fully automated data processing and interactive ranking tables, whisker plot, heat map and principal component analysis visualizations to interpret omics data and derived statistics. Freely available at http://www.repexplore.tk. Contact: enrico.glaab@uni.lu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  14. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    Science.gov (United States)

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  15. An Efficient SDN Load Balancing Scheme Based on Variance Analysis for Massive Mobile Users

    Directory of Open Access Journals (Sweden)

    Hong Zhong

    2015-01-01

    Full Text Available In a traditional network, server load balancing is used to satisfy the demand for high data volumes. The technique requires large capital investment while offering poor scalability and flexibility, and it struggles to support the highly dynamic workload demands of massive mobile users. To solve these problems, this paper analyses the principles of software-defined networking (SDN) and presents a new probabilistic method of load balancing based on variance analysis. The method can be used to dynamically manage traffic flows for supporting massive mobile users in SDN networks. The paper proposes a solution using the OpenFlow virtual switching technology instead of the traditional hardware switching technology. An SDN controller monitors the data traffic of each port by means of variance analysis and provides a probability-based selection algorithm to redirect traffic dynamically with the OpenFlow technology. Compared with existing load balancing methods, which were designed to support traditional networks, this solution has lower cost, higher reliability, and greater scalability, which satisfy the needs of mobile users.
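
    One plausible reading of the probability-based selection: weight each server inversely to the variance of its recently observed port load and pick a redirect target at random with those weights. The snippet below is a guess at the flavor of the algorithm, not the authors' OpenFlow implementation; all names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(15)
# Hypothetical recent throughput samples (Mbps) observed on three server ports
loads = {"s1": rng.normal(400, 60, 30),
         "s2": rng.normal(400, 15, 30),
         "s3": rng.normal(380, 30, 30)}

# Weight each server inversely to its load variance, then pick probabilistically
weights = np.array([1.0 / np.var(v) for v in loads.values()])
probs = weights / weights.sum()
choice = rng.choice(list(loads), p=probs)
print(dict(zip(loads, probs.round(3))), "->", choice)
```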

  16. A comparison of two follow-up analyses after multiple analysis of variance, analysis of variance, and descriptive discriminant analysis: A case study of the program effects on education-abroad programs

    Science.gov (United States)

    Alvin H. Yu; Garry. Chick

    2010-01-01

    This study compared the utility of two different post-hoc tests after detecting significant differences within factors on multiple dependent variables using multivariate analysis of variance (MANOVA). We compared the univariate F test (the Scheffé method) to descriptive discriminant analysis (DDA) using an educational-tour survey of university study-...

  17. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, variances and the bias are derived analytically for the case where Gelbard's batch method is applied. Then, the real variance estimated from this bias is compared with the real variance calculated from replicas. If the batch method is applied to calculate the sample variance, covariance terms between tallies which exist within a batch are eliminated from the bias. With the 2 by 2 fission matrix problem, we could calculate the real variance regardless of whether or not the batch method was applied. However, as the batch size got larger, the standard deviation of the real variance increased. When we perform a Monte Carlo estimation, we obtain a sample variance as the statistical uncertainty of the estimate. However, this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is called Gelbard's batch method. It has been shown that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the MC field. However, so far, no one has given an analytical interpretation of it.
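
    The effect being analyzed can be demonstrated numerically: with correlated tallies (an AR(1) series standing in for cycle-to-cycle correlation), the naive variance of the mean is biased low, while batch means give a larger, less biased estimate. This illustrates the idea only, not the paper's analytic derivation.

```python
import numpy as np

rng = np.random.default_rng(10)
n, rho = 20_000, 0.8
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):                       # AR(1): correlated 'cycle' tallies
    x[i] = rho * x[i - 1] + rng.normal()

naive = x.var(ddof=1) / n                   # ignores correlation -> biased low
batch = x.reshape(200, 100).mean(axis=1)    # 200 batches of 100 cycles each
batched = batch.var(ddof=1) / batch.size    # batch-means estimate of Var(mean)
print(f"naive {naive:.2e}  batched {batched:.2e}")
```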

  18. Shutdown dose rate analysis with CAD geometry, Cartesian/tetrahedral mesh, and advanced variance reduction

    International Nuclear Information System (INIS)

    Biondo, Elliott D.; Davis, Andrew; Wilson, Paul P.H.

    2016-01-01

    Highlights: • A CAD-based shutdown dose rate analysis workflow has been implemented. • Cartesian and superimposed tetrahedral mesh are fully supported. • Biased and unbiased photon source sampling options are available. • Hybrid Monte Carlo/deterministic techniques accelerate photon transport. • The workflow has been validated with the FNG-ITER benchmark problem. - Abstract: In fusion energy systems (FES) high-energy neutrons born from burning plasma activate system components to form radionuclides. The biological dose rate that results from photons emitted by these radionuclides after shutdown—the shutdown dose rate (SDR)—must be quantified for maintenance planning. This can be done using the Rigorous Two-Step (R2S) method, which involves separate neutron and photon transport calculations, coupled by a nuclear inventory analysis code. The geometric complexity and highly attenuating configuration of FES motivates the use of CAD geometry and advanced variance reduction for this analysis. An R2S workflow has been created with the new capability of performing SDR analysis directly from CAD geometry with Cartesian or tetrahedral meshes and with biased photon source sampling, enabling the use of the Consistent Adjoint Driven Importance Sampling (CADIS) variance reduction technique. This workflow has been validated with the Frascati Neutron Generator (FNG)-ITER SDR benchmark using both Cartesian and tetrahedral meshes and both unbiased and biased photon source sampling. All results are within 20.4% of experimental values, which constitutes satisfactory agreement. Photon transport using CADIS is demonstrated to yield speedups as high as 8.5·10^5 for problems using the FNG geometry.

  19. Adjoint-based global variance reduction approach for reactor analysis problems

    International Nuclear Information System (INIS)

    Zhang, Qiong; Abdel-Khalik, Hany S.

    2011-01-01

    A new variant of a hybrid Monte Carlo-Deterministic approach for simulating particle transport problems is presented and compared to the SCALE FW-CADIS approach. The new approach, denoted by the Subspace approach, optimizes the selection of the weight windows for reactor analysis problems where detailed properties of all fuel assemblies are required everywhere in the reactor core. Like the FW-CADIS approach, the Subspace approach utilizes importance maps obtained from deterministic adjoint models to derive automatic weight-window biasing. In contrast to FW-CADIS, the Subspace approach identifies the correlations between weight window maps to minimize the computational time required for global variance reduction, i.e., when the solution is required everywhere in the phase space. The correlations are employed to reduce the number of maps required to achieve the same level of variance reduction that would be obtained with single-response maps. Numerical experiments, serving as proof of principle, are presented to compare the Subspace and FW-CADIS approaches in terms of the global reduction in standard deviation. (author)

  20. Semiautomated analysis of embryoscope images: Using localized variance of image intensity to detect embryo developmental stages.

    Science.gov (United States)

    Mölder, Anna; Drury, Sarah; Costen, Nicholas; Hartshorne, Geraldine M; Czanner, Silvester

    2015-02-01

    Embryo selection in in vitro fertilization (IVF) treatment has traditionally been done manually using microscopy at intermittent time points during embryo development. Novel techniques have made it possible to monitor embryos using time lapse for long periods of time, and together with the reduced cost of data storage, this has opened the door to long-term time-lapse monitoring; large amounts of image material are now routinely gathered. However, the analysis is still to a large extent performed manually, and images are mostly used as qualitative reference. To make full use of the increased amount of microscopic image material, (semi)automated computer-aided tools are needed. An additional benefit of automation is the establishment of standardization tools for embryo selection and transfer, making decisions more transparent and less subjective. Another is the possibility to gather and analyze data in a high-throughput manner, gathering data from multiple clinics and increasing our knowledge of early human embryo development. In this study, the extraction of data to automatically select and track spatio-temporal events and features from sets of embryo images has been achieved using localized variance based on the distribution of image grey scale levels. A retrospective cohort study was performed using time-lapse imaging data derived from 39 human embryos from seven couples, covering the time from fertilization up to 6.3 days. The profile of localized variance has been used to characterize syngamy, mitotic division and stages of cleavage, compaction, and blastocoel formation. Prior to analysis, focal plane and embryo location were automatically detected, limiting precomputational user interaction to a calibration step and usable for automatic detection of region of interest (ROI) regardless of the method of analysis. The results were validated against the opinion of clinical experts. © 2015 International Society for Advancement of Cytometry.
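
    Localized variance itself is compact to compute: the variance in a window around each pixel is E[x²] − E[x]², i.e. two box filters. The sketch below is generic, not the authors' calibrated pipeline; the synthetic frame stands in for a microscopy image.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=15):
    """Variance of grey levels in a size x size window around each pixel."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.clip(mean_sq - mean * mean, 0, None)

rng = np.random.default_rng(11)
frame = rng.normal(0.5, 0.02, (128, 128))             # flat background
frame[40:80, 40:80] += rng.normal(0, 0.15, (40, 40))  # textured 'embryo' region
lv = local_variance(frame)
print("peak localized variance at", np.unravel_index(lv.argmax(), lv.shape))
```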

  1. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time

  2. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance

  3. Incidence and Risk Factors for Blood Transfusion in Total Joint Arthroplasty: Analysis of a Statewide Database.

    Science.gov (United States)

    Slover, James; Lavery, Jessica A; Schwarzkopf, Ran; Iorio, Richard; Bosco, Joseph; Gold, Heather T

    2017-09-01

    Significant attempts have been made to adopt practices to minimize blood transfusion after total joint arthroplasty (TJA) because of transfusion cost and potential negative clinical consequences including allergic reactions, transfusion-related lung injuries, and immunomodulatory effects. We aimed to evaluate risk factors for blood transfusion in a large cohort of TJA patients. We used the all-payer California Healthcare Cost and Utilization Project data from 2006 to 2011 to examine the trends in utilization of blood transfusion among arthroplasty patients (n = 320,746). We performed descriptive analyses and multivariate logistic regression clustered by hospital, controlling for Deyo-Charlson comorbidity index, age, insurance type (Medicaid vs others), gender, procedure year, and race/ethnicity. Eighteen percent (n = 59,038) of TJA patients underwent blood transfusion during their surgery, from 15% with single knee to 45% for bilateral hip arthroplasty. Multivariate analysis indicated that compared with the referent category of single knee arthroplasty, single hip had a significantly higher odds of blood transfusion (odds ratio [OR], 1.76; 95% confidence interval [CI], 1.68-1.83), as did bilateral knee (OR, 3.57; 95% CI, 3.20-3.98) and bilateral hip arthroplasty (OR, 6.17; 95% CI, 4.85-7.85). Increasing age (eg, age ≥80 years; OR, 2.99; 95% CI, 2.82-3.17), Medicaid insurance (OR, 1.36; 95% CI, 1.27-1.45), higher comorbidity index (eg, score of ≥3; OR, 2.33; 95% CI, 2.22-2.45), and females (OR, 1.75; 95% CI, 1.70-1.80) all had significantly higher odds of blood transfusion after TJA. Primary hip arthroplasties have significantly greater risk of transfusion than knee arthroplasties, and bilateral procedures have even greater risk, especially for hips. These factors should be considered when evaluating the risk for blood transfusions. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the standard posteroanterior wrist radiographs of 95 patients who were diagnosed with scaphoid fracture; all fractures with displacement of less than 1 mm treated conservatively were included, and ulnar variance was measured in all patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (±0.85) mm (CI -2.25 to -0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (±1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance between patients with ulnar variance less than -1 mm and those with ulnar variance greater than -1 mm. Patients with ulnar variance less than -1 mm have a greater risk of developing scaphoid nonunion, OR 4.58 (CI 1.51 to 13.89), p<.007. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.

  5. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

    Energy Technology Data Exchange (ETDEWEB)

    Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

    2012-01-01

    Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.

  6. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  7. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    Science.gov (United States)

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the deviations of frequency standards. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
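
    The classical (unweighted, scalar) AVAR that these modifications generalize is half the mean squared difference of successive non-overlapping m-sample averages; a minimal sketch under that textbook definition:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance at averaging factor m."""
    k = y.size // m
    means = y[:k*m].reshape(k, m).mean(axis=1)   # successive m-sample averages
    return 0.5 * np.mean(np.diff(means)**2)

rng = np.random.default_rng(12)
series = rng.normal(0, 1, 100_000)               # white-noise time series
for m in (1, 10, 100, 1000):                     # expect AVAR ~ 1/m for white noise
    print(f"m={m:>4}  AVAR={allan_variance(series, m):.2e}")
```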

  8. GPR image analysis to locate water leaks from buried pipes by applying variance filters

    Science.gov (United States)

    Ocaña-Levario, Silvia J.; Carreño-Alvarado, Elizabeth P.; Ayala-Cabrera, David; Izquierdo, Joaquín

    2018-05-01

    Nowadays, there is growing interest in controlling and reducing the amount of water lost through leakage in water supply systems (WSSs). Leakage is, in fact, one of the biggest problems faced by the managers of these utilities. This work addresses the problem of leakage in WSSs by using GPR (Ground Penetrating Radar) as a non-destructive method. The main objective is to identify and extract features such as leaks and components from GPR images under controlled laboratory conditions using a methodology based on second-order statistical parameters and, from the obtained features, to create 3D models that allow quick visualization of components and leaks in WSSs from GPR image analysis and subsequent interpretation. This methodology has been used before in other fields and provided promising results. The results obtained with the proposed methodology are presented, analyzed, interpreted and compared with the results obtained by using a well-established multi-agent based methodology. These results show that the variance filter is capable of highlighting the characteristics of components and anomalies in an intuitive manner, which can be identified by non-highly qualified personnel using the 3D models we develop. This research intends to pave the way towards future intelligent detection systems that enable the automatic detection of leaks in WSSs.

  9. FRAC (failure rate analysis code): a computer program for analysis of variance of failure rates. An application user's guide

    International Nuclear Information System (INIS)

    Martz, H.F.; Beckman, R.J.; McInteer, C.R.

    1982-03-01

    Probabilistic risk assessments (PRAs) require estimates of the failure rates of various components whose failure modes appear in the event and fault trees used to quantify accident sequences. Several reliability data bases have been designed for use in providing the necessary reliability data to be used in constructing these estimates. In the nuclear industry, the Nuclear Plant Reliability Data System (NPRDS) and the In-Plant Reliability Data System (IRPDS), among others, were designed for this purpose. An important characteristic of such data bases is the selection and identification of numerous factors used to classify each component that is reported and the subsequent failures of each component. However, the presence of such factors often complicates the analysis of reliability data in the sense that it is inappropriate to group (that is, pool) data for those combinations of factors that yield significantly different failure rate values. These types of data can be analyzed by analysis of variance. FRAC (Failure Rate Analysis Code) is a computer code that performs an analysis of variance of failure rates. In addition, FRAC provides failure rate estimates

  10. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    Noack, K.

    1982-01-01

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we have formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and have derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method

  11. Heteroscedastic Tests Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  12. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

    Variance-based sensitivity analysis has been widely studied and has established itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for the estimation of variance-based sensitivity indices, and its convergence and other performance characteristics are investigated. Since the method depends heavily on the partition scheme, the influence of the partition scheme is discussed and an optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher-order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
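    A rough sketch of the scatter-plot (space-partition) idea, not the paper's optimized scheme: partition the axis of one input into bins, average the output within each bin, and take the variance of those conditional means relative to the total variance. The model, bin count, and sample size below are illustrative assumptions.

```python
import numpy as np

def first_order_index(x, y, n_bins=32):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by partitioning the X_i axis."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    bin_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    bin_counts = np.array([(idx == b).sum() for b in range(n_bins)])
    overall = np.average(bin_means, weights=bin_counts)
    var_cond = np.average((bin_means - overall) ** 2, weights=bin_counts)
    return var_cond / y.var()

rng = np.random.default_rng(1)
n = 100_000
x1, x2 = rng.uniform(-1, 1, (2, n))
y = x1 + 0.3 * x2 + 0.1 * rng.normal(size=n)
# For this toy model the analytic values are S1 ~ 0.89 and S2 ~ 0.08.
print("S1 ~", round(first_order_index(x1, y), 3))
print("S2 ~", round(first_order_index(x2, y), 3))
```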

  13. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
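    A minimal sketch contrasting the two baseline strategies the paper reviews, recursive and direct, on a toy autoregressive series; the lag order, horizon, and linear model are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
y = np.zeros(500)                     # AR(1)-style toy series
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

p, H = 3, 5                           # lag order and forecast horizon

def lag_matrix(series, p):
    """Row t holds [y_{t-1}, ..., y_{t-p}] for targets starting at index p."""
    return np.column_stack(
        [series[p - k - 1 : len(series) - k - 1] for k in range(p)]
    )

# Recursive: one 1-step model, iterated H times (its forecasts fed back).
one_step = LinearRegression().fit(lag_matrix(y, p), y[p:])
window = list(y[-p:])
recursive = []
for _ in range(H):
    yhat = one_step.predict([window[::-1]])[0]
    recursive.append(yhat)
    window = window[1:] + [yhat]

# Direct: a separate h-step-ahead model for each horizon h = 1..H.
direct = []
for h in range(1, H + 1):
    Xh = lag_matrix(y[: len(y) - h + 1], p)
    model_h = LinearRegression().fit(Xh, y[p + h - 1 :])
    direct.append(model_h.predict([y[-p:][::-1]])[0])

print("recursive:", np.round(recursive, 3))
print("direct:   ", np.round(direct, 3))
```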

  14. CAIXA: a catalogue of AGN in the XMM-Newton archive. III. Excess variance analysis

    NARCIS (Netherlands)

    Ponti, G.; Papadakis, I.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.

    2012-01-01

    Context. We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study

  15. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  16. Analysis of inconsistent source sampling in monte carlo weight-window variance reduction methods

    Directory of Open Access Journals (Sweden)

    David P. Griesheimer

    2017-09-01

    The application of Monte Carlo (MC) to large-scale fixed-source problems has recently become possible with new hybrid methods that automate generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS) method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.
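    To make the splitting/rouletting overhead concrete, here is a hedged sketch of a standard weight-window routine (not the paper's framework): particles born above the window are split, those below play Russian roulette. The window bounds and birth weights are invented for illustration.

```python
import numpy as np

def apply_weight_window(weight, w_low, w_high, rng):
    """Return a list of post-window particle weights.

    Above the window: split into n copies of weight/n.
    Below the window: Russian roulette; survivors get weight w_survive,
    so the expected weight (weight/w_survive * w_survive) is preserved.
    """
    if weight > w_high:
        n = int(np.ceil(weight / w_high))
        return [weight / n] * n
    if weight < w_low:
        w_survive = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_survive:
            return [w_survive]   # survives with boosted weight
        return []                # killed
    return [weight]              # inside the window: unchanged

# Inconsistent source sampling: birth weights far from the window force
# immediate splitting/rouletting, the overhead discussed in the paper.
rng = np.random.default_rng(3)
for w in (0.01, 0.4, 1.0, 7.5):
    print(w, "->", apply_weight_window(w, w_low=0.25, w_high=2.0, rng=rng))
```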

  17. Analysis of the effectiveness of the variance and Downside Risk measures for formation of investment portfolios

    Directory of Open Access Journals (Sweden)

    Mariúcha Nóbrega Bezerra

    2016-09-01

    This paper aims to analyze the efficacy of the variance and of downside risk measures for the formation of investment portfolios in the Brazilian stock market. Using the methodologies of Ang (1975), Markowitz et al. (1993), Ballestero (2005), Estrada (2008) and Cumova and Nawrocki (2011), we sought to find the best method for solving the problem of the asymmetric and endogenous matrix and, inspired by the work of Markowitz (1952) and Lohre, Neumann and Winterfeldt (2010), to determine which risk metric is most suitable for a more efficient allocation of resources in the Brazilian stock market. The sample was composed of stocks of the IBrX 50, from 2000 to 2013. The results indicated that when semivariance was used as a measure of asymmetric risk, and if the investor can use more refined models for solving the problem of the asymmetric semivariance-cosemivariance matrix, the model of Cumova and Nawrocki (2011) will be more effective. Furthermore, from the Brazilian data, VaR proved more effective than the variance and other measures of downside risk with respect to minimizing the risk of loss. Thus, under the assumption that the investor has asymmetric preferences regarding risk, forming portfolios of stocks in the Brazilian market is more efficient when using criteria that minimize downside risk than when using the traditional mean-variance approach.
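    For readers unfamiliar with the competing risk measures, the hedged sketch below computes variance, target semivariance, and historical VaR for a simulated return series; the numbers are synthetic, and none of the paper's matrix-estimation methods are reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
returns = rng.normal(0.001, 0.02, 1000)  # hypothetical daily stock returns

variance = returns.var()

# Semivariance below a target (here the mean): only downside deviations count.
target = returns.mean()
downside = np.minimum(returns - target, 0.0)
semivariance = (downside ** 2).mean()

# Historical Value-at-Risk at 95%: loss threshold of the worst 5% of days.
var_95 = -np.quantile(returns, 0.05)

print(f"variance     = {variance:.6f}")
print(f"semivariance = {semivariance:.6f}")
print(f"VaR(95%)     = {var_95:.4f}")
```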

  18. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    Science.gov (United States)

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

  19. The mean–variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI

    Science.gov (United States)

    Thompson, William H.; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity, and the problem of deciding which brain connections are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity, whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed. PMID:26236216
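    The following is a small, hedged simulation of the sliding-window correlation analysis described here: two synthetic "regional" signals share a common component, and weaker connections show higher variance in their windowed correlations. The window length, coupling levels, and series length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
T, W = 600, 60                      # time points and sliding-window length
common = rng.normal(size=T)         # shared signal driving both "regions"

for noise in (0.5, 1.0, 2.0):
    ts1 = common + noise * rng.normal(size=T)
    ts2 = common + noise * rng.normal(size=T)
    # Sliding-window correlation time-series between the two regions.
    corr = np.array([
        np.corrcoef(ts1[t:t + W], ts2[t:t + W])[0, 1]
        for t in range(T - W)
    ])
    print(f"noise={noise}: mean(r)={corr.mean():.3f}  var(r)={corr.var():.5f}")
# Stronger connections (low noise) show high mean and low variance,
# mirroring the negative mean-variance scaling reported in the study.
```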

  20. The mean-variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI.

    Science.gov (United States)

    Thompson, William H; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity, and the problem of deciding which brain connections are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity, whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed.

  1. Use of a Novel Accounting and Grouping Method for Major Trunk Injury-Analysis of Data from a Statewide Trauma Financial Survey.

    Science.gov (United States)

    Joubert, Kyla D; Mabry, Charles D; Kalkwarf, Kyle J; Betzold, Richard D; Spencer, Horace J; Spinks, Kara M; Porter, Austin; Karim, Saleema; Robertson, Ronald D; Sutherland, Michael J; Maxson, Robert T

    2016-09-01

    Major trunk trauma is common and costly, but comparisons of costs between trauma centers (TCs) are rare. Understanding cost is essential to improve quality, manage trauma service lines, and facilitate institutional commitment for trauma. We have used the results of a statewide trauma financial survey of Level I to IV TCs to develop a useful grouping method for the costs and clinical characteristics of major trunk trauma. The trauma financial survey collected billing and clinical data on 75 per cent of the state trauma registry patients for fiscal year 2012. Cost was calculated by separately accounting for the embedded costs of trauma response and verification, and then adjusting reasonable costs from the Medicare cost report for each TC. The cost-to-charge ratios were then recalculated and used to determine uniform cost estimates for each patient. From the 13,215 patients submitted for the survey, we selected 1,094 patients with major trunk trauma: lengths of stay ≥48 hours and a maximum injury severity of AIS ≥3 for either thoracic or abdominal trauma. These patients were then divided into three Injury Severity Score (ISS) groups of 9 to 15, 16 to 24, or 25+ to stratify patients into similar injury groups for analysis of cost and cost drivers. For abdominal injury, the average total cost for patients with ISS 9 to 15 was $17,429. Total cost and cost per day increased with severity of injury, with $51,585 being the total cost for those with ISS 25+. Similar trends existed for thoracic injury. Use of the Medicare cost report and cost-to-charge ratios to compute uniform costs with an innovative grouping method applied to data collected across a statewide trauma system provides unique information regarding cost and outcomes, which affects quality improvement, trauma service line management, and decisions on TC participation.
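    A hedged pandas sketch of the uniform-costing step described above: multiply charges by each center's cost-to-charge ratio and summarize by ISS band. All records, ratios, and bin edges below are invented for illustration; the survey's actual adjustments (embedded trauma-response costs, Medicare cost report reconciliation) are not reproduced.

```python
import pandas as pd

# Hypothetical billing records; ccr = the center's cost-to-charge ratio.
df = pd.DataFrame({
    "trauma_center": ["TC1", "TC1", "TC2", "TC2", "TC2"],
    "charges":       [42_000, 98_000, 55_000, 130_000, 61_000],
    "ccr":           [0.35, 0.35, 0.42, 0.42, 0.42],
    "iss":           [10, 27, 14, 18, 25],
    "los_days":      [3, 12, 4, 7, 9],
})

df["uniform_cost"] = df["charges"] * df["ccr"]       # uniform cost estimate
df["cost_per_day"] = df["uniform_cost"] / df["los_days"]
df["iss_group"] = pd.cut(df["iss"], bins=[8, 15, 24, 75],
                         labels=["ISS 9-15", "ISS 16-24", "ISS 25+"])

print(df.groupby("iss_group", observed=True)[
    ["uniform_cost", "cost_per_day"]
].mean())
```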

  2. A Discussion Concerning the Inclusion of Variety Effect when Analysis of Variance is Used to Detect Differentially Expressed Genes

    Directory of Open Access Journals (Sweden)

    Guri Feten

    2007-01-01

    In microarray studies, several statistical methods have been proposed with the purpose of identifying differentially expressed genes in two varieties. A commonly used method is an analysis of variance model where only the effect of the interaction between variety and gene is tested. In this paper we argue that, in addition to the interaction effects, the main effect of variety should simultaneously be taken into account when positing the hypothesis.

  3. Longitudinal Analysis of Residual Feed Intake in Mink using Random Regression with Heterogeneous Residual Variance

    DEFF Research Database (Denmark)

    Shirali, Mahmoud; Nielsen, Vivi Hunnicke; Møller, Steen Henrik

    Heritability of residual feed intake (RFI) increased from low to high over the growing period in male and female mink. The lowest heritability for RFI (male: 0.04 ± 0.01 standard deviation (SD); female: 0.05 ± 0.01 SD) was found in the early growth stages, and the highest heritability (male: 0.33 ± 0.02; female: 0.34 ± 0.02 SD) was achieved at the late growth stages. The genetic correlation between different growth stages for RFI showed a high association (0.91 to 0.98) between early and late growing periods. However, phenotypic correlations were lower, from 0.29 to 0.50. The residual variances were substantially higher...

  4. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    Yu, Zuwei

    2003-01-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets
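    The record describes a Markowitz-style model extended with integer constraints. The hedged sketch below illustrates only the plain mean-variance core on made-up data for three regional markets; the MIP features (transaction costs, wheeling administration, integer variables) are omitted.

```python
import numpy as np

# Hypothetical expected returns and risk for three regional markets.
mu = np.array([0.04, 0.06, 0.05])
vol = np.array([0.10, 0.18, 0.12])
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
cov = corr * np.outer(vol, vol)

# Trace the frontier with random long-only portfolios (no MIP constraints).
rng = np.random.default_rng(6)
w = rng.dirichlet(np.ones(3), size=20_000)
rets = w @ mu
risks = np.sqrt(np.einsum("ij,jk,ik->i", w, cov, w))  # per-portfolio std dev

# Lowest-risk portfolio near a 5% target return.
mask = np.abs(rets - 0.05) < 0.001
best = w[mask][risks[mask].argmin()]
print("weights:", best.round(3), " risk:", risks[mask].min().round(4))
```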

  5. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    Zuwei Yu

    2003-01-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets. (author)

  6. A spatial mean-variance MIP model for energy market risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zuwei Yu [Purdue University, West Lafayette, IN (United States). Indiana State Utility Forecasting Group and School of Industrial Engineering

    2003-05-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets. (author)

  7. A spatial mean-variance MIP model for energy market risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zuwei [Indiana State Utility Forecasting Group and School of Industrial Engineering, Purdue University, Room 334, 1293 A.A. Potter, West Lafayette, IN 47907 (United States)

    2003-05-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets.

  8. A Framework for Statewide Analysis of Site Suitability, Energy Estimation, Life Cycle Costs, Financial Feasibility and Environmental Assessment of Wind Farms: A Case Study of Indiana

    Science.gov (United States)

    Kumar, Indraneel

    In the last decade, Midwestern states including Indiana have experienced unprecedented growth in utility-scale wind energy farms. For example, by the end of 2013, Indiana had 1.5 GW of wind turbines installed, which could provide electrical energy for as many as half a million homes. However, there is no statewide systematic framework available for the evaluation of wind farm impacts on endangered species, required setbacks and proximity standards to infrastructure, and life cycle costs. This research aims to fill that gap, and it addresses the following questions. How much land is suitable for wind farm siting in Indiana given the constraints of environmental, ecological, cultural, settlement, physical infrastructure and wind resource parameters? How much wind energy can be obtained? What are the life cycle costs and the economic and financial feasibility? Is wind energy production and development in a state an emission-free undertaking? The framework developed in the study is applied to a case study of Indiana. A fuzzy-logic-based AHP (Analytic Hierarchy Process) spatial site suitability analysis for wind energy is formulated. The magnitude of wind energy that could be sited and installed forms the input for an economic and financial feasibility analysis over the 20-25 year life cycle of wind turbines in Indiana. Monte Carlo simulation is used to account for uncertainty and nonlinearity in various cost and price parameters. Impacts of incentives and cost variables such as production tax credits, costs of capital, and economies of scale are assessed. Further, an economic input-output (IO) based environmental assessment model is developed for wind energy, where costs from the financial feasibility analysis constitute the final demand vectors. This customized model for Indiana is used to assess emissions of criteria air pollutants, hazardous air pollutants and greenhouse gases (GHG) across life cycle events of wind turbines. The findings of the case study include

  9. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    Science.gov (United States)

    Budde, M.E.; Tappan, G.; Rowland, James; Lewis, J.; Tieszen, L.L.

    2004-01-01

    The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
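    A hedged sketch of the local variance idea: flag a pixel as anomalous when its value departs from its neighbourhood mean by more than k local standard deviations. The synthetic NDVI grid, window size, and threshold are illustrative assumptions; counting flags across years, as the study does, would be a loop over such maps.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def flag_anomalies(ndvi, size=15, k=2.0):
    """+1 / -1 for pixels more than k local std devs from the local mean."""
    x = ndvi.astype(float)
    mean = uniform_filter(x, size)
    var = uniform_filter(x ** 2, size) - mean ** 2
    std = np.sqrt(np.maximum(var, 0.0))
    z = (x - mean) / np.where(std > 0, std, np.inf)
    return np.where(z > k, 1, np.where(z < -k, -1, 0))

rng = np.random.default_rng(7)
ndvi = rng.normal(0.5, 0.02, (100, 100))   # one year's integrated NDVI
ndvi[40:43, 40:43] -= 0.25                 # hypothetical degraded patch

flags = flag_anomalies(ndvi)
# Noise pixels also trip the threshold occasionally; the study controls
# for this by counting how many years a pixel repeats as an anomaly.
print("negative anomaly pixels:", (flags == -1).sum())
print("positive anomaly pixels:", (flags == 1).sum())
```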

  10. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

    Directory of Open Access Journals (Sweden)

    Renard Jean-Paul

    2005-06-01

    Background: Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility to evaluate multiple sources of variation in an experiment. Results: AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, and (3) to propose two models for a gene's variance estimation and to perform a correction for multiple comparisons. Conclusion: AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.
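    AnovArray itself is SAS; as a hedged Python analogue of the underlying model, the sketch below fits a gene + variety + gene×variety ANOVA on simulated expression data with statsmodels. The data, effect size, and replicate counts are invented, and this is not the package's actual implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(8)
rows = []
for gene in [f"g{i}" for i in range(10)]:
    for variety in ("A", "B"):
        for rep in range(3):
            # One hypothetical differentially expressed gene: g0 in variety B.
            shift = 1.5 if (gene == "g0" and variety == "B") else 0.0
            rows.append({"gene": gene, "variety": variety,
                         "expr": rng.normal(5.0 + shift, 0.3)})
df = pd.DataFrame(rows)

# Gene baselines, variety main effect, and the gene x variety interaction
# that flags differential expression.
fit = smf.ols("expr ~ C(gene) + C(variety) + C(gene):C(variety)", data=df).fit()
print(anova_lm(fit, typ=2))
```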

  11. Detecting and accounting for multiple sources of positional variance in peak list registration analysis and spin system grouping.

    Science.gov (United States)

    Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B

    2017-08-01

    Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer-assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position, limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e. spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low- and high-quality experimental solution NMR and solid-state NMR peak lists to assess the performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration with uniform match tolerances is only able to recover 50 to 80% of the spin systems, due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations. To facilitate evaluation of the

  12. Sex versus asex: An analysis of the role of variance conversion.

    Science.gov (United States)

    Lewis-Pye, Andrew E M; Montalbán, Antonio

    2017-04-01

    The question as to why most complex organisms reproduce sexually remains a very active research area in evolutionary biology. Theories dating back to Weismann have suggested that the key may lie in the creation of increased variability in offspring, causing enhanced response to selection. Under appropriate conditions, selection is known to result in the generation of negative linkage disequilibrium, with the effect of recombination then being to increase genetic variance by reducing these negative associations between alleles. It has therefore been a matter of significant interest to understand precisely those conditions resulting in negative linkage disequilibrium, and to recognise also the conditions in which the corresponding increase in genetic variation will be advantageous. Here, we prove rigorous results for the multi-locus case, detailing the build up of negative linkage disequilibrium, and describing the long term effect on population fitness for models with and without bounds on fitness contributions from individual alleles. Under the assumption of large but finite bounds on fitness contributions from alleles, the non-linear nature of the effect of recombination on a population presents serious obstacles in finding the genetic composition of populations at equilibrium, and in establishing convergence to those equilibria. We describe techniques for analysing the long term behaviour of sexual and asexual populations for such models, and use these techniques to establish conditions resulting in higher fitnesses for sexually reproducing populations. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

    International Nuclear Information System (INIS)

    Westner, Guenther; Madlener, Reinhard

    2010-01-01

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)

  14. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

    Energy Technology Data Exchange (ETDEWEB)

    Westner, Guenther; Madlener, Reinhard [E.ON Energy Projects GmbH, Arnulfstrasse 56, 80335 Munich (Germany)

    2010-12-15

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)

  15. Variance analysis of x-ray CT sinograms in the presence of electronic noise background.

    Science.gov (United States)

    Ma, Jianhua; Liang, Zhengrong; Fan, Yi; Liu, Yan; Huang, Jing; Chen, Wufan; Lu, Hongbing

    2012-07-01

    Low-dose x-ray computed tomography (CT) is clinically desired. Accurate noise modeling is a fundamental issue for low-dose CT image reconstruction via statistics-based sinogram restoration or statistical iterative image reconstruction. In this paper, the authors analyzed the statistical moments of low-dose CT data in the presence of an electronic noise background. The authors first studied the statistical moment properties of detected signals in the CT transmission domain, where the noise of detected signals is considered as quanta fluctuation upon an electronic noise background. Then the authors derived, via Taylor expansion, a new formula for the mean-variance relationship of the detected signals in the CT sinogram domain, wherein image formation becomes a linear operation between the sinogram data and the unknown image, rather than a nonlinear operation in the CT transmission domain. To gain insight into the derived formula by experiments, an anthropomorphic torso phantom was scanned repeatedly by a commercial CT scanner at five different mAs levels from 100 down to 17. The results demonstrated that the electronic noise background is significant when low-mAs (or low-dose) scans are performed. The influence of the electronic noise background should be considered in low-dose CT imaging.
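    A hedged simulation of the transmission-to-sinogram noise behaviour sketched above: Poisson photon counts on a Gaussian electronic noise background, log-transformed into sinogram data. The incident flux, noise level, and line integrals are illustrative assumptions, and the paper's exact Taylor-expansion formula is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)
N0 = 2.0e4          # incident photon count per detector bin (assumed)
sigma_e = 10.0      # std of electronic noise background, in counts (assumed)

for p in np.linspace(0.5, 4.0, 8):          # true line integrals
    lam = N0 * np.exp(-p)                   # mean transmitted quanta
    counts = rng.poisson(lam, 200_000) + rng.normal(0, sigma_e, 200_000)
    counts = np.clip(counts, 1.0, None)     # guard the logarithm
    sino = -np.log(counts / N0)             # sinogram datum
    print(f"p={p:.1f}  mean={sino.mean():.3f}  var={sino.var():.2e}")
# The variance grows roughly like exp(p)/N0 and inflates further at large p,
# where electronic noise dominates the few surviving quanta.
```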

  16. Mapping one strong 'Ohana: using network analysis and GIS to enhance the effectiveness of a statewide coalition to prevent child abuse and neglect.

    Science.gov (United States)

    Cardazone, Gina; U Sy, Angela; Chik, Ivan; Corlew, Laura Kate

    2014-06-01

    Network analysis and GIS enable the presentation of meaningful data about organizational relationships and community characteristics, respectively. Together, these tools can provide a concrete representation of the ecological context in which coalitions operate, and may help coalitions identify opportunities for growth and enhanced effectiveness. This study uses network analysis and GIS mapping as part of an evaluation of the One Strong 'Ohana (OSO) campaign. The OSO campaign was launched in 2012 via a partnership between the Hawai'i Children's Trust Fund (HCTF) and the Joyful Heart Foundation. The OSO campaign uses a collaborative approach aimed at increasing public awareness of child maltreatment and protective factors that can prevent maltreatment, as well as enhancing the effectiveness of the HCTF Coalition. This study focuses on three elements of the OSO campaign evaluation: (1) Network analysis exploring the relationships between 24 active Coalition member organizations, (2) GIS mapping of responses to a randomized statewide phone survey (n = 1,450) assessing awareness of factors contributing to child maltreatment, and (3) Combined GIS maps and network data, illustrating opportunities for geographically-targeted coalition building and public awareness activities.

  17. Role of Surgical Services in Profitability of Hospitals in California: An Analysis of Office of Statewide Health Planning and Development Annual Financial Data.

    Science.gov (United States)

    Moazzez, Ashkan; de Virgilio, Christian

    2016-10-01

    With constant changes in health-care laws and payment methods, the profitability and financial sustainability of hospitals are of utmost importance. The purpose of this study is to determine the relationship between surgical services and hospital profitability. The Office of Statewide Health Planning and Development annual financial databases for the years 2009 to 2011 were used for this study. The hospitals' characteristics and income statement elements were extracted for statistical analysis using bivariate and multivariate linear regression. A total of 989 financial records of 339 hospitals were included. On bivariate analysis, the number of inpatient and ambulatory operating rooms (ORs), the number of cases done both as inpatient and outpatient in each OR, and the average minutes used in inpatient ORs were significantly related to the net income of the hospital. On multivariate regression analysis, when controlling for hospitals' payer mix and the study year, only the number of inpatient cases done in the inpatient ORs (β = 832, P = 0.037) and the number of ambulatory ORs (β = 1,485,466, P = 0.001) were significantly related to the net income of the hospital. These findings suggest that hospitals can maximize their profitability by diverting and allocating outpatient surgeries to ambulatory ORs, to allow for more inpatient surgeries.

  18. Variance Component Quantitative Trait Locus Analysis for Body Weight Traits in Purebred Korean Native Chicken

    Directory of Open Access Journals (Sweden)

    Muhammad Cahyadi

    2016-01-01

    Quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length for 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p value = 0.0001) and GGA4 for growth 6 to 8 weeks (LOD = 2.88, nominal p value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p value = 0.0007) and a suggestive QTL for 8 weeks (LOD = 1.96, nominal p value = 0.0027) were detected on GGA4; QTLs were also detected for two different body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth-related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving the body weight traits in native chicken breeds, especially for the Asian native chicken breeds.

  19. SU-E-T-41: Analysis of GI Dose Variability Due to Intrafraction Setup Variance

    International Nuclear Information System (INIS)

    Phillips, J; Wolfgang, J

    2014-01-01

    Purpose: Proton SBRT (stereotactic body radiation therapy) can be an effective modality for the treatment of gastrointestinal tumors, but it is limited in practice due to sensitivity with respect to variation in the RPL (radiological path length). Small, intrafractional shifts in patient anatomy can lead to significant changes in the dose distribution. This study describes a tool designed to visualize uncertainties in radiological depth in patient CTs and to aid in treatment plan design. Methods: This project utilizes the Shadie toolkit, a GPU-based framework that allows real-time interactive calculations for volume visualization. Current SBRT simulation practice consists of a serial CT acquisition for the assessment of inter- and intra-fractional motion utilizing patient-specific immobilization systems. Shadie was used to visualize potential uncertainties, including RPL variance and changes in gastric content. Input for this procedure consisted of two patient CT sets, contours of the desired organ, and a pre-calculated dose. In this study, we performed rigid registrations between sets of 4DCTs obtained from a patient with varying setup conditions. Custom visualizations are written by the user in Shadie, permitting one to create color-coded displays derived from a calculation along each ray. Results: Serial CT data acquired on subsequent days were analyzed for variation in RPL and gastric content. Specific shaders were created to visualize clinically relevant features, including RPL integrated up to organs of interest. Using pre-calculated dose distributions and utilizing segmentation masks as additional input allowed us to further refine the display output from Shadie and create tools suitable for clinical usage. Conclusion: We have demonstrated a method to visualize potential uncertainty for intrafractional proton radiotherapy. We believe this software could prove a useful tool to guide those looking to design treatment plans least

  20. The benefit of regional diversification of cogeneration investments in Europe: A mean-variance portfolio analysis

    Energy Technology Data Exchange (ETDEWEB)

    Westner, Guenther, E-mail: guenther.westner@eon-energie.co [E.ON Energy Projects GmbH, Arnulfstrasse 56, 80335 Munich (Germany); Madlener, Reinhard, E-mail: rmadlener@eonerc.rwth-aachen.d [Institute for Future Energy Consumer Needs and Behavior (FCN), Faculty of Business and Economics/E.ON Energy Research Center, RWTH Aachen University, Mathieustrasse 6, 52074 Aachen (Germany)

    2010-12-15

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. - Research highlights: • Preconditions for CHP investments differ significantly between the EU member states. • Regional diversification of CHP investments can reduce the total portfolio risk. • Risk reduction depends on the chosen CHP technology.

  1. Louisiana's statewide beach cleanup

    Science.gov (United States)

    Lindstedt, Dianne M.; Holmes, Joseph C.

    1989-01-01

    Litter along Louisiana's beaches has become a well-recognized problem. In September 1987, Louisiana's first statewide beach cleanup attracted about 3300 volunteers who filled 16,000 bags with trash collected along 15 beaches. An estimated 800,173 items were gathered. Forty percent of the items were made of plastic and 11% were of polystyrene. Of all the litter collected, 37% was beverage-related. Litter from the oil and gas, commercial fishing, and maritime shipping industries was found, as well as that left by recreational users. Although beach cleanups temporarily rid Louisiana beaches of litter, the real value of the effort is in public participation and education. Civic groups, school children, and individuals have benefited by increasing their awareness of the problems of trash disposal.

  2. Analysis of covariance with pre-treatment measurements in randomized trials under the cases that covariances and post-treatment variances differ between groups.

    Science.gov (United States)

    Funatogawa, Takashi; Funatogawa, Ikuko; Shyr, Yu

    2011-05-01

    When primary endpoints of randomized trials are continuous variables, the analysis of covariance (ANCOVA) with pre-treatment measurements as a covariate is often used to compare two treatment groups. In the ANCOVA, equal slopes (coefficients of pre-treatment measurements) and equal residual variances are commonly assumed. However, random allocation guarantees only equal variances of pre-treatment measurements. Unequal covariances and variances of post-treatment measurements indicate unequal slopes and, usually, unequal residual variances. For non-normal data with unequal covariances and variances of post-treatment measurements, it is known that the ANCOVA with equal slopes and equal variances using an ordinary least-squares method provides an asymptotically normal estimator for the treatment effect. However, the asymptotic variance of the estimator differs from the variance estimated from a standard formula, and its property is unclear. Furthermore, the asymptotic properties of the ANCOVA with equal slopes and unequal variances using a generalized least-squares method are unclear. In this paper, we consider non-normal data with unequal covariances and variances of post-treatment measurements, and examine the asymptotic properties of the ANCOVA with equal slopes using the variance estimated from a standard formula. Analytically, we show that the actual type I error rate, thus the coverage, of the ANCOVA with equal variances is asymptotically at a nominal level under equal sample sizes. That of the ANCOVA with unequal variances using a generalized least-squares method is asymptotically at a nominal level, even under unequal sample sizes. In conclusion, the ANCOVA with equal slopes can be asymptotically justified under random allocation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
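    A hedged statsmodels sketch of the ANCOVA discussed above: regress the post-treatment outcome on the pre-treatment measurement and a group indicator, here with simulated data in which slopes and residual variances deliberately differ between groups. The robust-error variant is shown as one pragmatic option, not the paper's derivation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 100
pre = rng.normal(50, 10, n)
group = np.repeat([0, 1], n // 2)
# Unequal slopes and unequal residual variances across groups: exactly
# the non-standard situation the paper analyses.
post = (10 + np.where(group == 1, 0.9, 0.6) * pre + 3.0 * group
        + rng.normal(0, np.where(group == 1, 6.0, 4.0)))
df = pd.DataFrame({"pre": pre, "group": group, "post": post})

# Standard ANCOVA: common slope, treatment effect from the group term.
fit = smf.ols("post ~ pre + C(group)", data=df).fit()
# Heteroscedasticity-robust (HC3) standard errors as one safeguard.
print(fit.get_robustcov_results(cov_type="HC3").summary().tables[1])
```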

  3. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Directory of Open Access Journals (Sweden)

    Daniel Bartz

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  4. The Requirement of a Positive Definite Covariance Matrix of Security Returns for Mean-Variance Portfolio Analysis: A Pedagogic Illustration

    Directory of Open Access Journals (Sweden)

    Clarence C. Y. Kwan

    2010-07-01

    This study considers, from a pedagogic perspective, a crucial requirement for the covariance matrix of security returns in mean-variance portfolio analysis. Although the requirement that the covariance matrix be positive definite is fundamental in modern finance, it has not received any attention in standard investment textbooks. Being unaware of the requirement could cause confusion for students over some strange portfolio results that are based on seemingly reasonable input parameters. This study considers the requirement both informally and analytically. Electronic spreadsheet tools for constrained optimization and basic matrix operations are utilized to illustrate the various concepts involved.
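    The study's spreadsheet illustrations aside, here is a hedged numpy sketch of the requirement: a Cholesky factorization succeeds exactly when a symmetric matrix is positive definite, and the example correlation matrix below looks reasonable entry-by-entry yet is not. The eigenvalue-clipping repair is one simple strategy among several, assumed here for illustration.

```python
import numpy as np

def is_positive_definite(cov):
    """Cholesky succeeds iff the (symmetric) matrix is positive definite."""
    try:
        np.linalg.cholesky(cov)
        return True
    except np.linalg.LinAlgError:
        return False

def clip_to_pd(cov, floor=1e-8):
    """Clip eigenvalues up to a small floor (one simple repair strategy)."""
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(np.maximum(vals, floor)) @ vecs.T

# Seemingly reasonable but inconsistent 3-asset input: correlations of
# 0.9, 0.9, and -0.9 cannot all hold simultaneously.
corr = np.array([[1.0, 0.9, 0.9],
                 [0.9, 1.0, -0.9],
                 [0.9, -0.9, 1.0]])
print("positive definite?", is_positive_definite(corr))
print("after repair:     ", is_positive_definite(clip_to_pd(corr)))
```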

  5. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  6. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  7. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data

    DEFF Research Database (Denmark)

    Greve, Douglas N; Svarer, Claus; Fisher, Patrick M

    2014-01-01

    Exploratory (i.e., voxelwise) spatial methods are commonly used in neuroimaging to identify areas that show an effect when a region-of-interest (ROI) analysis cannot be performed because no strong a priori anatomical hypothesis exists. However, noise at a single voxel is much higher than noise in a ROI, making noise management critical to successful exploratory analysis. This work explores how preprocessing choices affect the bias and variability of voxelwise kinetic modeling analysis of brain positron emission tomography (PET) data. These choices include the use of volume- or cortical surface

  8. Alabama statewide mobility report, 2014.

    Science.gov (United States)

    2015-09-01

    This Alabama Statewide Mobility Report for 2014 is a new way to analyze interstate mobility performance over an entire year. Over half a billion speed records were acquired, stored, and analyzed for this report. These observations capture recurring c...

  9. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    Science.gov (United States)

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
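    Since the record's point is that MRPP is absent from packages psychologists commonly use, here is a hedged, self-contained Python sketch of the basic statistic: the group-size-weighted mean within-group pairwise distance, compared against its permutation distribution. The weighting choice (n_g/N), Euclidean distance, and permutation count are common defaults assumed here, not the article's SPSS macros.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mrpp(X, labels, n_perm=2000, seed=0):
    """MRPP: small delta (weighted within-group distance) => compact groups."""
    D = squareform(pdist(X))
    labels = np.asarray(labels)
    N = len(labels)
    rng = np.random.default_rng(seed)

    def delta(lab):
        d = 0.0
        for g in np.unique(lab):
            idx = np.flatnonzero(lab == g)
            within = D[np.ix_(idx, idx)]
            d += (len(idx) / N) * within[np.triu_indices(len(idx), k=1)].mean()
        return d

    obs = delta(labels)
    perm = np.array([delta(rng.permutation(labels)) for _ in range(n_perm)])
    return obs, (perm <= obs).mean()   # one-sided permutation p-value

rng = np.random.default_rng(11)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(1.5, 1, (20, 2))])
obs, p = mrpp(X, [0] * 20 + [1] * 20)
print(f"delta = {obs:.3f}, p = {p:.4f}")
```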

  10. An Empirical Study Based on the SPSS Variance Analysis of College Teachers' Sports Participation and Satisfaction

    OpenAIRE

    Yunqiu Liang

    2013-01-01

    This empirical study examines the relationship between college teachers' sports participation and their job satisfaction, comparing groups of teachers who do and do not participate in sports activities, with survey data analyzed statistically in SPSS. The results show that teachers who participate in sports report higher job satisfaction than those who do not, and that job satisfaction differs with the form of sports participation. Recommendations for college teachers to address...

  11. Data analysis and approximate models model choice, location-scale, analysis of variance, nonparametric regression and image analysis

    CERN Document Server

    Davies, Patrick Laurie

    2014-01-01

    Contents: Introduction (Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis; Approximation, Randomness, Chaos, Determinism); Approximation (A Concept of Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness); Discrete Models (The Empirical Density; Metrics and Discrepancies; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data); Outliers (Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data); The Location...

  12. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares

    International Nuclear Information System (INIS)

    Boccard, Julien; Rudaz, Serge

    2016-01-01

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect; and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. - Highlights: • A new method is proposed for the analysis of Omics data generated using design of experiments

  13. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares

    Energy Technology Data Exchange (ETDEWEB)

    Boccard, Julien, E-mail: julien.boccard@unige.ch; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing the factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables while only a limited number of samples is available. In that context, most existing strategies involve ANOVA-based partitioning of the sources of variation and separate analysis of the ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust goodness-of-fit estimator for assessing the reliability of the ANOVA decomposition; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect; and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. - Highlights: • A new method is proposed for the analysis of Omics data generated using design of

  14. An Analysis of the Factors Generating the Variance Between the Budgeted and Actual Operating Results of the Naval Aviation Depot at North Island, California

    National Research Council Canada - National Science Library

    Curran, Thomas; Schimpff, Joshua J

    2008-01-01

    .... The variance analysis between budgeted (projected) and actual financial results was performed on financial data collected on the E-2C aircraft program from Fleet Readiness Center Southwest (FRCSW...

  15. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    Science.gov (United States)

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be constructed that takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The Matlab code developed to construct optimized portfolios is also provided in the Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems.
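
The mean-variance reasoning behind such a portfolio reduces to the quadratic form w′Σw for portfolio variance. The sketch below uses synthetic hourly generation in place of the simulated data for the 24 Oregon houses, and an equal-weight baseline rather than the optimized weights of the PACPIM model.

```python
# Minimal mean-variance sketch of a community PV portfolio: hourly output of
# individual systems is combined, and portfolio variance is w' S w.
import numpy as np

rng = np.random.default_rng(1)
hourly = rng.gamma(shape=2.0, scale=1.5, size=(8760, 24))  # kWh per system

mu = hourly.mean(axis=0)           # expected hourly output per system
S = np.cov(hourly, rowvar=False)   # covariance of outputs across systems

w = np.full(24, 1 / 24)            # equal-weight portfolio as a baseline
port_mean = w @ mu
port_var = w @ S @ w

# Diversification benefit: portfolio variance vs. average single-system variance
print(port_mean, port_var, np.diag(S).mean())
```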

  16. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment

    Directory of Open Access Journals (Sweden)

    Mahmoud Shakouri

    2016-03-01

    Full Text Available The amount of electricity generated by photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be constructed that takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The Matlab code developed to construct optimized portfolios is also provided in the Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems. Keywords: Community solar, Photovoltaic system, Portfolio theory, Energy optimization, Electricity volatility

  17. 使用SPSS软件进行多因素方差分析%Application of SPSS Software in Multivariate Analysis of Variance

    Institute of Scientific and Technical Information of China (English)

    龚江; 石培春; 李春燕

    2012-01-01

    Taking a two-factor completely randomized experiment with replication as an example, this paper describes the detailed procedure for analysis of variance in the SPSS software, including data input, identification of the sources of variation, the analysis of variance results, and significance testing. Precautions for conducting analysis of variance in SPSS are also discussed, providing a reference for researchers using the software for this purpose.
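
For readers outside SPSS, the same two-factor completely randomized analysis with replication can be reproduced in Python; the data frame below is fabricated purely to show the mechanics (data input, sources of variation, F tests).

```python
# Two-factor ANOVA with replication, analogous to the SPSS workflow the
# record describes, using statsmodels. Data are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "A": np.repeat(["a1", "a2"], 12),
    "B": np.tile(np.repeat(["b1", "b2", "b3"], 4), 2),  # 4 replicates per cell
})
df["y"] = rng.normal(size=24) + (df["A"] == "a2") * 1.5  # built-in A effect

model = ols("y ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # sources of variation, F tests, p-values
```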

  18. 90-day Readmission After Lumbar Spinal Fusion Surgery in New York State Between 2005 and 2014: A 10-year Analysis of a Statewide Cohort.

    Science.gov (United States)

    Baaj, Ali A; Lang, Gernot; Hsu, Wei-Chun; Avila, Mauricio J; Mao, Jialin; Sedrakyan, Art

    2017-11-15

    MINI: We assessed 90-day readmission and evaluated risk factors associated with readmission after lumbar spinal fusion surgery in New York State. The overall 90-day readmission rate was 24.8%. Age, sex, race, insurance, procedure, number of operated spinal levels, health service area, and comorbidities are major risk factors for 90-day readmission. Retrospective cohort study. The aim of this study was to assess 90-day readmission and evaluate risk factors associated with readmission after lumbar fusion in New York State. Readmission is becoming an important metric for quality and efficiency of health care. Readmission and its predictors following spine surgery are overall poorly understood, and limited evidence is available specifically for lumbar fusion. The New York Statewide Planning and Research Cooperative System (SPARCS) was utilized to capture patients undergoing lumbar fusion from 2005 to 2014. The temporal trend in 90-day readmission was assessed using the Cochran-Armitage test. Logistic regression was used to examine predictors associated with 90-day readmission. There were 86,869 patients included in this cohort study. The overall 90-day readmission rate was 24.8%. In a multivariable analysis model, age (comparing ≥75 years with younger groups), sex, race, insurance, procedure, number of operated spinal levels, health service area (odds ratio [OR] relative to the New York-Pennsylvania border area: 0.67, 95% CI: 0.61-0.73), and comorbidity, i.e., coronary artery disease (OR: 1.26, 95% CI: 1.19-1.33), were significantly associated with 90-day readmission. Directions of the odds ratios for these factors were consistent after stratification by procedure type. Age, sex, race, insurance, procedure, number of operated spinal levels, HSA, and comorbidities are major risk factors for 90-day readmission. Our study allows risk calculation to identify high-risk patients before spinal fusion surgery in order to prevent early readmission, improve quality of care, and reduce health care expenditures. Level of Evidence: 3.

  19. Combining analysis of variance and three‐way factor analysis methods for studying additive and multiplicative effects in sensory panel data

    DEFF Research Database (Denmark)

    Romano, Rosaria; Næs, Tormod; Brockhoff, Per Bruun

    2015-01-01

    Data from descriptive sensory analysis are essentially three‐way data with assessors, samples and attributes as the three ways in the data set. Because of this, there are several ways that the data can be analysed. The paper focuses on the analysis of sensory characteristics of products while...... in the use of the scale with reference to the existing structure of relationships between sensory descriptors. The multivariate assessor model will be tested on a data set from milk. Relations between the proposed model and other multiplicative models like parallel factor analysis and analysis of variance...

  20. Analysis of degree of nonlinearity and stochastic nature of HRV signal during meditation using delay vector variance method.

    Science.gov (United States)

    Reddy, L Ram Gopal; Kuntamalla, Srinivas

    2011-01-01

    Heart rate variability analysis is fast gaining acceptance as a potential non-invasive means of autonomic nervous system assessment in research as well as clinical domains. In this study, a new nonlinear analysis method is used to detect the degree of nonlinearity and the stochastic nature of heart rate variability signals during two forms of meditation (Chi and Kundalini). The data obtained from an online and widely used public database (the MIT/BIH PhysioNet database) are used in this study. The method used is the delay vector variance (DVV) method, a unified method for detecting the presence of determinism and nonlinearity in a time series based upon the examination of the local predictability of a signal. From the results it is clear that there is a significant change in the nonlinearity and stochastic nature of the signal before and during the meditation (p value > 0.01). During Chi meditation there is an increase in the stochastic nature and a decrease in the nonlinear nature of the signal. There is a significant decrease in the degree of nonlinearity and stochastic nature during Kundalini meditation.
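
A much-reduced sketch of the DVV idea follows: embed the series in delay vectors, collect the "targets" of vectors lying within a span of each reference vector, and compare their mean variance with the overall signal variance (values near 1 suggest a stochastic, linear signal). Embedding dimension, span grid and the minimum-set-size threshold are illustrative choices, and synthetic data replace the PhysioNet recordings.

```python
# Minimal delay vector variance (DVV) sketch for an RR-interval series.
import numpy as np

def dvv(x, m=3, n_spans=20):
    """Return span sizes and normalized mean target variances."""
    x = (x - x.mean()) / x.std()
    N = len(x) - m
    dv = np.array([x[i:i + m] for i in range(N)])    # delay vectors
    targets = x[m:]                                   # value following each vector
    d = np.linalg.norm(dv[:, None, :] - dv[None, :, :], axis=2)
    spans = np.linspace(d.mean() - 2 * d.std(), d.mean() + 2 * d.std(), n_spans)
    out = []
    for r in spans:
        # variance of targets whose delay vectors lie within distance r
        var_sets = [targets[row <= r].var() for row in d if (row <= r).sum() > 30]
        out.append(np.mean(var_sets) if var_sets else np.nan)
    return spans, np.array(out) / targets.var()

rr = np.random.default_rng(3).normal(0.8, 0.05, 400)  # stand-in RR intervals
spans, sigma2 = dvv(rr)
print(sigma2)  # values near 1 across spans indicate a stochastic, linear signal
```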

  1. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...

  2. Statewide mesoscopic simulation for Wyoming.

    Science.gov (United States)

    2013-10-01

    This study developed a mesoscopic simulator capable of representing both city-level and statewide roadway networks. The key features of such models are the integration of (i) a traffic flow model which is efficient enough to scale to larg...

  3. Successful Statewide Walking Program Websites

    Science.gov (United States)

    Teran, Bianca Maria; Hongu, Nobuko

    2012-01-01

    Statewide Extension walking programs are making an effort to increase physical activity levels in America. An investigation of all 20 of these programs revealed that 14 use websites as marketing and educational tools, which could prove useful as the popularity of Internet communities continues to grow. Website usability information and an analysis…

  4. Variances in family carers' quality of life based on selected relationship and caregiving indicators: A quantitative secondary analysis.

    Science.gov (United States)

    Naef, Rahel; Hediger, Hannele; Imhof, Lorenz; Mahrer-Imhof, Romy

    2017-06-01

    To determine subgroups of family carers based on family relational and caregiving variables and to explore group differences in relation to selected carer outcomes. Family caregiving in later life holds a myriad of positive and negative outcomes for family members' well-being. However, factors that constitute family carers' experience and explain variances are less well understood. A secondary data analysis using cross-sectional data from a controlled randomised trial with community-dwelling people 80 years or older and their families. A total of 277 paired data sets of older persons and their family carers were included in the analysis. Data were collected via mailed questionnaires and a geriatric nursing assessment. A two-step cluster analysis was performed to determine subgroups. To discern group differences, appropriate tests for differences with Bonferroni correction were used. Two family carer groups were identified. The low-intensity caregiver group (57% of carers) reported high relationship quality and self-perceived ease of caregiving. In contrast, the high-intensity caregiver group (43% of carers) experienced significantly lower relationship quality, felt less prepared and appraised caregiving as more difficult, time intensive and burdensome. The latter cared for older, frailer and more dependent octogenarians and had significantly lower levels of quality of life and self-perceived health compared to the low-intensity caregiver group. A combination of family relational and caregiving variables differentiates those at risk for adverse outcomes. Family carers of frailer older people tend to experience higher strain, lower relationship quality and ability to work together as a family. Nurses should explicitly assess family carer needs, in particular when older persons are frail. Family carer support interventions should address caregiving preparedness, demand and burden, as well as concerns situated in the relationship. © 2016 John Wiley & Sons Ltd.
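
As a rough stand-in for the two-step cluster analysis (approximated here by standardisation plus k-means, which is not the SPSS algorithm), the sketch below groups carers on fabricated relational and caregiving indicators and then compares a fabricated outcome between groups with a Bonferroni-adjusted test.

```python
# Subgrouping carers and testing group differences; all data are fabricated.
import numpy as np
from scipy import stats
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(14)
X = rng.normal(size=(277, 4))   # e.g. relationship quality, preparedness,
                                # hours of care, perceived difficulty
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))

qol = rng.normal(70, 10, 277) - 5 * labels   # fabricated quality-of-life score
t, p = stats.ttest_ind(qol[labels == 0], qol[labels == 1])
alpha = 0.05 / 4                             # Bonferroni over 4 outcomes
print(f"t = {t:.2f}, p = {p:.4f}, significant at {alpha:.4f}: {p < alpha}")
```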

  5. Association analysis using next-generation sequence data from publicly available control groups: the robust variance score statistic.

    Science.gov (United States)

    Derkach, Andriy; Chiang, Theodore; Gong, Jiafen; Addis, Laura; Dobbins, Sara; Tomlinson, Ian; Houlston, Richard; Pal, Deb K; Strug, Lisa J

    2014-08-01

    Sufficiently powered case-control studies with next-generation sequence (NGS) data remain prohibitively expensive for many investigators. If feasible, a more efficient strategy would be to include publicly available sequenced controls. However, these studies can be confounded by differences in sequencing platform; alignment, single nucleotide polymorphism and variant calling algorithms; read depth; and selection thresholds. Assuming one can match cases and controls on the basis of ethnicity and other potential confounding factors, and one has access to the aligned reads in both groups, we investigate the effect of systematic differences in read depth and selection threshold when comparing allele frequencies between cases and controls. We propose a novel likelihood-based method, the robust variance score (RVS), which substitutes genotype calls with their expected values given the observed sequence data. We show theoretically that the RVS eliminates read depth bias in the estimation of minor allele frequency. We also demonstrate that, using simulated and real NGS data, the RVS method controls Type I error and has comparable power to the 'gold standard' analysis with the true underlying genotypes for both common and rare variants. An RVS R script and instructions can be found at strug.research.sickkids.ca, and at https://github.com/strug-lab/RVS. lisa.strug@utoronto.ca Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved.
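
The substitution at the heart of the RVS can be sketched in a few lines: a hard genotype call is replaced by the posterior expected allele dosage computed from genotype likelihoods and Hardy-Weinberg priors. The likelihoods and minor allele frequency below are invented; this is not the authors' released R implementation.

```python
# Expected genotype dosage E[G | reads] from genotype likelihoods and
# Hardy-Weinberg priors at an assumed minor allele frequency.
import numpy as np

def expected_dosage(lik, maf):
    """lik: P(reads | G=g) for g = 0,1,2 minor-allele copies; returns E[G|reads]."""
    prior = np.array([(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2])
    post = lik * prior
    post /= post.sum()
    return post @ np.array([0.0, 1.0, 2.0])

# Low-depth reads leave the genotype uncertain; the expectation reflects it.
print(expected_dosage(np.array([0.10, 0.85, 0.05]), maf=0.2))
```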

  6. Evaluation of Ensiled Brewer's Grain in the Diet of Piglets by One Way Multiple Analysis of Variance, MANOVA

    Directory of Open Access Journals (Sweden)

    Amang A Mbang, J.

    2007-01-01

    Full Text Available The basic purpose of feeding trials is to find the optimum level of feed ingredients which gives the highest economic return to the farmer. This can be achieved through the estimation and comparison of means of different rations. The example considered here is a study of the incorporation of different levels of ensiled brewer's grains in the diet of 24 hybrid weaned piglets (Landrace x Duroc x Berkshire x Large White). They were randomly divided into four groups with three replicates of two piglets per pen. They were fed diets with 0, 10, 20 and 30% incorporation of ensiled brewer's grains on a dry matter basis during the post-weaning period, followed by 0, 30, 40 and 50% during the growing period and 0, 50, 60 and 70% during the finishing period. There is one explanatory variable, initial weight, and four post-treatment outcome variables recorded per piglet: final weight, dry matter consumption, weight gain and index of consumption. A design for comparing several multivariate treatment means is adopted. The MANOVA (multivariate analysis of variance) table is obtained for each phase, treatment differences are tested using Wilks' lambda distribution, and treatment effects are located using a confidence interval method based on the MANOVA. This model has the advantage of treating the responses of all variables jointly through the matrix of sums of squares, and more precisely separates the means for the different percentages of ensiled brewer's grain.
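
The record's analysis can be mirrored with statsmodels' MANOVA class; the piglet measurements below are fabricated stand-ins with the same structure (four rations, four outcome variables), so only the mechanics, not the results, correspond to the study.

```python
# One-way MANOVA with Wilks' lambda on four outcomes across four rations.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "ration": np.repeat(["0%", "10%", "20%", "30%"], 6),
    "final_weight": rng.normal(60, 5, 24),
    "dm_consumption": rng.normal(2.2, 0.3, 24),
    "weight_gain": rng.normal(0.55, 0.08, 24),
    "feed_index": rng.normal(3.8, 0.4, 24),
})

mv = MANOVA.from_formula(
    "final_weight + dm_consumption + weight_gain + feed_index ~ ration", data=df)
print(mv.mv_test())  # includes Wilks' lambda for the ration effect
```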

  7. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  8. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  9. How to assess intra- and inter-observer agreement with quantitative PET using variance component analysis: a proposal for standardisation

    International Nuclear Information System (INIS)

    Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen; Halekoh, Ulrich; Høilund-Carlsen, Poul Flemming

    2016-01-01

    Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects the deviation of groups of measurements from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or the Bland-Altman plot. We suggest variance component analysis (VCA) to estimate the influence of errors due to single elements of a PET scan (scanner, time point, observer, etc.), to express the composite uncertainty of repeated measurements, and to obtain relevant repeatability coefficients (RCs), which have a unique relation to Bland-Altman plots. Here, we present this approach for the assessment of intra- and inter-observer variation with PET/CT, exemplified with data from two clinical studies. In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed twice by the same observer to study intra-observer agreement. In study 2, 14 patients with glioma were scanned up to five times. The resulting 49 scans were assessed by three observers to examine inter-observer agreement. Outcome variables were SUVmax in study 1 and cerebral total hemispheric glycolysis (THG) in study 2. In study 1, we found an RC of 2.46, equalling half the width of the Bland-Altman limits of agreement. In study 2, the RC for identical conditions (same scanner, patient, time point, and observer) was 2392; allowing for different scanners increased the RC to 2543. Inter-observer differences were negligible compared to differences owing to other factors; between observers 1 and 2: −10 (95 % CI: −352 to 332), and between observers 1 and 3: 28 (95 % CI: −313 to 370). VCA is an appealing approach for weighing different sources of variation against each other, summarised as RCs. The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components. The online version of this article (doi:10
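
A minimal numerical sketch of the link the paper exploits between variance components and repeatability coefficients: for two reads per patient by the same observer, the within-patient mean square estimates the measurement-error variance, and RC = 1.96·√2·σ_within equals half the width of the Bland-Altman limits of agreement. The SUVmax values below are simulated, not the study data.

```python
# Variance component and repeatability coefficient (RC) for paired readings.
import numpy as np

rng = np.random.default_rng(5)
true = rng.normal(8, 2, 30)                          # patient-level SUVmax
reads = true[:, None] + rng.normal(0, 0.9, (30, 2))  # two reads per patient

# One-way random-effects ANOVA: the within-patient mean square estimates the
# measurement-error variance when the same observer reads twice.
within_var = ((reads - reads.mean(axis=1, keepdims=True)) ** 2).sum() / 30
rc = 1.96 * np.sqrt(2 * within_var)
print(f"within-observer variance {within_var:.2f}, RC {rc:.2f}")
```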

  10. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean-variance approach

    DEFF Research Database (Denmark)

    Kitzing, Lena

    2014-01-01

    . Using cash flow analysis, Monte Carlo simulations and mean-variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same...

  11. Development of phased mission analysis program with Monte Carlo method. Improvement of the variance reduction technique with biasing towards top event

    International Nuclear Information System (INIS)

    Yang Jinan; Mihara, Takatsugu

    1998-12-01

    This report presents a variance reduction technique to estimate the reliability and availability of highly complex systems during phased mission time using Monte Carlo simulation. In this study, we introduced a variance reduction technique based on a concept of distance between the present system state and the cut set configurations. Using this technique, it becomes possible to bias the transition from operating states to failed states of components towards the closest cut set, so that a component failure can drive the system towards a cut set configuration more effectively. JNC developed the PHAMMON (Phased Mission Analysis Program with Monte Carlo Method) code, which involved two kinds of variance reduction techniques: (1) forced transition, and (2) failure biasing. However, these techniques did not guarantee an effective reduction in variance. For further improvement, a variance reduction technique incorporating the distance concept was introduced to the PHAMMON code and numerical calculations were carried out for different design cases of the decay heat removal system in a large fast breeder reactor. Our results indicate that the addition of this distance concept is an effective means of further reducing the variance. (author)
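
Failure biasing, the baseline technique the report improves on, can be shown in a toy two-train system: component failures are sampled with an inflated probability and each history carries a likelihood-ratio weight. All probabilities below are invented, and the distance-to-cut-set refinement described in the report is not implemented here.

```python
# Toy failure-biasing estimate of mission unreliability for two parallel trains.
import numpy as np

rng = np.random.default_rng(6)
p_fail, p_bias = 1e-3, 5e-2   # true and biased per-step failure probabilities
steps, trains, runs = 50, 2, 20_000

total = 0.0
for _ in range(runs):
    weight, down = 1.0, [False] * trains
    for _ in range(steps):
        for t in range(trains):
            if not down[t]:
                if rng.random() < p_bias:        # sample the biased transition
                    down[t] = True
                    weight *= p_fail / p_bias    # likelihood-ratio correction
                else:
                    weight *= (1 - p_fail) / (1 - p_bias)
        if all(down):                            # minimal cut set reached
            total += weight
            break

print("unreliability estimate:", total / runs)
# analytic value for comparison: (1 - (1 - p_fail)**steps)**2 ~ 2.4e-3
```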

  12. The median hazard ratio: a useful measure of variance and general contextual effects in multilevel survival analysis.

    Science.gov (United States)

    Austin, Peter C; Wagner, Philippe; Merlo, Juan

    2017-03-15

    Multilevel data occur frequently in many research areas, such as health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models (MLRM). MLRM incorporate cluster-specific random effects which allow one to partition the total individual variance into between-cluster variation and between-individual variation. Statistically, MLRM account for the dependency of the data within clusters and provide correct estimates of uncertainty around regression coefficients. Substantively, the magnitude of the effect of clustering provides a measure of the General Contextual Effect (GCE). When outcomes are binary, the GCE can also be quantified by measures of heterogeneity like the Median Odds Ratio (MOR) calculated from a multilevel logistic regression model. Time-to-event outcomes within a multilevel structure occur commonly in epidemiological and medical research. However, the Median Hazard Ratio (MHR), which corresponds to the MOR in multilevel (i.e., 'frailty') Cox proportional hazards regression, is rarely used. Analogously to the MOR, the MHR is the median relative change in the hazard of the occurrence of the outcome when comparing identical subjects from two randomly selected different clusters that are ordered by risk. We illustrate the application and interpretation of the MHR in a case study analyzing the hazard of mortality in patients hospitalized for acute myocardial infarction at hospitals in Ontario, Canada. We provide R code for computing the MHR. The MHR is a useful and intuitive measure for expressing cluster heterogeneity in the outcome and, thereby, estimating general contextual effects in multilevel survival analysis. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
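
For a multilevel Cox model with a normally distributed cluster random effect of variance σ², the MHR takes the same closed form as the MOR: MHR = exp(√(2σ²)·Φ⁻¹(0.75)). The authors provide R code; a minimal Python equivalent of that formula is sketched below with an illustrative variance value.

```python
# Median hazard ratio from the cluster-level random-effect variance.
import numpy as np
from scipy.stats import norm

def median_hazard_ratio(sigma2: float) -> float:
    """Median HR between identical patients from two randomly chosen clusters."""
    return float(np.exp(np.sqrt(2.0 * sigma2) * norm.ppf(0.75)))

print(median_hazard_ratio(0.25))  # e.g. cluster variance 0.25 -> MHR ~ 1.61
```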

  13. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  14. Power generation mixes evaluation applying the mean-variance theory. Analysis of the choices for Japanese energy policy

    International Nuclear Information System (INIS)

    Tabaru, Yasuhiko; Nonaka, Yuzuru; Nonaka, Shunsuke; Endou, Misao

    2013-01-01

    Optimal Japanese power generation mixes in 2030, for both economic efficiency and energy security (less cost variance risk), are evaluated by applying the mean-variance portfolio theory. Technical assumptions, including remaining generation capacity out of the present generation mix, future load duration curve, and Research and Development risks for some renewable energy technologies in 2030, are taken into consideration as either the constraints or parameters for the evaluation. Efficiency frontiers, which consist of the optimal generation mixes for several future scenarios, are identified, taking not only power balance but also capacity balance into account, and are compared with three power generation mixes submitted by the Japanese government as 'the choices for energy and environment'. (author)

  15. Under Pressure: Financial Effect of the Hospital-Acquired Conditions Initiative-A Statewide Analysis of Pressure Ulcer Development and Payment.

    Science.gov (United States)

    Meddings, Jennifer; Reichert, Heidi; Rogers, Mary A M; Hofer, Timothy P; McMahon, Laurence F; Grazier, Kyle L

    2015-07-01

    To assess the financial effect of the 2008 Hospital-Acquired Conditions Initiative (HACI) pressure ulcer payment changes on Medicare, other payers, and hospitals. Retrospective before-and-after study of all-payer statewide administrative data for more than 2.4 million annual adult discharges in 2007 and 2009 using the Healthcare Cost and Utilization Project State Inpatient Datasets for California. How often and by how much the 2008 payment changes for pressure ulcers affected hospital payment was assessed. Nonfederal acute care California hospitals (N = 311). Adults discharged from acute-care hospitals. Pressure ulcer rates and hospital payment changes. Hospital-acquired pressure ulcer rates were low in 2007 (0.28%) and 2009 (0.27%); present-on-admission pressure ulcer rates increased from 2.3% in 2007 to 3.0% in 2009. According to clinical stage of pressure ulcer (available in 2009), hospital-acquired Stage III and IV ulcers occurred in 603 discharges (0.02%); 60,244 discharges (2.42%) contained other pressure ulcer diagnoses. Payment removal for Stage III and IV hospital-acquired ulcers reduced payment in 75 (0.003%) discharges, for a statewide payment decrease of $310,444 (0.001%) for all payers and $199,238 (0.001%) for Medicare. For all other pressure ulcers, the Hospital-Acquired Conditions Initiative reduced hospital payment in 20,246 (0.81%) cases (including 18,953 cases with present-on-admission ulcers), reducing statewide payment by $62,538,586 (0.21%) for all payers and $47,237,984 (0.32%) for Medicare. The total financial effect of the 2008 payment changes for pressure ulcers was negligible. Most payment decreases occurred by removal of comorbidity payments for present-on-admission pressure ulcers other than Stages III and IV. The removal of payment for hospital-acquired Stage III and IV ulcers by implementation of the HACI policy was 1/200th that of the removal of payment for other types of pressure ulcers that occurred in implementation of the

  16. Analisis Ragam dan Peragam Bobot Badan Kambing Peranakan Etawa (ANALYSIS VARIANCE AND COVARIANCE OF BODY WEIGHT OF ETTAWA GRADE GOAT

    Directory of Open Access Journals (Sweden)

    Siti Hidayati

    2015-05-01

    Full Text Available The aims of this study were (1) to analyze the phenotypic performance of Ettawa Grade (EG) goats; (2) to estimate the heritability of birth weight (BW), weaning weight (WW) and yearling weight (YW), and the genetic correlations between body weights in the three different periods; and (3) to analyze the variance and covariance components of body weight. The material used comprised the existing records of 437 EG goats at Balai Pembibitan Ternak Unggul dan Hijauan Pakan Ternak Pelaihari, South Kalimantan. These goats originated from crosses between 19 males and 216 females over the period 2009-2012. A nested design was used to estimate the phenotypic correlations, heritabilities and genetic correlations. Variance components were determined from the heritability estimation, while covariance components were determined from the genetic correlation estimation. The phenotypic correlations between BW and WW, between BW and YW, and between WW and YW were 0.19 (low), 0.31 (medium) and 0.65 (high), respectively. The heritabilities of BW, WW and YW were 0.43±0.23 (high), 0.27±0.19 (medium) and 1.01±0.38 (outside the valid range for h2), respectively. The genetic correlations between BW and WW, between BW and YW, and between WW and YW were -0.04 (negative, low), 0.49 (positive, medium) and -0.41 (negative, medium), respectively. The variance components attributable to buck, ewes and kid were 10.76%, 37.16% and 52.09% for BW; 6.67%, 38.52% and 54.81% for WW; and 25.15%, 58.37% and 16.43% for YW, respectively. The covariance components attributable to buck, ewes and kid were -3.91%, 66.45% and 37.46% between BW and WW; 65.68%, 16.50% and 17.82% between BW and YW; and -5.14%, 83.87% and 21.28% between WW and YW, respectively. In conclusion, the variance components of ewes and kid were high for body weight at birth and at weaning time; therefore, selection should be conducted for body weight at birth and weaning time.

  17. Evaluation of the mineralogical characterization of several smectite clay deposits of the state of Paraiba, Brazil using statistical analysis of variance

    International Nuclear Information System (INIS)

    Gama, A.J.A.; Menezes, R.R.; Neves, G.A.; Brito, A.L.F. de

    2015-01-01

    Currently, over 80% of the industrialized bentonite clay produced in Brazil in sodium form for use in various industrial applications comes from the deposits in Boa Vista-PB. New bentonite deposits were recently discovered in the municipalities of Cubati-PB, Pedra Lavrada-PB, Sossego-PB and, most recently, Olivedos-PB, requiring systematic studies to develop their full industrial potential. This study therefore aimed to evaluate the chemical characterization of smectite clays from several deposits in various regions of the state of Paraíba through statistical analysis of variance. Chemical composition was determined by X-ray fluorescence (EDX). Statistical analysis of variance and Tukey tests were then carried out using the MINITAB® 17.0 statistical software. The results showed that the chemical compositions of the bentonite clays from the new deposits contained different amounts of silica, aluminum, magnesium and calcium relative to the Boa Vista clays and to imported clays. (author)
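
The ANOVA-plus-Tukey workflow described here, run in MINITAB by the authors, looks as follows in Python with invented silica contents for four hypothetical deposits.

```python
# One-way ANOVA followed by Tukey HSD pairwise comparisons of deposit means.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(11)
df = pd.DataFrame({
    "deposit": np.repeat(["BoaVista", "Cubati", "PedraLavrada", "Sossego"], 5),
    "sio2": np.concatenate([rng.normal(m, 1.2, 5) for m in (52, 55, 54, 58)]),
})

groups = [g["sio2"].values for _, g in df.groupby("deposit")]
print(stats.f_oneway(*groups))                       # one-way ANOVA
print(pairwise_tukeyhsd(df["sio2"], df["deposit"]))  # Tukey HSD pairs
```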

  18. Managing risk and expected financial return from selective expansion of operating room capacity: mean-variance analysis of a hospital's portfolio of surgeons.

    Science.gov (United States)

    Dexter, Franklin; Ledolter, Johannes

    2003-07-01

    Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk. Surgeon and subspecialty specific hospital financial data are uncertain, a fact that should be taken into account when making decisions about expanding operating room capacity. We show that mean-variance portfolio analysis can incorporate this uncertainty, thereby guiding operating room management decision-making and reducing the chance of a strategic decision being made based on spurious information.

  19. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean–variance approach

    International Nuclear Information System (INIS)

    Kitzing, Lena

    2014-01-01

    Different support instruments for renewable energy expose investors differently to market risks. This has implications on the attractiveness of investment. We use mean–variance portfolio analysis to identify the risk implications of two support instruments: feed-in tariffs and feed-in premiums. Using cash flow analysis, Monte Carlo simulations and mean–variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same attractiveness for investment, because they expose investors to less market risk. These risk implications should be considered when designing policy schemes. - Highlights: • Mean–variance portfolio approach to analyse risk implications of policy instruments. • We show that feed-in tariffs require lower support levels than feed-in premiums. • This systematic effect stems from the lower exposure of investors to market risk. • We created a stochastic model for an exemplary offshore wind park in West Denmark. • We quantify risk-return, Sharpe Ratios and differences in required support levels
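
The paper's pipeline (cash-flow simulation, then mean-variance comparison) can be caricatured in a few lines: under a fixed tariff the revenue stream is insulated from price scenarios, while under a premium it is not. All numbers below are illustrative, not the Danish offshore case-study inputs.

```python
# Stylised Monte Carlo comparison of feed-in tariff (FIT) vs. premium (FIP).
import numpy as np

rng = np.random.default_rng(10)
n = 10_000
price = rng.normal(50, 15, n).clip(0)   # EUR/MWh market price scenarios
output = 350_000                        # MWh/yr, fixed for simplicity
cost = 20_000_000                       # EUR/yr, fixed for simplicity

fit_level, fip_level = 95.0, 45.0       # support levels, EUR/MWh
rev_fit = np.full(n, fit_level) * output - cost   # tariff replaces the price
rev_fip = (price + fip_level) * output - cost     # premium adds to the price

for name, rev in [("FIT", rev_fit), ("FIP", rev_fip)]:
    print(name, f"mean {rev.mean():.3e} EUR, std {rev.std():.3e} EUR")
# The tariff's std collapses to ~0 here: the same mean return can be reached
# with a lower support level once investors bear no price risk.
```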

  20. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
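
The ordered-subsets update the record benchmarks has a compact generic form: each sub-iteration applies the ML-EM multiplicative update using only one subset of projections. The toy system matrix below is random and includes no attenuation or detector response model, so it is a sketch of OS-EM itself, not of the study's reconstruction.

```python
# Generic OS-EM iterations on a tiny made-up tomographic system.
import numpy as np

rng = np.random.default_rng(12)
A = rng.random((40, 16))              # system matrix: 40 rays x 16 pixels
x_true = rng.random(16) * 10
y = rng.poisson(A @ x_true)           # noisy projection data

x = np.ones(16)                       # uniform initial image
subsets = np.array_split(np.arange(40), 4)
for _ in range(5):                    # 5 full passes over the data
    for idx in subsets:               # one OS-EM sub-iteration per subset
        As, ys = A[idx], y[idx]
        ratio = ys / np.maximum(As @ x, 1e-12)
        x *= (As.T @ ratio) / As.sum(axis=0)   # multiplicative EM update

print(np.round(x, 2))
print(np.round(x_true, 2))
```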

  1. What do differences between multi-voxel and univariate analysis mean? How subject-, voxel-, and trial-level variance impact fMRI analysis.

    Science.gov (United States)

    Davis, Tyler; LaRocque, Karen F; Mumford, Jeanette A; Norman, Kenneth A; Wagner, Anthony D; Poldrack, Russell A

    2014-08-15

    Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Monte Carlo Simulations Comparing Fisher Exact Test and Unequal Variances t Test for Analysis of Differences Between Groups in Brief Hospital Lengths of Stay.

    Science.gov (United States)

    Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U

    2017-12-01

    We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) the percentage of hospital stays that are overnight. These 2 end points are suitable when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. The unequal variances t test (Welch method) and the Fisher exact test were both conservative (ie, type I error rate less than the nominal level). The Wilcoxon rank sum test was included as a comparator; its type I error rates did not differ from the nominal levels of 0.05 or 0.01. The Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; the estimated odds ratio for obtaining P < .05 with the Fisher exact test versus the unequal variances t test was 1.94 (95% confidence interval, 1.31-3.01). The Fisher exact test and the Wilcoxon-Mann-Whitney test had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS to be used as a secondary end point of economic interest, there is currently considerable interest in the planned analysis being for the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either the Welch method or the Wilcoxon rank sum test.
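
The two planned analyses the study compares can be reproduced mechanically with SciPy on invented LOS samples: a Welch t test on mean LOS and a Fisher exact test on the binary end point LOS ≤ 1 midnight.

```python
# Welch t test vs. Fisher exact test on two hospitals' lengths of stay.
# LOS distributions are invented, not the 5052-discharge dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
los_a = rng.poisson(2.0, 120)   # hospital A lengths of stay (midnights)
los_b = rng.poisson(2.6, 140)   # hospital B

# (1) unequal variances (Welch) t test on mean LOS
t, p_welch = stats.ttest_ind(los_a, los_b, equal_var=False)

# (2) Fisher exact test on the binary end point LOS <= 1 midnight
table = [[(los_a <= 1).sum(), (los_a > 1).sum()],
         [(los_b <= 1).sum(), (los_b > 1).sum()]]
odds, p_fisher = stats.fisher_exact(table)

print(f"Welch p = {p_welch:.3f}, Fisher p = {p_fisher:.3f}")
```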

  3. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic

  4. Impact of Helmet Use on Injury and Financial Burden of Motorcycle and Moped Crashes in Hawai‘i: Analysis of a Linked Statewide Database

    Science.gov (United States)

    Castel, Nikki A; Wong, Linda L; Steinemann, Susan

    2016-01-01

    Helmet use reduces injury severity, disability, hospital length of stay, and hospital charges in motorcycle riders. The public absorbs billions of dollars annually in hospital charges for unhelmeted, uninsured motorcycle riders. We sought to quantify, on a statewide level, the healthcare burden of unhelmeted motorcycle and moped riders. We examined 1,965 emergency medical service (EMS) reports from motorcycle and moped crashes in Hawai‘i between 2007-2009. EMS records were linked to hospital medical records to assess associations between vehicle type, helmet use, medical charges, diagnoses, and final disposition. Unhelmeted riders of either type of vehicle suffered more head injuries, especially skull fractures (adjusted odds ratio (OR) of 4.48). Hospital charges were higher for unhelmeted motorcycle and moped riders, with a significant (P = .006) difference between helmeted ($27,176) and unhelmeted ($40,217) motorcycle riders. Unhelmeted riders were twice as likely to self-pay (19.3%, versus 9.8% of helmeted riders), and more likely to have Medicaid or a similar income-qualifying insurance plan (13.5% versus 5.0%, respectively). Protective associations with helmet use are stronger among motorcyclists than moped riders, suggesting the protective effect is augmented in higher speed crashes. The public financial burden is higher from unhelmeted riders, who sustain more severe injuries and are less likely to be insured. PMID:27980882

  5. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.

  6. Copy-move forgery detection through stationary wavelets and local binary pattern variance for forensic analysis in digital images.

    Science.gov (United States)

    Mahmood, Toqeer; Irtaza, Aun; Mehmood, Zahid; Tariq Mahmood, Muhammad

    2017-10-01

    The most common form of image tampering, often done for malicious purposes, is to copy one region of an image and paste it elsewhere to hide another region. As both regions usually have the same texture properties, the artifact is invisible to viewers, and the credibility of the image becomes questionable in proof-centered applications. Hence, means are required to validate the integrity of the image and identify the tampered regions. This study presents an efficient approach to copy-move forgery detection (CMFD) based on the local binary pattern variance (LBPV) computed over the low-frequency approximation components of the stationary wavelet transform. The CMFD technique presented in this paper is applied over circular regions to better address possible post-processing operations. The proposed technique is evaluated on the CoMoFoD and Kodak lossless true color image (KLTCI) datasets in the presence of translation, flipping, blurring, rotation, scaling, color reduction, brightness change and multiple forged regions in an image. The evaluation reveals the prominence of the proposed technique compared to the state of the art. Consequently, the proposed technique can reliably be applied to detect modified regions, with benefits in journalism, law enforcement, the judiciary, and other proof-critical domains. Copyright © 2017 Elsevier B.V. All rights reserved.
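
The descriptor stage can be sketched with scikit-image, whose local_binary_pattern supports a rotation-invariant variance mode ("var"). The block-histogram pooling below is a simplified stand-in for the paper's circular-region matching, and a random image replaces the wavelet approximation band.

```python
# LBP-variance map plus simple block-histogram features for region matching.
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(13)
img = rng.random((128, 128))                       # stand-in approximation band
lbpv = local_binary_pattern(img, P=8, R=1.0, method="var")

def block_histograms(vmap, block=32, bins=16):
    """Histogram of LBPV values for each non-overlapping block."""
    finite = vmap[np.isfinite(vmap)]
    vmax = finite.max()
    feats = []
    for i in range(0, vmap.shape[0], block):
        for j in range(0, vmap.shape[1], block):
            patch = vmap[i:i + block, j:j + block]
            h, _ = np.histogram(patch[np.isfinite(patch)], bins=bins,
                                range=(0, vmax))
            feats.append(h / max(h.sum(), 1))
    return np.array(feats)

feats = block_histograms(lbpv)
print(feats.shape)   # (16, 16): one histogram per block, ready for matching
```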

  7. Intelligent Transportation Systems statewide architecture : final report.

    Science.gov (United States)

    2003-06-01

    This report describes the development of Kentucky's Statewide Intelligent Transportation Systems (ITS) Architecture. The process began with the development of an ITS Strategic Plan in 1997-2000. A Business Plan, developed in 2000-2001, translated t...

  8. WisDOT statewide customer satisfaction survey.

    Science.gov (United States)

    2013-02-01

    The purpose of this study was to develop and initiate a new customer satisfaction tool that would establish a set of baseline departmental performance measures and be sustainable for future use. ETC Institute completed a statewide customer survey...

  9. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  10. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    Science.gov (United States)

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.

  11. An alternative method for noise analysis using pixel variance as part of quality control procedures on digital mammography systems.

    NARCIS (Netherlands)

    Bouwman, R.; Young, K.; Lazzari, B.; Ravaglia, V.; Broeders, M.J.M.; Engen, R. van

    2009-01-01

    According to the European Guidelines for quality assured breast cancer screening and diagnosis, noise analysis is one of the measurements that needs to be performed as part of quality control procedures on digital mammography systems. However, the method recommended in the European Guidelines does

  12. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities for judging survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  13. An alternative method for noise analysis using pixel variance as part of quality control procedures on digital mammography systems

    International Nuclear Information System (INIS)

    Bouwman, R; Broeders, M; Van Engen, R; Young, K; Lazzari, B; Ravaglia, V

    2009-01-01

    According to the European Guidelines for quality assured breast cancer screening and diagnosis, noise analysis is one of the measurements that needs to be performed as part of quality control procedures on digital mammography systems. However, the method recommended in the European Guidelines does not discriminate sufficiently between systems with and without additional noise besides quantum noise. This paper presents an alternative and relatively simple method for noise analysis which can divide noise into electronic noise, structured noise and quantum noise. Quantum noise needs to be the dominant noise source in clinical images for optimal performance of a digital mammography system, and therefore the amount of electronic and structured noise should be minimal. For several digital mammography systems, the noise was separated into components based on the measured pixel value, the standard deviation (SD) of the image and the detector entrance dose. The results showed that differences between systems exist. Our findings confirm that the proposed method is able to discriminate between systems based on their noise performance and is able to detect possible quality problems. Therefore, we suggest replacing the current method for noise analysis described in the European Guidelines with the alternative method described in this paper.
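
One common way to operationalise such a separation is to fit the measured pixel variance as a quadratic in detector entrance dose: the constant term tracks electronic noise, the linear term quantum noise and the quadratic term structured noise. Whether this matches the paper's exact procedure is an assumption; the doses and coefficients below are invented.

```python
# Fit variance vs. dose with a quadratic and read off the noise components.
import numpy as np

dose = np.array([10, 25, 50, 100, 200, 400], dtype=float)  # uGy
# simulated measured variance: 4 (electronic) + 0.9*D (quantum) + 1e-4*D^2
rng = np.random.default_rng(8)
var = 4.0 + 0.9 * dose + 1e-4 * dose**2 + rng.normal(0, 1.0, dose.size)

c2, c1, c0 = np.polyfit(dose, var, deg=2)   # highest-degree coefficient first
print(f"electronic {c0:.2f}, quantum {c1:.3f}/uGy, structured {c2:.2e}/uGy^2")
# quantum noise dominates where c1*D exceeds both c0 and c2*D^2
```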

  14. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
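
    As a hedged sketch of the kind of computation involved (the abstract gives no algorithmic detail), the ratio of survey (between-site) to local (within-site) variance can be estimated from 5-fold local sampling and bootstrapped for a confidence interval. The site count and element values below are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical 5-fold sampling at 20 sites for one element (arbitrary units).
        sites = rng.normal(loc=50, scale=8, size=(20, 5))    # between-site spread
        sites += rng.normal(0, 4, size=sites.shape)          # local (within-site) spread

        local_var = sites.var(axis=1, ddof=1).mean()         # mean within-site variance
        survey_var = sites.mean(axis=1).var(ddof=1)          # between-site variance

        # Bootstrap the signal-to-noise ratio (survey / local variance) over sites.
        ratios = []
        for _ in range(2000):
            resample = sites[rng.integers(0, len(sites), len(sites))]
            ratios.append(resample.mean(axis=1).var(ddof=1) /
                          resample.var(axis=1, ddof=1).mean())
        lo, hi = np.percentile(ratios, [2.5, 97.5])
        print(f"S/N = {survey_var / local_var:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")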

  15. High-dimensional nested analysis of variance to assess the effect of production season, quality grade and steam pasteurization on the phenolic composition of fermented rooibos herbal tea.

    Science.gov (United States)

    Stanimirova, I; Kazura, M; de Beer, D; Joubert, E; Schulze, A E; Beelders, T; de Villiers, A; Walczak, B

    2013-10-15

    A nested analysis of variance combined with simultaneous component analysis, ASCA, was proposed to model high-dimensional chromatographic data. The data were obtained from an experiment designed to investigate the effect of production season, quality grade and post-production processing (steam pasteurization) on the phenolic content of the infusion of the popular herbal tea, rooibos, at 'cup-of-tea' strength. Specifically, a four-way analysis of variance where the experimental design involves nesting in two of the three crossed factors was considered. For the purpose of the study, batches of fermented rooibos plant material were sampled from each of four quality grades during three production seasons (2009, 2010 and 2011) and a sub-sample of each batch was steam-pasteurized. The phenolic content of each rooibos infusion was characterized by high performance liquid chromatography (HPLC)-diode array detection (DAD). In contrast to previous studies, the complete HPLC-DAD signals were used in the chemometric analysis in order to take into account the entire phenolic profile. All factors had a significant effect on the phenolic content of a 'cup-of-tea' strength rooibos infusion. In particular, infusions prepared from the grade A (highest quality) samples contained a higher content of almost all phenolic compounds than the lower quality plant material. The variations of the content of isoorientin and orientin in the different quality grade infusions over production seasons are larger than the variations in the content of aspalathin and quercetin-3-O-robinobioside. Ferulic acid can be used as an indicator of the quality of rooibos tea as its content generally decreases with increasing tea quality. Steam pasteurization decreased the content of the majority of phenolic compounds in a 'cup-of-tea' strength rooibos infusion. © 2013 Elsevier B.V. All rights reserved.
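
    ANOVA-simultaneous component analysis (ASCA) partitions the centered data matrix into effect matrices of factor-level means and then applies PCA to each effect. A minimal single-factor sketch follows; the actual study used a four-way design with nesting, and the matrix below is a random placeholder for the HPLC-DAD signals.

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.normal(size=(24, 300))        # 24 infusions x 300 chromatogram points
        grade = np.repeat([0, 1, 2, 3], 6)    # hypothetical quality-grade labels

        Xc = X - X.mean(axis=0)               # center over all samples
        # Effect matrix for 'grade': each row replaced by its group-mean profile.
        effect = np.vstack([Xc[grade == g].mean(axis=0) for g in grade])
        residual = Xc - effect

        # PCA (via SVD) of the effect matrix gives the grade-related patterns.
        U, S, Vt = np.linalg.svd(effect, full_matrices=False)
        explained = S**2 / (S**2).sum()
        print("variance explained by first two effect PCs:", explained[:2])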

  16. Cultural variances in composition of biological and supernatural concepts of death: a content analysis of children's literature.

    Science.gov (United States)

    Lee, Ji Seong; Kim, Eun Young; Choi, Younyoung; Koo, Ja Hyouk

    2014-01-01

    Children's reasoning about the afterlife emerges naturally as a developmental regularity. Although a biological understanding of death increases in accordance with cognitive development, biological and supernatural explanations of death may coexist in a complementary manner, being deeply imbedded in cultural contexts. This study conducted a content analysis of 40 children's death-themed picture books in Western Europe and East Asia. It can be inferred that causality and non-functionality are highly integrated with the naturalistic and supernatural understanding of death in Western Europe, whereas the literature in East Asia seems to rely on naturalistic aspects of death and focuses on causal explanations.

  17. A Statewide Examination of Mental Health Courts in Illinois: Program Characteristics and Operations

    Directory of Open Access Journals (Sweden)

    Arthur J. Lurigio

    2015-07-01

    This study represents the only broad-based, statewide evaluation of mental health courts (MHCs) conducted to date. Data were collected from 2010 to 2013 at each of the nine active MHC programs operating in Illinois at the start of the study. The purpose of the study was to compare and contrast the adjudicatory and supervisory models of each established Illinois MHC program by utilizing a variety of research methodologies. A four-year recidivism analysis of case-level data from three Illinois MHCs was also conducted. Illinois MHCs were largely characterized by the '10 essential elements of an MHC', such as voluntary participation, informed choice and hybrid team approaches to case-manage clients. Results of the recidivism analysis suggest that MHCs compare favorably to other types of probation. Overall, findings revealed that Illinois MHCs are delivering services effectively and efficiently in a well-coordinated, client-centered team approach. Differences found among the MHCs are not evidence of significant variance from the model, and instead represent responsiveness to the unique culture of the court, the niche-filling character of the program, the expectations of the program stakeholders and the nature and extent of the local service environment.

  18. Semi-empirical prediction of moisture build-up in an electronic enclosure using analysis of variance (ANOVA)

    DEFF Research Database (Denmark)

    Shojaee Nasirabadi, Parizad; Conseil, Helene; Mohanty, Sankhya

    2016-01-01

    Electronic systems are exposed to harsh environmental conditions such as high humidity in many applications. Moisture transfer into electronic enclosures and condensation can cause several problems as material degradation and corrosion. Therefore, it is important to control the moisture content...... and the relative humidity inside electronic enclosures. In this work, moisture transfer into a typical polycarbonate electronic enclosure with a cylindrical shape opening is studied. The effects of four influential parameters namely, initial relative humidity inside the enclosure, radius and length of the opening...... and temperature are studied. A set of experiments are done based on a fractional factorial design in order to estimate the time constant for moisture transfer into the enclosure by fitting the experimental data to an analytical quasi-steady-state model. According to the statistical analysis, temperature...
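
    A sketch of the time-constant estimation step, assuming (as is common for such quasi-steady-state models) first-order ingress toward the exterior humidity, RH(t) = RH_ext + (RH_0 - RH_ext)·exp(-t/τ); the measurement values are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def quasi_steady_state(t, tau, rh0, rh_ext):
            """First-order moisture ingress toward the exterior humidity."""
            return rh_ext + (rh0 - rh_ext) * np.exp(-t / tau)

        t = np.linspace(0, 48, 25)                    # hours (hypothetical)
        rh = quasi_steady_state(t, tau=12.0, rh0=20.0, rh_ext=70.0)
        rh += np.random.default_rng(3).normal(0, 0.8, t.size)  # measurement noise

        (tau, rh0, rh_ext), _ = curve_fit(quasi_steady_state, t, rh,
                                          p0=(10.0, 25.0, 65.0))
        print(f"estimated time constant: {tau:.1f} h")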

  19. Design of a statewide radiation survey

    International Nuclear Information System (INIS)

    Nagda, N.L.; Koontz, M.D.; Rector, H.E.; Nifong, G.D.

    1989-01-01

    The Florida Institute of Phosphate Research (FIPR) recently sponsored a statewide survey to identify all significant land areas in Florida where the state's environmental radiation rule should be applied. Under this rule, newly constructed buildings must be tested for radiation levels unless approved construction techniques are used. Two parallel surveys - a land-based survey and a population-based survey - were designed and conducted to address the objective. Each survey included measurements in more than 3000 residences throughout the state. Other information sources that existed at the outset of the study, such as geologic profiles mapped by previous investigators and terrestrial uranium levels characterized through aerial gamma radiation surveys, were also examined. Initial data analysis efforts focused on determining the extent of evidence of radon potential for each of the 67 counties in the state. Within the 18 counties that were determined to have definite evidence of elevated radon potential, more detailed spatial analyses were conducted to identify the areas to which the rule should apply. A total of 74 quadrangles delineated by the U.S. Geological Survey, representing about 7% of those constituting the state, were identified as having elevated radon potential and being subject to the rule.

  20. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
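
    For reference, the (non-overlapping) Allan variance of fractional-frequency data is AVAR(τ) = ½⟨(ȳ_{k+1} − ȳ_k)²⟩, where the ȳ_k are averages over adjacent intervals of length τ. A minimal sketch on white noise, for which AVAR falls as 1/τ:

        import numpy as np

        def allan_variance(y, m):
            """Non-overlapping Allan variance of frequency data y at
            averaging factor m (tau = m * sampling interval)."""
            n = len(y) // m
            ybar = y[:n * m].reshape(n, m).mean(axis=1)  # adjacent m-sample averages
            return 0.5 * np.mean(np.diff(ybar) ** 2)

        y = np.random.default_rng(4).normal(size=100_000)  # white frequency noise
        for m in (1, 10, 100, 1000):
            print(m, allan_variance(y, m))                 # scales roughly as 1/m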

  1. Optimal Sharia Portfolio Analysis Using the Mean-Variance Efficient Portfolio (MVEP) Model with a Data Envelopment Analysis (DEA) Approach [Analisis Portofolio Syariah Optimal Menggunakan Model Mean Variance Efficient Portofolio (MVEP) dengan Pendekatan Data Envelopment Analysis (DEA)]

    Directory of Open Access Journals (Sweden)

    Laeli Nurani

    2016-04-01

    This study discusses optimal sharia-compliant portfolio analysis using the Mean-Variance Efficient Portfolio (MVEP) model, in which stock selection is carried out with Data Envelopment Analysis (DEA) using input constraints (standard deviation, debt-earning ratio, book value per share, price-to-book value ratio) and output constraints (return, earnings per share, return on equity, return on assets, net profit margin, price-earnings ratio). The data used in this final project are the stocks listed in the Jakarta Islamic Index (JII) for the period 27 June 2014 - 18 February 2016. The efficiency tests with DEA-CCR and DEA-BCC selected 14 stocks as candidates for portfolio construction: ADRO, ASRI, BSDE, INDF, INTP, ITMG, KLBF, LPKR, LSIP, PGAS, SMGR, SMRA, TLKM and UNVR. From these 14 stocks, 4 optimal stocks were obtained, with the fractions of funds to be invested in each stock being TLKM (52%), UNVR (7%), LPKR (17%) and INDF (24%), giving an expected return of 0.000646 (0.06%) and a risk of 0.01389 (1.4%).
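
    The minimum-variance step of an MVEP-type construction solves min wᵀΣw subject to wᵀ1 = 1, with closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A hedged sketch with placeholder return series for the four selected tickers (not the study's data):

        import numpy as np

        rng = np.random.default_rng(5)
        tickers = ["TLKM", "UNVR", "LPKR", "INDF"]
        returns = rng.normal(0.0005, 0.015, size=(400, 4))  # placeholder daily returns

        cov = np.cov(returns, rowvar=False)
        ones = np.ones(len(tickers))
        w = np.linalg.solve(cov, ones)
        w /= ones @ w                     # w = C^-1 1 / (1' C^-1 1)

        for t, wi in zip(tickers, w):
            print(f"{t}: {wi:.1%}")
        print("expected return:", returns.mean(axis=0) @ w)
        print("risk (std):", np.sqrt(w @ cov @ w))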

  2. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are a key component of GCMs, as they provide boundary conditions to atmospheric and oceanic models. However, at the global scale, errors arise mainly from simplified physics, atmospheric forcing and input parameters. In particular, the parameters used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of a global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet before assimilating such data, the temporal sensitivity of the RRM to its time-constant parameters needs to be analyzed. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water level and discharge are most sensitive over a hydrological year. The results show that local parameters directly impact water levels, while

  3. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

    Energy Technology Data Exchange (ETDEWEB)

    Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); School of Advanced International Studies on Nuclear, Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: fisio2@fisiol.uniba.it; Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)

    2009-08-15

    It is known that R-R time series calculated from a recorded ECG are strongly correlated to sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis still dominates the analysis of such data, ignoring that the FFT is valid only under crucial restrictions, such as linearity and stationarity, that are largely violated in R-R time series. To move beyond this approach, we introduce a new method, called CZF, based on variogram analysis. It stems from a profound link with Recurrence Quantification Analysis, a basic tool for the investigation of nonlinear and nonstationary time series. A relevant feature of the method is therefore that it may also be applied to nonlinear and nonstationary time series. In addition, the method makes it possible to analyze the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to direct experimental cases: normal subjects, patients with hypertension before and after therapy, and children under different experimental conditions.
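
    The variogram at the heart of such an analysis is, for a series x of length N, V(h) = (1/2(N−h)) Σ (x_{i+h} − x_i)², and the log-log slope of V(h) against lag h yields the scaling exponent from which a fractal dimension can be derived. A minimal sketch on a synthetic R-R-like series (not the CZF method itself):

        import numpy as np

        def variogram(x, max_lag):
            """Empirical variogram V(h) = 0.5 * mean((x[i+h] - x[i])**2)."""
            return np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2)
                             for h in range(1, max_lag + 1)])

        rng = np.random.default_rng(6)
        rr = 0.8 + np.cumsum(rng.normal(0, 0.005, 5000))  # random-walk-like R-R series
        V = variogram(rr, max_lag=100)

        # Scaling exponent: slope of log V(h) vs log h (about 1 for a random walk).
        slope, _ = np.polyfit(np.log(np.arange(1, 101)), np.log(V), 1)
        print("variogram scaling exponent:", round(slope, 2))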

  4. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

    International Nuclear Information System (INIS)

    Conte, Elio; Federici, Antonio; Zbilut, Joseph P.

    2009-01-01

    It is known that R-R time series calculated from a recorded ECG are strongly correlated to sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis still dominates the analysis of such data, ignoring that the FFT is valid only under crucial restrictions, such as linearity and stationarity, that are largely violated in R-R time series. To move beyond this approach, we introduce a new method, called CZF, based on variogram analysis. It stems from a profound link with Recurrence Quantification Analysis, a basic tool for the investigation of nonlinear and nonstationary time series. A relevant feature of the method is therefore that it may also be applied to nonlinear and nonstationary time series. In addition, the method makes it possible to analyze the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to direct experimental cases: normal subjects, patients with hypertension before and after therapy, and children under different experimental conditions.

  5. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  6. Tobacco Smoke Pollution in Hospitality Venues Before and After Passage of Statewide Smoke-Free Legislation.

    Science.gov (United States)

    Buettner-Schmidt, Kelly; Boursaw, Blake; Lobo, Marie L; Travers, Mark J

    2017-03-01

    In 2012, North Dakota enacted a comprehensive statewide law prohibiting smoking in enclosed public places. Disparities in tobacco control exist in rural areas. This study's objective was to determine the extent to which the passage of a comprehensive, statewide, smoke-free law in a predominantly rural state influenced tobacco smoke pollution in rural and nonrural venues. A longitudinal cohort design study comparing the levels of tobacco smoke pollution before and after passage of the statewide smoke-free law was conducted in 64 restaurants and bars statewide in North Dakota. Particulate matter with a median aerodynamic diameter of <2.5 μm (a valid atmospheric marker of tobacco smoke pollution) was assessed. A significant 83% reduction in tobacco smoke pollution levels occurred after passage of the law. Significant reductions in tobacco smoke pollution levels occurred in each of the rural categories; however, no difference by rurality was noted in the analysis after passage of the law, in contrast to the study before passage. To our knowledge, this was the largest, single, rural postlaw study globally. A comprehensive statewide smoke-free law implemented in North Dakota dramatically decreased the level of tobacco smoke pollution in bars and restaurants. © 2016 The Authors. Public Health Nursing Published by Wiley Periodicals, Inc.

  7. Louisiana motorcycle fatalities in the wake of governmentally implemented change: a retrospective analysis of the motorcycle morbidity and mortality before, during, and after the repeal of a statewide helmet mandate.

    Science.gov (United States)

    Strom, Shane F; Ambekar, Sudheer; Madhugiri, Venkatesh S; Nanda, Anil

    2013-06-01

    On August 15, 2004, Louisiana's universal motorcycle helmet mandate was reinstated. Previous studies have shown that mortality and morbidity of motorcycle riders who crashed had increased during the 5 years the mandate was repealed. The objective of this study was to discern whether the reinstatement of the universal helmet mandate has resulted in a subsequent decrease in motorcycle-related mortality and morbidity in the state of Louisiana. A retrospective analysis was performed observing the regularity of helmet use and the associated morbidity and mortality of motorcycle traffic accidents from the time before, during, and after the universal motorcycle helmet mandate was repealed in the state of Louisiana. Fatality statistics were obtained through the National Highway Safety Traffic Association. Injury, helmet use, and collision data were obtained from the Louisiana Highway Safety Commission. Motorcycle registration data were obtained from the Federal Highway Administration. Motorcycle crash-related fatalities increased significantly when the statewide helmet mandate was repealed, and interestingly, after reinstatement, these fatality rates never returned to their previous lows. Motorcycle fatalities have increased out of proportion to the increase in motorbike registrations, even when yearly fatalities are normalized to fatalities per 10,000 registered bikes. An all-time high in fatalities was seen in 2006, a year subsequent to the mandate's reinstatement. Fatalities per collision were elevated significantly after the mandate's repeal but did not return to prerepeal lows after the mandate's reinstatement. Although helmet use after reinstatement has reached all-time highs, fatality rates have remained elevated since the original mandate repeal in 1999. Other achievable changes in state policy and law enforcement should be explored to quell this heightened risk to motorcycle enthusiasts in Louisiana, and states considering changing their own motorcycle helmet

  8. Body Composition Explains Greater Variance in Weight-for-Length Z-scores than Mid-Upper Arm Circumference during Infancy - A Secondary Data Analysis

    International Nuclear Information System (INIS)

    Grijalva-Eternod, Carlos; Andersen, Gregers Stig; Girma, Tsinuel; Admassu, Bitiya; Kæstel, Pernille; Michaelsen, Kim F; Friis, Henrik; Wells, Jonathan CK

    2014-01-01

    Full text: Background: Traditionally, weight-for-length/height z-score (WLZ) was used to assess wasting (a mortality risk factor) in children 0-59 months. A recent consultation reached a majority position that mid-upper arm circumference (MUAC) is a better mortality predictor than WLZ in children 6-59 months. In addition, MUAC collected at the ages of 6-14 weeks has been shown to identify infants more likely to die before reaching one year. To understand which body compartment is most affected by undernutrition, associations between anthropometric indicators and body composition data have been studied in children aged 6-59 months. To our knowledge, no such study has been done in children aged 0-6 months. We aimed to study these associations. Methods: Weight, length, MUAC, and lean and fat mass (LM and FM, respectively) obtained by air-displacement plethysmography of infants aged 0-6 months were obtained from an Ethiopian birth cohort study. The data, originally used to construct body composition reference data, measured infants at birth, 1.5, 2.5, 3.5, 4.5, and 6 months of age. A complete set of measurements available for 2506 out of a total of 2777 child measurements (563/598, 403/436, 414/444, 413/446, 368/415, and 345/441 at each age, respectively) was used for this analysis. Weight and length data were transformed to sex- and age-specific weight-for-length z-score (WLZ) values using the 2006 WHO growth standards. To remove the confounding positive association between LM or FM and length, we calculated sex- and age-specific standardised residual values obtained from regressing LM or FM on length, separately by sex and age of measurement. The associations between MUAC, WLZ, length, and body composition residuals were assessed using correlation analysis. We used regression analysis to assess the independent contribution of body composition residuals to MUAC and WLZ. All analyses were done separately by age. Results: MUAC was strongly and positively correlated

  9. Advanced methods of variance analysis in nuclear prospective scenarios; Metodos avanzados de analisis de varianza en escenarios de prospectiva nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-07-01

    Traditional techniques for the propagation of variance are not very reliable when relative uncertainties reach 100%, so less conventional methods are used instead, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
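
    A hedged sketch of the Monte Carlo alternative for inputs with uncertainties of order 100%, where first-order (Taylor) propagation breaks down; the model and distributions are placeholders:

        import numpy as np

        rng = np.random.default_rng(7)
        N = 100_000

        # Inputs with large relative uncertainty: sample rather than linearize.
        a = rng.lognormal(mean=0.0, sigma=0.8, size=N)   # ~100% relative spread
        b = rng.beta(2, 5, size=N)                       # bounded, skewed input

        y = a * b / (1.0 + b)                            # illustrative model
        print("mean  :", y.mean())
        print("std   :", y.std(ddof=1))
        print("95% CI:", np.percentile(y, [2.5, 97.5]))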

  10. Management accounting approach to analyse energy related CO2 emission: A variance analysis study of top 10 emitters of the world

    International Nuclear Information System (INIS)

    Pani, Ratnakar; Mukhopadhyay, Ujjaini

    2013-01-01

    The paper undertakes a decomposition study of carbon dioxide emission of the top ten emitting countries over the period 1980–2007 using variance analysis method, with the objectives of examining the relative importance of the major determining factors, the role of energy structure and impact of liberalisation on emission and exploring the possibilities of arresting emission with simultaneous rise in population and income. The major findings indicate that although rising income and population are the main driving forces, they are neither necessary nor sufficient for increasing emission, rather energy structure and emission intensities are the crucial determinants, pointing towards the fact that a country with higher income and population with proper energy policy may be a low emitter and vice-versa. Since modern energy-intensive production limits the scope of reduction in total energy use, it is necessary to decouple the quantum of energy use from emission through technological upgradation. The results indicate that liberalisation resulted in higher emission. The paper attempts to illustrate the required adjustments in energy structure and suggests necessary policy prescriptions.

  11. SGS Analysis of the Evolution Equations of the Mixture Fraction and the Progress Variable Variances in the Presence of Spray Combustion

    Directory of Open Access Journals (Sweden)

    H. Meftah

    2010-03-01

    In this paper, direct numerical simulation databases have been generated to analyze the impact of the propagation of a spray flame on several subgrid-scale (SGS) models dedicated to the closure of the transport equations of the subgrid fluctuations of the mixture fraction Z and the progress variable c. Computations were carried out starting from a previous inert database [22] in which a cold flame was ignited in the center of the mixture when the droplet segregation and evaporation rate were at their highest levels. First, a RANS analysis showed an abrupt increase of the mixture fraction fluctuations due to fuel consumption by the flame. Indeed, the local vapour mass fraction then reaches a minimum value, far from the saturation level. This leads to a strong increase of the evaporation rate, accompanied by a decrease of the oxidiser level. In the second part of this paper, a detailed evaluation of the subgrid models used to close the variance and dissipation rates of the mixture fraction and the progress variable is carried out. Models that had been selected for their efficiency in inert flows showed very good behaviour in reactive flows.

  12. Parametric study and global sensitivity analysis for co-pyrolysis of rape straw and waste tire via variance-based decomposition.

    Science.gov (United States)

    Xu, Li; Jiang, Yong; Qiu, Rong

    2018-01-01

    In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R²-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence interval; the F-test, lack-of-fit test and residuals' normal probability plots implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were obtained using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
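
    A compact sketch of variance-based (Sobol') indices using the standard pick-freeze estimators (Saltelli's for first order, Jansen's for total order); the toy model below merely stands in for the fitted response surface and is not from the paper:

        import numpy as np

        def model(x):
            # Toy stand-in for a fitted response surface in three factors.
            return x[:, 0] + 2.0 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

        rng = np.random.default_rng(8)
        N, d = 50_000, 3
        A, B = rng.random((N, d)), rng.random((N, d))
        fA, fB = model(A), model(B)
        varY = np.var(np.concatenate([fA, fB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]           # freeze all but factor i from A
            fABi = model(ABi)
            S1 = np.mean(fB * (fABi - fA)) / varY          # first-order index
            ST = 0.5 * np.mean((fA - fABi) ** 2) / varY    # total-order (Jansen)
            print(f"x{i + 1}: S1 = {S1:.2f}, ST = {ST:.2f}")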

  13. Application of an iterative methodology for cross-section and variance/covariance data adjustment to the analysis of fast spectrum systems accounting for non-linearity

    International Nuclear Information System (INIS)

    Pelloni, Sandro

    2014-01-01

    several times until convergence is reached for the analytical values and their uncertainties. An important result of the study is that the asymptotic analytical values of the integral parameters are closer to the experimental values as compared to the standard first adjustment results. Moreover, the asymptotic analytical values seem rather independent of the specific a priori variance/covariance data used in the analysis, namely COMMARA-2.0 or BOLNA, despite different a priori analytical values respectively obtained with JEFF-3.1 or ENDF/B-VI.8 data. The asymptotic uncertainties obtained on the basis of the two libraries are also similar

  14. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  15. Analysis of latent variance reduction methods in phase space Monte Carlo calculations for 6, 10 and 18 MV photons by using MCNP code

    International Nuclear Information System (INIS)

    Ezzati, A.O.; Sohrabpour, M.

    2013-01-01

    In this study, the azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods are implemented in the MCNPX2.4 source code. First, the efficiency of these methods was compared for two tallying methods. The APRS is more efficient than the APR method in track-length estimator tallies; however, in the energy deposition tally both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. The APRS relative efficiency contours were obtained; they reveal that with increasing photon energy, the contour depth and the surrounding areas increase further. The relative efficiency contours indicated that the variance reduction factor is position and energy dependent. The out-of-field voxel relative efficiency contours showed that latent variance reduction methods increased the Monte Carlo (MC) simulation efficiency in the out-of-field voxels. The APR and APRS average variance reduction factors differed by less than 0.6% for a splitting number of 1000. -- Highlights: ► The efficiency of the APR and APRS methods was compared for two tallying methods. ► The APRS is more efficient than the APR method in track-length estimator tallies. ► In the energy deposition tally, both methods have nearly the same efficiency. ► The variance reduction factors of these methods are position and energy dependent.

  16. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
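
    As a worked illustration of the kind of error involved (not one of the paper's fault trees): for an AND gate whose top-event probability is the product of two independent inputs, exact propagation gives Var(XY) = VxVy + Vx·my² + Vy·mx², while the first-order approximation drops the VxVy term, which matters once relative uncertainties are large. The input moments are invented.

        # Variance of an AND-gate top event (product of two independent inputs).
        mx, vx = 1e-3, (5e-4) ** 2    # illustrative input mean and variance
        my, vy = 2e-3, (2e-3) ** 2    # an input with ~100% relative uncertainty

        exact = vx * vy + vx * my**2 + vy * mx**2
        approx = vx * my**2 + vy * mx**2          # first-order propagation
        print(f"exact {exact:.3e}  approx {approx:.3e}  "
              f"relative error {(exact - approx) / exact:.1%}")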

  17. Variations in Patterns of Utilization and Charges for the Care of Headache in North Carolina, 2000-2009: A Statewide Claims' Data Analysis.

    Science.gov (United States)

    Hurwitz, Eric L; Vassilaki, Maria; Li, Dongmei; Schneider, Michael J; Stevans, Joel M; Phillips, Reed B; Phelan, Shawn P; Lewis, Eugene A; Armstrong, Richard C

    2016-05-01

    The purpose of the study was to compare patterns of utilization and charges generated by medical doctors (MDs), doctors of chiropractic (DCs), and physical therapists (PTs) for the treatment of headache in North Carolina. Retrospective analysis of claims data from the North Carolina State Health Plan for Teachers and State Employees from 2000 to 2009. Data were extracted from Blue Cross Blue Shield of North Carolina for the North Carolina State Health Plan using International Classification of Diseases, Ninth Revision, diagnostic codes for headache. The claims were separated by individual provider type, combination of provider types, and referral patterns. The majority of patients and claims were in the MD-only or MD plus referral patterns. Chiropractic patterns represented less than 10% of patients. Care patterns with single-provider types and no referrals incurred the least charges on average for headache. When care did not include referral providers or services, MD with DC care was generally less expensive than MD care with PT. However, when combined with referral care, MD care with PT was generally less expensive. Compared with MD-only care, risk-adjusted charges (available 2006-2009) for patients in the middle risk quintile were significantly less for DC-only care. Utilization and expenditures for headache treatment increased from 2000 to 2009 across all provider groups. MD care represented the majority of total allowed charges in this study. MD care and DC care, alone or in combination, were overall the least expensive patterns of headache care. Risk-adjusted charges were significantly less for DC-only care. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.

  18. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  19. [Cointegration test and variance decomposition for the relationship between economy and environment based on material flow analysis in Tangshan City Hebei China].

    Science.gov (United States)

    2015-12-01

    The material flow account of Tangshan City was established using the material flow analysis (MFA) method to analyze the periodical characteristics of material input and output in the operation of the economy-environment system, and the impact of material input and output intensities on economic development. Using an econometric model, the long-term interaction mechanism and relationship among the indexes of gross domestic product (GDP), direct material input (DMI), and domestic processed output (DPO) were investigated after unit root hypothesis tests, Johansen cointegration tests, a vector error correction model, impulse response functions and variance decomposition. The results showed that during 1992-2011, DMI and DPO both increased, and the growth rate of DMI was higher than that of DPO. The input intensity of DMI increased, while the intensity of DPO fell in volatility. A long-term stable cointegration relationship existed between GDP, DMI and DPO. Their interaction relationship showed a trend from fluctuation to gradual steadiness. DMI and DPO had strong, positive impacts on economic development in the short term, but the economy-environment system gradually weakened these effects by short-term dynamic adjustment of indicators inside and outside of the system. Ultimately, the system showed a long-term equilibrium relationship. The effect of economic scale on the economy gradually increased. After decomposing the contribution of each index to GDP, it was found that DMI's contribution grew, GDP's own contribution declined, and DPO's contribution changed little. On the whole, the economic development of Tangshan City has followed the traditional production path of a resource-based city, depending mostly on material input, which has caused high energy consumption and serious environmental pollution.

  20. GEOGRAPHIC DISTRIBUTION OF MOLECULAR VARIANCE WITHIN THE BLUE MARLIN (MAKAIRA NIGRICANS): A HIERARCHICAL ANALYSIS OF ALLOZYME, SINGLE-COPY NUCLEAR DNA, AND MITOCHONDRIAL DNA MARKERS.

    Science.gov (United States)

    Buonaccorsi, Vincent P; Reece, Kimberly S; Morgan, Lee W; Graves, John E

    1999-04-01

    This study presents a comparative hierarchical analysis of variance applied to three classes of molecular markers within the blue marlin (Makaira nigricans). Results are reported from analyses of four polymorphic allozyme loci, four polymorphic anonymously chosen single-copy nuclear DNA (scnDNA) loci, and previously reported restriction fragment length polymorphisms (RFLPs) of mitochondrial DNA (mtDNA). Samples were collected within and among the Atlantic and Pacific Oceans over a period of several years. Although moderate levels of genetic variation were detected at both polymorphic allozyme (H = 0.30) and scnDNA loci (H = 0.37), mtDNA markers were much more diverse (h = 0.85). Allele frequencies were significantly different between Atlantic and Pacific Ocean samples at three of four allozyme loci and three of four scnDNA loci. Estimates of allozyme genetic differentiation (θO) ranged from 0.00 to 0.15, with a mean of 0.08. The θO values for scnDNA loci were similar to those of allozymes, ranging from 0.00 to 0.12 with a mean of 0.09. MtDNA RFLP divergence between oceans (θO = 0.39) was significantly greater than divergence detected at nuclear loci (95% nuclear confidence interval = 0.04-0.11). The fourfold smaller effective population size of mtDNA and male-mediated gene flow may account for the difference observed between nuclear and mitochondrial divergence estimates. © 1999 The Society for the Study of Evolution.

  1. Structure and stability of genetic variance-covariance matrices: A Bayesian sparse factor analysis of transcriptional variation in the three-spined stickleback.

    Science.gov (United States)

    Siren, J; Ovaskainen, O; Merilä, J

    2017-10-01

    The genetic variance-covariance matrix (G) is a quantity of central importance in evolutionary biology due to its influence on the rate and direction of multivariate evolution. However, the predictive power of empirically estimated G-matrices is limited for two reasons. First, phenotypes are high-dimensional, whereas traditional statistical methods are tuned to estimate and analyse low-dimensional matrices. Second, the stability of G to environmental effects and over time remains poorly understood. Using Bayesian sparse factor analysis (BSFG) designed to estimate high-dimensional G-matrices, we analysed levels of variation and covariation in 10,527 expressed genes in a large (n = 563) half-sib breeding design of three-spined sticklebacks subject to two temperature treatments. We found significant differences in the structure of G between the treatments: heritabilities and evolvabilities were higher in the warm than in the low-temperature treatment, suggesting more and faster opportunity to evolve in warm (stressful) conditions. Furthermore, comparison of G and its phenotypic equivalent P revealed that the latter is a poor substitute for the former. Most strikingly, the results suggest that the expected impact of G on evolvability, as well as the similarity among G-matrices, may depend strongly on the number of traits included in the analyses. In our results, the inclusion of only a few traits in the analyses leads to underestimation of the differences between the G-matrices and their predicted impacts on evolution. While the results highlight the challenges involved in estimating G, they also illustrate that by enabling the estimation of large G-matrices, the BSFG method can improve predicted evolutionary responses to selection. © 2017 John Wiley & Sons Ltd.

  2. Clustering with position-specific constraints on variance: Applying redescending M-estimators to label-free LC-MS data analysis

    Directory of Open Access Journals (Sweden)

    Mani D R

    2011-08-01

    Background: Clustering is a widely applicable pattern recognition method for discovering groups of similar observations in data. While there is a large variety of clustering algorithms, very few of these can enforce constraints on the variation of attributes for data points included in a given cluster. In particular, a clustering algorithm that can limit variation within a cluster according to that cluster's position (centroid location) can produce effective and optimal results in many important applications, ranging from clustering of silicon pixels or calorimeter cells in high-energy physics to label-free liquid chromatography-mass spectrometry (LC-MS) data analysis in proteomics and metabolomics. Results: We present MEDEA (M-Estimator with DEterministic Annealing), a new unsupervised, M-estimator-based algorithm designed to enforce position-specific constraints on variance during the clustering process. The utility of MEDEA is demonstrated by applying it to the problem of "peak matching"--identifying the common LC-MS peaks across multiple samples--in proteomic biomarker discovery. Using real-life datasets, we show that MEDEA not only outperforms current state-of-the-art model-based clustering methods, but also results in an implementation that is significantly more efficient, and hence applicable to much larger LC-MS data sets. Conclusions: MEDEA is an effective and efficient solution to the problem of peak matching in label-free LC-MS data. The program implementing the MEDEA algorithm, including datasets, clustering results, and supplementary information, is available from the author website at http://www.hephy.at/user/fru/medea/.
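
    Not the MEDEA implementation itself, but a minimal sketch of the redescending M-estimator idea it builds on: a Tukey-biweight centroid that gives zero weight to observations far from the current estimate, computed by iterative reweighting. Constants and data are illustrative.

        import numpy as np

        def biweight_centroid(x, c=4.685, iters=20):
            """Robust 1-D location via Tukey's redescending biweight."""
            mu = np.median(x)
            for _ in range(iters):
                s = np.median(np.abs(x - mu)) / 0.6745 or 1.0      # MAD scale
                u = (x - mu) / (c * s)
                w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)  # redescends to 0
                mu = np.sum(w * x) / np.sum(w)
            return mu

        rng = np.random.default_rng(9)
        peaks = np.concatenate([rng.normal(100.0, 0.05, 50),   # matched peak positions
                                rng.normal(103.0, 0.05, 5)])   # outlying peaks
        print(biweight_centroid(peaks))   # stays near 100, unlike the plain mean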

  3. California statewide model for high-speed rail

    OpenAIRE

    Outwater, Maren; Tierney, Kevin; Bradley, Mark; Sall, Elizabeth; Kuppam, Arun; Modugala, Vamsee

    2010-01-01

    The California High Speed Rail Authority (CHSRA) and the Metropolitan Transportation Commission (MTC) have developed a new statewide model to support evaluation of high-speed rail alternatives in the State of California. This statewide model will also support future planning activities of the California Department of Transportation (Caltrans). The approach to this statewide model explicitly recognizes the unique characteristics of intraregional travel demand and interregional travel demand. A...

  4. Statewide analysis of bicycle crashes: [project summary].

    Science.gov (United States)

    2017-06-01

    An extensive literature review was conducted to locate existing studies in four areas: (1) risk factors that affect the frequency and severity of bicycle crashes; (2) bicycle crash causes, patterns, and contributing factors; (3) network screening met...

  5. Noise variance analysis using a flat panel x-ray detector: A method for additive noise assessment with application to breast CT applications

    Energy Technology Data Exchange (ETDEWEB)

    Yang Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M. [Department of Radiology, University of California, Davis Medical Center, 4860 Y Street, Suite 3100 Ellison Building, Sacramento, California 95817 (United States); Department of Radiology, University of California, Davis Medical Center, 4860 Y Street, Suite 3100 Ellison Building, Sacramento, California 95817 (United States) and Department of Biomedical Engineering, University of California, Davis, Davis, California, 95616 (United States)

    2010-07-15

    Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system.
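
    A hedged sketch of the linear-model idea: with signal-independent additive noise, pixel variance versus detector dose D behaves like σ²(D) = σ²_add + k·D, so a straight-line fit separates the additive intercept from the quantum (dose-proportional) slope. All numbers are illustrative, not the paper's measurements.

        import numpy as np

        # Illustrative detector doses (a.u.) and measured pixel variances.
        dose = np.array([2., 4., 8., 16., 32.])
        var_meas = 3.0 + 1.5 * dose                      # additive + quantum (synthetic)
        var_meas += np.random.default_rng(10).normal(0, 0.2, dose.size)

        k, var_add = np.polyfit(dose, var_meas, 1)       # var(D) = var_add + k*D
        frac_add = var_add / (var_add + k * dose)        # additive share at each dose
        for D, f in zip(dose, frac_add):
            print(f"dose {D:5.1f}: additive noise fraction = {f:.1%}")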

  6. Effects of Road Salt on Connecticut's Groundwater: A Statewide Centennial Perspective.

    Science.gov (United States)

    Cassanelli, James P; Robbins, Gary A

    2013-01-01

    This study examined the extent to which development and road salting have affected Connecticut's groundwater. We gathered water quality data from different time periods between 1894 and the present and analyzed the data using maps generated with ESRI ArcGIS. Historical reports illustrate a statewide baseline trend of decreasing chloride concentration northward across the State (average, 2 ppm). Since then, statewide chloride concentrations in groundwater have increased by more than an order of magnitude on average. Analysis indicates spatial correlation between chloride impacts and major roadways. Furthermore, increases in statewide chloride concentration parallel increases in road salt application. Projected trends suggest that statewide baseline concentrations will increase by an amount equal to five times background levels between the present and the year 2030. The analytical process outlined herein can be readily applied to any region to investigate salt impacts on large spatial and temporal scales. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  7. Analysis of variance of primary data on plant growth analysis [Análise de variância dos dados primários na análise de crescimento vegetal]

    Directory of Open Access Journals (Sweden)

    Adelson Paulo Araújo

    2003-01-01

    Plant growth analysis presents difficulties related to the statistical comparison of growth rates, and the analysis of variance of primary data could guide the interpretation of results. The objective of this work was to evaluate the analysis of variance of data from distinct harvests of an experiment, focusing especially on the homogeneity of variances and the choice of an adequate ANOVA model. Data from five experiments covering different crops and growth conditions were used. Of the total number of variables, 19% were originally homoscedastic, 60% became homoscedastic after logarithmic transformation, and 21% remained heteroscedastic after transformation. Data transformation did not affect the F test in one experiment, whereas in the other experiments transformation modified the F test, usually reducing the number of significant effects. Even when transformation did not alter the F test, mean comparisons led to divergent interpretations. The mixed ANOVA model, considering harvest as a random effect, reduced the number of significant effects of every factor whose F test was modified by this model. Examples illustrated that analysis of variance of primary variables provides a tool for identifying significant differences in growth rates. The analysis of variance imposes restrictions on the experimental design, thereby eliminating some advantages of the functional growth analysis.
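
    A minimal sketch of the homoscedasticity check behind such a workflow: testing variance homogeneity across harvests before and after logarithmic transformation (the growth data are invented).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        # Hypothetical shoot masses at three harvests; the spread grows with
        # the mean, a typical pattern in growth data.
        harvests = [rng.lognormal(mean=m, sigma=0.3, size=8) for m in (0.5, 1.5, 2.5)]

        print("raw:", stats.levene(*harvests))                        # heterogeneous
        print("log:", stats.levene(*[np.log(h) for h in harvests]))   # homogeneous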

  8. Water Quality attainment Information from Clean Water Act Statewide Statistical Surveys

    Data.gov (United States)

    U.S. Environmental Protection Agency — Designated uses assessed by statewide statistical surveys and their state and national attainment categories. Statewide statistical surveys are water quality...

  9. Water Quality Stressor Information from Clean Water Act Statewide Statistical Surveys

    Data.gov (United States)

    U.S. Environmental Protection Agency — Stressors assessed by statewide statistical surveys and their state and national attainment categories. Statewide statistical surveys are water quality assessments...

  10. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.

  11. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
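
    The 'exponential form' is commonly parameterized as a power law, Var = φ·meanᶿ, which a log-log regression of within-set variances on within-set means estimates directly. A hedged sketch close in spirit (not detail) to the program described, on simulated replicate sets:

        import numpy as np

        rng = np.random.default_rng(12)
        # Hypothetical assay responses: many small replicate sets whose variance
        # follows Var = phi * mean**theta (here phi = 0.01, theta = 1.8).
        means = rng.uniform(50, 5000, size=200)
        sets = [rng.normal(m, np.sqrt(0.01 * m**1.8), size=3) for m in means]

        m_hat = np.array([s.mean() for s in sets])
        v_hat = np.array([s.var(ddof=1) for s in sets])

        theta, log_phi = np.polyfit(np.log(m_hat), np.log(v_hat), 1)
        print(f"theta = {theta:.2f}, phi = {np.exp(log_phi):.4f}")
        # Curve-fitting weights would then be 1 / (phi * mean**theta).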

  12. 49 CFR 613.200 - Statewide transportation planning and programming.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false Statewide transportation planning and programming. 613.200 Section 613.200 Transportation Other Regulations Relating to Transportation (Continued... Transportation Planning and Programming § 613.200 Statewide transportation planning and programming. The...

  13. Evaluation of the mineralogical characterization of several smectite clay deposits of the state of Paraiba, Brazil using statistical analysis of variance; Avaliacao da caracterizacao mineralogica de diversos depositos de argilas esmectiticas do estado da Paraiba utilizando analise estatistica de variancia

    Energy Technology Data Exchange (ETDEWEB)

    Gama, A.J.A.; Menezes, R.R.; Neves, G.A.; Brito, A.L.F. de, E-mail: agama@reitoria.ufcg.edu.br [Universidade Federal de Campina Grande (UFCG), PB (Brazil)

    2015-07-01

    Currently, over 80% of the industrialized bentonite clay produced in Brazil in sodium form for use in various industrial applications comes from the deposits in Boa Vista - PB. Recently, new bentonite deposits were discovered in the municipalities of Cubati - PB, Pedra Lavrada - PB, Sossego - PB and, most recently, Olivedos - PB, requiring systematic studies to develop their full industrial potential. Therefore, this study aimed to evaluate the chemical characterization of several deposits of smectite clays from various regions of the state of Paraíba through statistical analysis of variance. The chemical composition was determined by X-ray fluorescence (EDX). Statistical analysis of variance and Tukey tests were then carried out using the MINITAB® 17.0 statistical software. The results showed that the chemical composition of the bentonite clay from the new deposits differed in the amounts of silica, aluminum, magnesium and calcium relative to the Boa Vista clays and to imported clays. (author)
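
    A minimal sketch of the ANOVA-plus-Tukey comparison described, using Python in place of MINITAB® and invented silica contents for three deposits:

        import numpy as np
        from scipy import stats
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        rng = np.random.default_rng(13)
        sio2 = {"Boa Vista": rng.normal(58, 2, 6),   # hypothetical SiO2 (wt%)
                "Cubati":    rng.normal(62, 2, 6),
                "Olivedos":  rng.normal(55, 2, 6)}

        print(stats.f_oneway(*sio2.values()))        # one-way ANOVA

        values = np.concatenate(list(sio2.values()))
        groups = np.repeat(list(sio2), [len(v) for v in sio2.values()])
        print(pairwise_tukeyhsd(values, groups, alpha=0.05))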

  14. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
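
    For reference, the conventional VIF for regressor j is 1/(1 − R²_j), where R²_j comes from regressing x_j on the remaining regressors; a short sketch with statsmodels on synthetic collinear data:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(14)
        x1 = rng.normal(size=100)
        x2 = x1 + rng.normal(scale=0.1, size=100)   # nearly collinear with x1
        x3 = rng.normal(size=100)

        X = sm.add_constant(np.column_stack([x1, x2, x3]))  # include the intercept
        for j, name in enumerate(["const", "x1", "x2", "x3"]):
            print(name, variance_inflation_factor(X, j))    # x1, x2 inflate sharply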

  15. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory, which accounts for arm and eye movements with noisy signal inputs, was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory, and analytical solutions are obtained. Furthermore, we propose a new version of the minimum-variance theory which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  16. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  17. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
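
    As a minimal sketch of the setting the record describes, the following applies the usual nonparametric Gini estimator to a Pareto sample with tail index α = 1.5, i.e. finite mean but infinite variance; the estimator and the closed-form Pareto Gini value are standard, and the sample size is arbitrary.

```python
# Nonparametric Gini estimator applied to a fat-tailed (Pareto) sample.
import numpy as np

def gini(x):
    """Sample Gini coefficient: G = sum_i (2i - n - 1) x_(i) / (n * sum_i x_i)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

rng = np.random.default_rng(1)
alpha = 1.5                                              # finite mean, infinite variance
sample = (1 / rng.uniform(size=10_000)) ** (1 / alpha)   # Pareto(alpha), x_m = 1
print(f"estimated Gini: {gini(sample):.3f}  (theory: {1 / (2 * alpha - 1):.3f})")
```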

  18. Variance in total levels of phospholipase C zeta (PLC-ζ) in human sperm may limit the applicability of quantitative immunofluorescent analysis as a diagnostic indicator of oocyte activation capability.

    Science.gov (United States)

    Kashir, Junaid; Jones, Celine; Mounce, Ginny; Ramadan, Walaa M; Lemmon, Bernadette; Heindryckx, Bjorn; de Sutter, Petra; Parrington, John; Turner, Karen; Child, Tim; McVeigh, Enda; Coward, Kevin

    2013-01-01

    To examine whether similar levels of phospholipase C zeta (PLC-ζ) protein are present in sperm from men whose ejaculates resulted in normal oocyte activation, and to examine whether a predominant pattern of PLC-ζ localization is linked to normal oocyte activation ability. Laboratory study. University laboratory. Control subjects (men with proven oocyte activation capacity; n = 16) and men whose sperm resulted in recurrent intracytoplasmic sperm injection failure (oocyte activation deficient [OAD]; n = 5). Quantitative immunofluorescent analysis of PLC-ζ protein in human sperm. Total levels of PLC-ζ fluorescence, proportions of sperm exhibiting PLC-ζ immunoreactivity, and proportions of PLC-ζ localization patterns in sperm from control and OAD men. A significantly higher proportion of sperm from control subjects exhibited PLC-ζ immunofluorescence compared with sperm from infertile men diagnosed with OAD (82.6% vs. 27.4%). Total levels of PLC-ζ in sperm from individual control and OAD patients exhibited significant variance, with sperm from 10 of the 16 control men (62.5%) exhibiting levels similar to OAD samples. Predominant PLC-ζ localization patterns varied between control and OAD samples with no predictable or consistent pattern. The results indicate that sperm from control men exhibited significant variance in total levels of PLC-ζ protein, as well as significant variance in the predominant localization pattern. Such variance may hinder the diagnostic application of quantitative PLC-ζ immunofluorescent analysis. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  19. Comparison of variance estimators for meta-analysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  20. Statistical test of reproducibility and operator variance in thin-section modal analysis of textures and phenocrysts in the Topopah Spring member, drill hole USW VH-2, Crater Flat, Nye County, Nevada

    International Nuclear Information System (INIS)

    Moore, L.M.; Byers, F.M. Jr.; Broxton, D.E.

    1989-06-01

    A thin-section operator-variance test was given to the two junior authors, petrographers, by the senior author, a statistician, using 16 thin sections cut from core plugs drilled by the US Geological Survey from drill hole USW VH-2 standard (HCQ) drill core. The thin sections are samples of Topopah Spring devitrified rhyolite tuff from four textural zones, in ascending order: (1) lower nonlithophysal, (2) lower lithophysal, (3) middle nonlithophysal, and (4) upper lithophysal. Drill hole USW VH-2 is near the center of Crater Flat, about 6 miles WSW of the Yucca Mountain Exploration Block. The original thin-section labels were opaqued out with removable enamel and renumbered with alpha-numeric labels. The slides were then given to the petrographer operators for quantitative thin-section modal (point-count) analysis of cryptocrystalline, spherulitic, granophyric, and void textures, as well as phenocryst minerals. Between-operator variance was tested by giving the two petrographers the same slide, and within-operator variance was tested by giving the same operator the same slide to count in a second test set, administered at least three months after the first set. Both operators were unaware that they were receiving the same slide to recount. 14 figs., 6 tabs

  1. A quantitative method to track protein translocation between intracellular compartments in real-time in live cells using weighted local variance image analysis.

    Directory of Open Access Journals (Sweden)

    Guillaume Calmettes

    The genetic expression of cloned fluorescent proteins coupled to time-lapse fluorescence microscopy has opened the door to the direct visualization of a wide range of molecular interactions in living cells. In particular, the dynamic translocation of proteins can now be explored in real time at the single-cell level. Here we propose a reliable, easy-to-implement, quantitative image processing method to assess protein translocation in living cells based on the computation of spatial variance maps of time-lapse images. The method is first illustrated and validated on simulated images of a fluorescently-labeled protein translocating from mitochondria to cytoplasm, and then applied to experimental data obtained with fluorescently-labeled hexokinase 2 in different cell types imaged by regular or confocal microscopy. The method was found to be robust with respect to cell morphology changes and mitochondrial dynamics (fusion, fission, movement) during the time-lapse imaging. Its ease of implementation should facilitate its application to a broad spectrum of time-lapse imaging studies.
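
    The core ingredient, a spatial variance map, can be sketched in a few lines. Below is a plain (unweighted) local variance computed with uniform filters on an invented test image; note that the published method uses a weighted variant applied to real time-lapse data.

```python
# Per-pixel local variance map, computed as E[x^2] - E[x]^2 over a small window.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(image, size=5):
    """Variance of the intensities in a size x size window around each pixel."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean * mean

# Example: a noisy image with a high-variance textured patch.
rng = np.random.default_rng(0)
img = rng.normal(0, 1, (128, 128))
img[40:80, 40:80] += rng.normal(0, 4, (40, 40))   # high-variance region
vmap = local_variance(img, size=7)
print(vmap[60, 60] > vmap[10, 10])   # True: the patch stands out in the map
```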

  2. Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues

    International Nuclear Information System (INIS)

    Yang, M; Zhu, X R; Mohan, R; Dong, L; Virshup, G; Clayton, J

    2010-01-01

    We discovered an empirical relationship between the logarithm of the mean excitation energy (ln I_m) and the effective atomic number (EAN) of human tissues, which allows for computing patient-specific proton stopping power ratios (SPRs) using dual-energy CT (DECT) imaging. The accuracy of the DECT method was evaluated for 'standard' human tissues as well as their variance. The DECT method was compared to the existing standard clinical practice: the stoichiometric calibration method introduced by Schneider et al at the Paul Scherrer Institute. In this simulation study, SPRs were derived from calculated CT numbers of known material compositions rather than from measurement. For standard human tissues, both methods achieved good accuracy, with the root-mean-square (RMS) error well below 1%. For human tissues with small perturbations from standard human tissue compositions, the DECT method was shown to be less sensitive than the stoichiometric calibration method. The RMS error remained below 1% for most cases using the DECT method, which implies that the DECT method might be more suitable for measuring patient-specific tissue compositions to improve the accuracy of treatment planning for charged particle therapy. In this study, the effects of CT imaging artifacts due to beam hardening, scatter, noise, patient movement, etc. were not analyzed. The true potential of the DECT method achieved in theoretical conditions may not be fully achievable in clinical settings. Further research and development may be needed to take advantage of the DECT method to characterize individual human tissues.

  3. Statewide Transportation Engineering Warehouse for Archived Regional Data (STEWARD).

    Science.gov (United States)

    2009-12-01

    This report documents Phase III of the development and operation of a prototype for the Statewide Transportation : Engineering Warehouse for Archived Regional Data (STEWARD). It reflects the progress on the development and : operation of STEWARD sinc...

  4. Data integration for statewide transportation planning : final report

    Science.gov (United States)

    2009-08-01

    The goal of this study was to investigate the data availability, accessibility, and interoperability issues arising from the statewide : transportation planning activities undertaken at WisDOT and to identify possible approaches for addressing these i...

  5. WisDOT statewide customer satisfaction survey : [project brief].

    Science.gov (United States)

    2013-03-01

    The Wisconsin Department of Transportation (WisDOT) is a major public agency with numerous customers utilizing a variety of services and programs to support the entire statewide multimodal transportation system. The department also houses the Divisio...

  6. Statewide and Metropolitan Transportation Planning Processes : a TPCB Peer Exchange

    Science.gov (United States)

    2016-04-20

    This report highlights key recommendations and noteworthy practices identified at the Statewide and Metropolitan Transportation Planning Processes Peer Exchange held on September 9-10, 2015, in Shepherdstown, West Virginia. This event was sponsored ...

  7. Nebraska Statewide Wind Integration Study: April 2008 - January 2010

    Energy Technology Data Exchange (ETDEWEB)

    EnerNex Corporation, Knoxville, Tennessee; Ventyx, Atlanta, Georgia; Nebraska Power Association, Lincoln, Nebraska

    2010-03-01

    Wind generation resources in Nebraska will play an increasingly important role in the environmental and energy security solutions for the state and the nation. In this context, the Nebraska Power Association conducted a state-wide wind integration study.

  8. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance within a detection window. The variance is computed at two delayed time instants, so a modified Early-Late loop is used for the frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since these may be chosen within a wide range without strongly influencing system performance. The functionality of the proposed algorithm has been verified on a development environment using universal software radio peripheral (USRP) hardware.
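
    A sketch of the windowed-variance ingredient on a toy signal follows; it omits the paper's modified Early-Late loop and the USRP specifics, and all frame parameters are invented.

```python
# Sliding-window variance over a toy received stream: the variance jumps as
# the window slides from a low-power guard region onto OFDM-like data samples.
import numpy as np

def window_variance(x, win):
    """Variance of |x| over a sliding window of length win."""
    mag = np.abs(x)
    return np.array([np.var(mag[k:k + win]) for k in range(len(mag) - win)])

rng = np.random.default_rng(0)
# Toy stream: 64 low-power samples, then 256 OFDM-like data samples.
stream = np.concatenate([
    0.1 * (rng.normal(size=64) + 1j * rng.normal(size=64)),
    rng.normal(size=256) + 1j * rng.normal(size=256),
])
v = window_variance(stream, win=32)
# A crude change detector flags the neighbourhood of the frame start (sample 64).
print("variance exceeds half-max near window index", int(np.argmax(v > 0.5 * v.max())))
```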

  9. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
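
    For readers new to variance-based sensitivities, the following generic Monte Carlo estimator of first-order Sobol indices (a Saltelli-style scheme) illustrates the kind of decomposition being computed; it is not the paper's Poisson-process reformulation, and the toy model is invented.

```python
# Generic first-order Sobol index estimation by Monte Carlo (Saltelli-style).
import numpy as np

def first_order_indices(f, d, n=100_000, seed=0):
    """Estimate S_i = V_i / V for a function f of d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    var_total = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # resample only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var_total
    return S

# Toy additive model: the output variance is dominated by x0.
f = lambda x: 4 * x[:, 0] + x[:, 1] + 0.1 * x[:, 2]
print(first_order_indices(f, d=3))                   # approx. [0.94, 0.06, 0.0006]
```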

  10. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  11. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  12. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maî tre, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  13. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
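
    As a minimal illustration of the RV estimator and the noise bias discussed above, the sketch below simulates one trading day of one-minute returns; all parameters are invented.

```python
# Realized variance (RV): the sum of squared intraday returns. With i.i.d.
# microstructure noise, observed RV is inflated by roughly 2*n*Var(noise).
import numpy as np

rng = np.random.default_rng(0)
n = 390                                  # e.g. one-minute returns in a trading day
iv = 1e-4                                # integrated variance of the efficient price (assumed)
efficient = rng.normal(0.0, np.sqrt(iv / n), size=n)
noise = rng.normal(0.0, 5e-4, size=n + 1)        # i.i.d. noise in observed log prices
observed = efficient + np.diff(noise)    # observed return = efficient return + noise increment

print("RV from efficient prices:", np.sum(efficient**2))   # close to 1e-4
print("RV from observed prices :", np.sum(observed**2))    # inflated by ~2e-4
```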

  14. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  15. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  16. A Statewide Partnership for Implementing Inquiry Science

    Science.gov (United States)

    Lytle, Charles

    The North Carolina Infrastructure for Science Education (NC-ISE) is a statewide partnership for implementing standards-based inquiry science using exemplary curriculum materials in the public schools of North Carolina. North Carolina is the 11th most populous state in the USA, with 8,000,000 residents, 117 school districts, and a geographic area of 48,718 square miles. NC-ISE partners include the state education agency, local school systems, three branches of the University of North Carolina, the state mathematics and science education network, businesses, and business groups. The partnership, based upon the Science for All Children model developed by the National Science Resources Center, was initiated in 1997 to improve the teaching and learning of science and mathematics. This research-based model has been successfully implemented in several American states during the past decade. Where effectively implemented, the model has led to significant improvements in student interest and student learning. It has also helped reduce the achievement gap between minority and non-minority students and among students from different economic levels. A key element of the program is an annual Leadership Institute that helps teams of administrators and teachers develop a five-year strategic plan for their local systems. Currently 33 of the 117 local school systems have joined the NC-ISE Program and are in various stages of implementation of inquiry science in grades K-8.

  17. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  18. Statewide Groundwater Recharge Modeling in New Mexico

    Science.gov (United States)

    Xu, F.; Cadol, D.; Newton, B. T.; Phillips, F. M.

    2017-12-01

    It is crucial to understand the rate and distribution of groundwater recharge in New Mexico because it not only largely defines a limit for water availability in this semi-arid state but is also the least understood aspect of the state's water budget. With the goal of estimating groundwater recharge statewide, we are developing the Evapotranspiration and Recharge Model (ETRM), which uses existing spatial datasets to model the daily soil water balance over the state at a 250 m cell resolution. The input datasets include PRISM precipitation data, MODIS Normalized Difference Vegetation Index (NDVI), NRCS soils data, state geology data, and reference ET estimates produced by Gridded Atmospheric Data downscalinG and Evapotranspiration Tools (GADGET). The current estimates represent diffuse recharge only, not focused recharge as in channels or playas. Direct recharge measurements are challenging and rare; we therefore estimate diffuse recharge using a water balance approach. The ETRM-simulated runoff was compared with USGS gauged discharge in four selected ephemeral channels: Mogollon Creek, Zuni River, the Rio Puerco above Bernardo, and the Rio Puerco above Arroyo Chico. Results showed that focused recharge is important and that basin characteristics can be linked with watershed hydrological response. Because the sparse instrumentation in New Mexico provides limited help in improving estimates of focused recharge by linking basin characteristics, the Walnut Gulch Experimental Watershed, one of the most densely gauged and monitored semiarid rangeland watersheds for hydrology research, is now being modeled with ETRM. The higher spatial resolution of its field data is expected to enable detailed comparison of modeled recharge with measured transmission losses in ephemeral channels. The final ETRM product will establish an algorithm to estimate groundwater recharge as a water budget component for the entire state of New Mexico. Reference ET estimated by GADGET
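
    The soil water balance idea can be sketched with a single-cell daily "bucket" model; the sketch below is a generic formulation with invented parameters and a simplistic ET treatment, not the actual ETRM algorithm.

```python
# Single-cell daily soil-water "bucket" balance: storage fills with
# precipitation, drains by ET, and water above field capacity becomes
# diffuse recharge. All numbers are illustrative.
def daily_water_balance(precip, ref_et, capacity_mm=100.0, kc=0.6):
    storage, recharge = 0.5 * capacity_mm, 0.0
    for p, et0 in zip(precip, ref_et):
        et = min(kc * et0, storage + p)        # actual ET limited by available water
        storage += p - et
        if storage > capacity_mm:              # surplus percolates below the root zone
            recharge += storage - capacity_mm
            storage = capacity_mm
    return recharge

# One wet month (mm/day) followed by a dry, high-ET month.
precip = [8, 0, 12, 0, 0, 20, 0, 5] * 4 + [0] * 30
ref_et = [1.5] * 32 + [6.0] * 30
print(f"diffuse recharge: {daily_water_balance(precip, ref_et):.1f} mm")
```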

  19. Femoral anteversion and tibial torsion only explain 25% of variance in regression analysis of foot progression angle in children with diplegic cerebral palsy

    Science.gov (United States)

    2013-01-01

    Background: The relationship between torsional bony deformities and rotational gait parameters has not been sufficiently investigated. This study investigated the degree to which torsional bony deformities contribute to rotational gait parameters in patients with diplegic cerebral palsy (CP). Methods: Thirty-three legs from 33 consecutive ambulatory patients (average age 9.5 years, SD 6.9 years; 20 males and 13 females) with diplegic CP who underwent preoperative three-dimensional gait analysis, foot radiographs, and computed tomography (CT) were included. Adjusted foot progression angle (FPA) was derived from gait analysis by correcting conventional FPA for pelvic rotation, representing the rotational gait deviation of the lower extremity from the tip of the femoral head to the foot. Correlations between rotational gait parameters (FPA, adjusted FPA, average pelvic rotation, average hip rotation, and average knee rotation) and radiologic measurements (acetabular version, femoral anteversion, knee torsion, tibial torsion, and anteroposterior talo-first metatarsal angle) were analyzed. Multiple regression analysis was performed to identify radiographic measurements contributing significantly to adjusted FPA. Results: Adjusted FPA was significantly correlated with FPA (r=0.837, p<0.001). In multiple regression analysis, femoral anteversion (p=0.026) and tibial torsion (p=0.034) were found to be the significant contributing structural deformities to the adjusted FPA (R2=0.247). Conclusions: Femoral anteversion and tibial torsion were found to be the significant structural deformities affecting adjusted FPA in patients with diplegic CP, yet together they could explain only 24.7% of the variance in adjusted FPA. PMID:23767833

  20. Racial and Ethnic Disparities in Health and Health Care: an Assessment and Analysis of the Awareness and Perceptions of Public Health Workers Implementing a Statewide Community Transformation Grant in Texas.

    Science.gov (United States)

    Akinboro, Oladimeji; Ottenbacher, Allison; Martin, Marcus; Harrison, Roderick; James, Thomas; Martin, Eddilisa; Murdoch, James; Linnear, Kim; Cardarelli, Kathryn

    2016-03-01

    Little is known about the awareness of public health professionals regarding racial and ethnic disparities in health in the United States of America (USA). Our study objective was to assess the awareness and perceptions of a group of public health workers in Texas regarding racial health disparities and their chief contributing causes. We surveyed public health professionals working on a statewide grant in Texas who were participants at health disparities training workshops. Multivariable logistic regression was employed to examine the association between the participants' characteristics and their perceptions of the social determinants of health as principal causes of health disparities. There were 106 respondents, of whom 38% and 35% worked in health departments and non-profit organizations, respectively. The racial/ethnic groups with the highest incidence of HIV/AIDS and hypertension were correctly identified by 63% and 50% of respondents, respectively, but only 17% and 32% were knowledgeable regarding diabetes and cancer, respectively. Seventy-one percent of respondents perceived that health disparities are driven by the major axes of the social determinants of health. Exposure to information about racial/ethnic health disparities within the prior year was associated with higher odds of perceiving that social determinants of health were causes of health disparities (OR 9.62; 95% CI 2.77, 33.41). Among public health workers, recent exposure to information regarding health disparities may be associated with their perceptions of health disparities. Further research is needed to investigate the impact of such exposure on their long-term perception of disparities, as well as the equity of services and programs they administer.

  1. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
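
    The benefit of shrinking per-variable variance estimators toward a pooled value, which underlies approaches of this kind, can be seen in a few lines. The sketch below uses plain linear shrinkage with a fixed weight on simulated data; it is not the MVR package's clustering-based joint regularization.

```python
# Shrinking noisy per-variable variances toward the pooled mean trades a
# little bias for much lower estimator variance when the sample size is tiny.
import numpy as np

rng = np.random.default_rng(0)
p, n = 5000, 4                         # many variables, tiny sample size
data = rng.normal(0, 1, size=(p, n))   # true variance is 1 for every variable

s2 = data.var(axis=1, ddof=1)          # noisy per-variable variance estimates
pooled = s2.mean()
lam = 0.8                              # shrinkage weight (data-driven in practice)
s2_shrunk = lam * pooled + (1 - lam) * s2

print("RMSE raw   :", np.sqrt(np.mean((s2 - 1.0) ** 2)))
print("RMSE shrunk:", np.sqrt(np.mean((s2_shrunk - 1.0) ** 2)))   # much smaller
```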

  2. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few have been proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.

  3. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  4. 23 CFR 450.222 - Applicability of NEPA to statewide transportation plans and programs.

    Science.gov (United States)

    2010-04-01

    ... the Secretary concerning a long-range statewide transportation plan or STIP developed through the... 23 Highways 1 2010-04-01 2010-04-01 false Applicability of NEPA to statewide transportation plans... TRANSPORTATION PLANNING AND RESEARCH PLANNING ASSISTANCE AND STANDARDS Statewide Transportation Planning and...

  5. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    -wide significant MVDE genes. Our results indicate the tremendous potential gain of integrating informative variance heterogeneity after adjusting for global confounders and background data structure. The proposed informative integration test better summarizes the impacts of condition change on expression distributions of susceptible genes than do the existing competitors. Therefore, particular attention should be paid to explicitly exploiting the variance heterogeneity induced by condition change in functional genomics analysis.

  6. Piloting a Statewide Home Visiting Quality Improvement Learning Collaborative.

    Science.gov (United States)

    Goyal, Neera K; Rome, Martha G; Massie, Julie A; Mangeot, Colleen; Ammerman, Robert T; Breckenridge, Jye; Lannon, Carole M

    2017-02-01

    Objective: To pilot test a statewide quality improvement (QI) collaborative learning network of home visiting agencies. Methods: The project timeline was June 2014-May 2015. The overall objectives of this 8-month initiative were to assess the use of collaborative QI to engage local home visiting agencies and to test the use of statewide home visiting data for QI. Outcome measures were mean time from referral to first home visit, percentage of families with at least three home visits per month, mean duration of participation, and exit rate among infants learning. A statewide data system was used to generate monthly run charts. Results: Mean time from referral to first home visit was 16.7 days, and 9.4% of families received ≥3 visits per month. Mean participation was 11.7 months, and the exit rate among infants learning network, agencies tested and measured changes using statewide and internal data. Potential next steps are to develop and test new metrics with current pilot sites and a larger collaborative.

  7. A guide for statewide impaired-driving task forces.

    Science.gov (United States)

    2009-09-01

    The purpose of the guide is to assist State officials and other stakeholders who are interested in establishing an : Impaired-Driving Statewide Task Force or who are exploring ways to improve their current Task Force. The guide : addresses issues suc...

  8. California Statewide Plug-In Electric Vehicle Infrastructure Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, Marc; Helwig, Michael

    2014-05-01

    The California Statewide Plug-In Electric Vehicle Infrastructure Assessment conveys to interested parties the Energy Commission’s conclusions, recommendations, and intentions with respect to plug-in electric vehicle (PEV) infrastructure development. There are several relatively low-risk and high-priority electric vehicle supply equipment (EVSE) deployment options that will encourage PEV sales and

  9. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Methods: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60%, and 100%) were applied before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV), and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). EDV, ESV, and LVEF values were compared by analysis of variance using SPSS software. Results: there was no statistical difference between the three groups. Conclusion: when arrhythmia patients undergo gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on EDV, ESV, or LVEF values. (authors)

  10. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
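
    The note's argument can be reproduced numerically. For a prospect paying a fixed gain G with probability p and nothing otherwise, a mean-variance objective can rank a smaller win probability above a larger one; G and the risk weight k below are arbitrary.

```python
# A mean-variance objective U = mean - k * variance applied to a Bernoulli
# gain: it can prefer a LOWER probability of the same fixed gain, violating
# the dominance principle the note describes. G and k are arbitrary choices.
G, k = 100.0, 0.02

def mv_utility(p):
    mean = p * G                       # E[X] for the Bernoulli gain
    variance = p * (1 - p) * G ** 2    # Var[X]
    return mean - k * variance

for p in (0.05, 0.10, 0.20):
    print(f"p = {p:.2f}: U = {mv_utility(p):+.2f}")
# U(0.05) = -4.50 > U(0.20) = -12.00: this decision maker prefers a 5%
# chance of the gain to a 20% chance of the very same gain.
```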

  11. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  12. Development of statewide geriatric patients trauma triage criteria.

    Science.gov (United States)

    Werman, Howard A; Erskine, Timothy; Caterino, Jeffrey; Riebe, Jane F; Valasek, Tricia

    2011-06-01

    The geriatric population is unique in the type of traumatic injuries sustained, physiological responses to those injuries, and an overall higher mortality when compared to younger adults. No published, evidence-based, geriatric-specific field destination criteria exist as part of a statewide trauma system. The Trauma Committee of the Ohio Emergency Medical Services (EMS) Board sought to develop specific criteria for geriatric trauma victims. A literature search was conducted for all relevant literature to determine potential, geriatric-specific, field-destination criteria. Data from the Ohio Trauma Registry were used to compare elderly patients, defined as age >70 years, to all patients between the ages of 16 to 69 years with regards to mortality risk in the following areas: (1) Glasgow Coma Scale (GCS) score; (2) systolic blood pressure (SBP); (3) falls associated with head, chest, abdominal or spinal injury; (4) mechanism of injury; (5) involvement of more than one body system as defined in the Barell matrix; and (6) co-morbidities and motor vehicle collision with one or more long bone fracture. For GCS score and SBP, those cut-off points with equal or greater risk of mortality as compared to current values were chosen as proposed triage criteria. For other measures, any criterion demonstrating a statistically significant increase in mortality risk was included in the proposed criteria. The following criteria were identified as geriatric-specific criteria: (1) GCS score trauma; (2) SBP trauma. In addition, these data suggested that elderly patients with specific co-morbidities be given strong consideration for evaluation in a trauma center. The state of Ohio is the first state to develop evidence-based geriatric-specific field-destination criteria using data from its state-mandated trauma registry. Further analysis of these criteria will help determine their effects on over-triage and under-triage of geriatric victims of traumatic injuries and the impact on the

  13. No-migration variance petition

    International Nuclear Information System (INIS)

    1990-03-01

    Volume III contains the following attachments: TRUPACT-II content codes (TRUCON); TRUPACT-II chemical list; chemical compatibility analysis for Rocky Flats Plant waste forms (Appendix 2.10.12 of TRUPACT-II safety analysis report); and chemical compatibility analyses for waste forms across all sites

  14. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  15. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  16. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
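
    The proposed plot reduces to a few lines of code. Below it is sketched on simulated data in which the coefficient of variation is constant across environments, so the fitted slope of log(variance) against log(mean) should approach the reference value of 2; all trait values are invented.

```python
# log(variance) versus log(mean) across an environmental gradient, with the
# slope compared to 2 (variance scaling with the squared mean, constant CV).
import numpy as np

rng = np.random.default_rng(0)
environments = np.linspace(15, 30, 6)          # e.g. rearing temperatures
log_mean, log_var = [], []
for t in environments:
    trait = rng.normal(loc=2.0 * t, scale=0.08 * t, size=200)  # constant CV
    log_mean.append(np.log(trait.mean()))
    log_var.append(np.log(trait.var(ddof=1)))

slope = np.polyfit(log_mean, log_var, 1)[0]
print(f"fitted slope: {slope:.2f}  (reference: 2 for constant CV)")
```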

  17. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  18. No-migration variance petition

    International Nuclear Information System (INIS)

    1990-03-01

    Volume II contains Appendix A, the emergency plan, and Appendix B, the waste analysis plan. The Waste Isolation Pilot Plant (WIPP) Emergency Plan and Procedures (WP 12-9, Rev. 5, 1989) provides an organized plan of action for dealing with emergencies at the WIPP. A contingency plan is included which is in compliance with 40 CFR Part 265, Subpart D. The waste analysis plan provides a description of the chemical and physical characteristics of the wastes to be emplaced in the WIPP underground facility. A detailed discussion of the WIPP Waste Acceptance Criteria and the rationale for its established units are also included

  19. No-migration variance petition

    International Nuclear Information System (INIS)

    1990-03-01

    Volume V contains the appendices for: closure and post-closure plans; RCRA ground water monitoring waiver; Waste Isolation Division Quality Program Manual; water quality sampling plan; WIPP Environmental Procedures Manual; sample handling and laboratory procedures; data analysis; and Annual Site Environmental Monitoring Report for the Waste Isolation Pilot Plant

  20. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    Science.gov (United States)

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions is often caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable alone is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less desirable and less flexible analytical techniques, such as linear interpolation.
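
    A minimal example of the Box-Cox step on simulated skewed, heteroscedastic toxicity-like data follows; scipy estimates the transformation parameter lambda by maximum likelihood, and the doses and responses are invented.

```python
# Box-Cox transformation of a right-skewed, heteroscedastic response prior
# to model fitting. All data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
concentration = np.repeat([1.0, 3.2, 10.0, 32.0], 10)
# Skewed response whose spread depends on its mean (declining with dose).
response = rng.lognormal(mean=np.log(50.0 / (1 + concentration / 5.0)), sigma=0.3)

transformed, lam = stats.boxcox(response)   # lambda chosen by maximum likelihood
print(f"estimated lambda: {lam:.2f}")
print("skewness before:", stats.skew(response), "after:", stats.skew(transformed))
```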

  1. Correspondence of biological condition models of California streams at statewide and regional scales

    Science.gov (United States)

    May, Jason T.; Brown, Larry R.; Rehn, Andrew C.; Waite, Ian R.; Ode, Peter R; Mazor, Raphael D; Schiff, Kenneth C

    2015-01-01

    We used boosted regression trees (BRT) to model stream biological condition as measured by benthic macroinvertebrate taxonomic completeness, the ratio of observed to expected (O/E) taxa. Models were developed with and without exclusion of rare taxa at a site. BRT models are robust, requiring few assumptions compared with traditional modeling techniques such as multiple linear regression. The BRT models were constructed to provide baseline support to stressor delineation by identifying natural physiographic and human land use gradients affecting stream biological condition statewide and for eight ecological regions within the state, as part of the development of numerical biological objectives for California’s wadeable streams. Regions were defined on the basis of ecological, hydrologic, and jurisdictional factors and roughly corresponded with ecoregions. Physiographic and land use variables were derived from geographic information system coverages. The model for the entire state (n = 1,386) identified a composite measure of anthropogenic disturbance (the sum of urban, agricultural, and unmanaged roadside vegetation land cover) within the local watershed as the most important variable, explaining 56 % of the variance in O/E values. Models for individual regions explained between 51 and 84 % of the variance in O/E values. Measures of human disturbance were important in the three coastal regions. In the South Coast and Coastal Chaparral, local watershed measures of urbanization were the most important variables related to biological condition, while in the North Coast the composite measure of human disturbance at the watershed scale was most important. In the two mountain regions, natural gradients were most important, including slope, precipitation, and temperature. The remaining three regions had relatively small sample sizes (n ≤ 75 sites) and had models that gave mixed results. Understanding the spatial scale at which land use and land cover affect
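
    A generic boosted-regression-tree fit of the kind used in the study can be sketched with scikit-learn; the predictors and the O/E response below are synthetic stand-ins for the GIS-derived gradients, not the California data.

```python
# Boosted regression trees relating a synthetic O/E score to a human
# disturbance gradient and a natural gradient, with variable importances
# analogous to the paper's rankings. All data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
disturbance = rng.uniform(0, 1, n)        # fraction urban + agriculture in watershed
slope = rng.uniform(0, 30, n)             # natural physiographic gradient
oe = 1.0 - 0.6 * disturbance + 0.005 * slope + rng.normal(0, 0.1, n)

X = np.column_stack([disturbance, slope])
brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3).fit(X, oe)
print(dict(zip(["disturbance", "slope"], brt.feature_importances_)))
```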

  2. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  3. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  4. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  5. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  6. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic

  7. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
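
    A minimal numerical sketch of the mean-variance model this record describes: minimize portfolio variance w'Σw subject to a target return and full investment, solved via the Lagrangian (KKT) linear system. The returns below are simulated placeholders, not the FBMKLCI data used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.002, 0.02, size=(260, 5))  # weekly returns, 5 stocks
mu = returns.mean(axis=0)                         # expected weekly returns
sigma = np.cov(returns, rowvar=False)             # sample covariance matrix
target = 0.002                                    # target weekly return

# Lagrangian conditions for: min w'Σw  s.t.  mu'w = target, 1'w = 1
n = len(mu)
ones = np.ones(n)
kkt = np.block([
    [2.0 * sigma,   mu[:, None],      ones[:, None]],
    [mu[None, :],   np.zeros((1, 1)), np.zeros((1, 1))],
    [ones[None, :], np.zeros((1, 1)), np.zeros((1, 1))],
])
rhs = np.concatenate([np.zeros(n), [target, 1.0]])
w = np.linalg.solve(kkt, rhs)[:n]                 # optimal weights

print("weights:", w.round(3))
print("return:", w @ mu, " variance (risk):", w @ sigma @ w)
```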

  8. Genetic and environmental variance in content dimensions of the MMPI.

    Science.gov (United States)

    Rose, R J

    1988-08-01

    To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than those of twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrixes of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.

  9. No-migration variance petition

    International Nuclear Information System (INIS)

    1990-03-01

    Volume IV contains the following attachments: TRU mixed waste characterization database; hazardous constituents of Rocky Flats transuranic waste; summary of waste components in TRU waste sampling program at INEL; total volatile organic compounds (VOC) analyses at Rocky Flats Plant; total metals analyses from Rocky Flats Plant; results of toxicity characteristic leaching procedure (TCLP) analyses; results of extraction procedure (EP) toxicity data analyses; summary of headspace gas analysis in Rocky Flats Plant (RFP) sampling program FY 1988; waste drum gas generation sampling program at Rocky Flats Plant during FY 1988; TRU waste sampling program -- volume one; TRU waste sampling program -- volume two; summary of headspace gas analyses in TRU waste sampling program; and summary of volatile organic compounds (VOC) analyses in TRU waste sampling program.

  10. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR) mainly to maximize return and minimize risk. However most of the approaches assume that the distribution of data is normal and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach has successfully catered both types of normal and non-normal distribution of data. With this actual representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolio which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable to produce a lower risk for each return earning as compared to the mean-variance approach.

  11. A School-Based Dental Program Evaluation: Comparison to the Massachusetts Statewide Survey.

    Science.gov (United States)

    Culler, Corinna S; Kotelchuck, Milton; Declercq, Eugene; Kuhlthau, Karen; Jones, Kari; Yoder, Karen M

    2017-10-01

    School-based dental programs target high-risk communities and reduce barriers to obtaining dental services by delivering care to students in their schools. We describe the evaluation of a school-based dental program operating in Chelsea, a city north of Boston, with a low-income and largely minority population, by comparing participants' oral health to a Massachusetts oral health assessment. Standardized dental screenings were conducted for students in kindergarten, third, and sixth grades. Outcomes were compared in bivariate analysis, stratified by grade and income levels. A greater percentage of Chelsea students had untreated decay and severe treatment need than students statewide. Yet, fewer Chelsea third graders had severe treatment need, and more had dental sealants. There was no significant difference in the percentage of Chelsea students having severe treatment need or dental sealants by income level. Students participating in our program do not have lower decay levels than students statewide. However, they do have lower levels of severe treatment need, likely due to treatment referrals. Our results confirm that school-based prevention programs can lead to increased prevalence of dental sealants among high-risk populations. Results provide support for the establishment of full-service school-based programs in similar communities. © 2017, American School Health Association.

  12. Avoiding a knowledge gap in a multiethnic statewide social marketing campaign: is cultural tailoring sufficient?

    Science.gov (United States)

    Buchthal, O Vanessa; Doff, Amy L; Hsu, Laura A; Silbanuz, Alice; Heinrich, Katie M; Maddock, Jay E

    2011-03-01

    In 2007, the State of Hawaii's Healthy Hawaii Initiative conducted a statewide social-marketing campaign promoting increased physical activity and nutrition. The campaign included substantial formative research to develop messages tailored for Hawaii's multiethnic Asian and Pacific Islander populations. The authors conducted a statewide random digit dialing telephone survey to assess the campaign's comparative reach among individuals with different ethnicities and different levels of education and income. This analysis suggests that the intervention was successful in reaching its target ethnic audiences. However, a knowledge gap related to the campaign appeared among individuals with incomes less than 130% of the poverty level and those with less than a high school education. These results varied significantly by message and the communication channel used. Recall of supermarket-based messages was significantly higher among individuals below 130% of the poverty level and those between 18 and 35 years of age, 2 groups that showed consistently lower recall of messages in other channels. Results suggest that cultural tailoring for ethnic audiences, although important, is insufficient for reaching low-income populations, and that broad-based social marketing campaigns should consider addressing socioeconomic status-related channel preferences in formative research and campaign design.

  13. Estimation of the additive and dominance variances in South African ...

    African Journals Online (AJOL)

    The objective of this study was to estimate dominance variance for number born alive (NBA), 21- day litter weight (LWT21) and interval between parities (FI) in South African Landrace pigs. A total of 26223 NBA, 21335 LWT21 and 16370 FI records were analysed. Bayesian analysis via Gibbs sampling was used to estimate ...

  14. Estimates of variance components for postweaning feed intake and ...

    African Journals Online (AJOL)

    Mike

    2013-03-09

    Mar 9, 2013 … transformation of RFIp and RDGp to z-scores (mean = 0.0, variance = 1.0) and then … generation pedigree (n = 9 653) used for this analysis. … Nkrumah, J.D., Basarab, J.A., Wang, Z., Li, C., Price, M.A., Okine, E.K., Crews Jr., …

  15. Molecular variance of the Tunisian almond germplasm assessed by ...

    African Journals Online (AJOL)

    The genetic variance analysis of 82 almond (Prunus dulcis Mill.) genotypes was performed using ten genomic simple sequence repeats (SSRs). A total of 50 genotypes from Tunisia including local landraces identified while prospecting the different sites of Bizerte and Sidi Bouzid (Northern and central parts) which are the ...

  16. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative

  17. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies the pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk.

  18. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  19. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  20. 23 CFR 450.216 - Development and content of the statewide transportation improvement program (STIP).

    Science.gov (United States)

    2010-04-01

    ... Programming § 450.216 Development and content of the statewide transportation improvement program (STIP). (a... Equity Bonus funds; (5) Emergency relief projects (except those involving substantial functional...

  1. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P …) … mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
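
    A hedged sketch of the kind of variance-heterogeneity test such vSNP scans rely on (an illustration, not the authors' pipeline): for one SNP, compare the spread of a trait such as a DNA methylation level across the three genotype groups with the Brown-Forsythe variant of Levene's test. All data below are simulated.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(1)
# Genotype groups 0/1/2 with equal means but genotype-dependent spread.
groups = [rng.normal(0.5, sd, size=200) for sd in (0.05, 0.08, 0.12)]

stat, p = levene(*groups, center="median")  # Brown-Forsythe version
print(f"Brown-Forsythe W = {stat:.2f}, p = {p:.2e}")
```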

  2. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  3. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic…

  4. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming, including Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and makes them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry was repeated several times, the rule or requirement targeted by that variance had already been bypassed many times, suggesting that the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. The project was not restricted to the design and development of the database system; it also covered exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed more simply. Thanks to the change in format, the data could be organized in a spreadsheet, sorted by category or type, and searched more quickly. Once my work with the database was done, the records of variances could be arranged in numerical order, or one could search for a specific document targeted by the variances and restrict the search to variances that modified a specific requirement. SATERN, NASA's resource for education, contributed greatly to my learning: thanks to the online courses I took over the summer, I learned many new things about computers and databases and explored in more depth topics I already knew.

  5. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  6. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  7. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
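
    The following illustrative sketch (not the authors' code) shows the core idea: realized variance, the sum of squared intraday returns, approaches the integrated variance of a stochastic-volatility model as the number of returns per day M grows. The AR(1) log-volatility and all parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 23400                        # one trading day at one-second resolution
dt = 1.0 / n
# Assumed SV model: slowly mean-reverting AR(1) log-volatility.
logvol = np.empty(n)
logvol[0] = np.log(0.01)
for t in range(1, n):
    logvol[t] = 0.999 * logvol[t - 1] + 0.001 * np.log(0.01) + 0.01 * rng.normal()
sigma = np.exp(logvol)
r = sigma * np.sqrt(dt) * rng.normal(size=n)   # intraday log-returns
iv = np.sum(sigma**2 * dt)                     # integrated variance (target)

for m in (13, 78, 390, 23400):                 # number of returns per day
    step = n // m
    coarse = np.add.reduceat(r, np.arange(0, n, step))  # coarser returns
    print(f"M = {m:5d}   RV = {np.sum(coarse**2):.6f}   IV = {iv:.6f}")
```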

  8. Are Statewide Data Systems Meeting the Local Institution's Needs? AIR Forum Paper 1978.

    Science.gov (United States)

    Bryson, Charles H.

    Statewide data collection systems emerged in the late sixties as the vehicle to achieving greater efficiency and accountability in higher education. The expectations of statewide systems were that they would meet the needs of various levels of management. The example presented in this paper is the Georgia management information system and its…

  9. Is fMRI 'noise' really noise? Resting state nuisance regressors remove variance with network structure

    OpenAIRE

    Bright, Molly G.; Murphy, Kevin

    2015-01-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed ...

  10. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed in an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multi period mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  11. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answer to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).

  12. Developing a statewide public health initiative to reduce infant mortality in Oklahoma.

    Science.gov (United States)

    Dooley, Suzanna; Patrick, Paul; Lincoln, Alicia; Cline, Janette

    2014-01-01

    The Preparing for a Lifetime, It's Everyone's Responsibility initiative was developed to improve the health and well-being of Oklahoma's mothers and infants. The development phase included systematic data collection, extensive data analysis, and multi-disciplinary partnership development. In total, seven issues (preconception/interconception health, tobacco use, postpartum depression, breastfeeding, infant safe sleep, preterm birth, and infant injury prevention) were identified as crucial to addressing infant mortality in Oklahoma. Workgroups were created to focus on each issue. Data and media communications workgroups were added to further partner commitment and support for policy and programmatic changes across multiple agencies and programs. Leadership support, partnership, evaluation, and celebrating small successes were important factors that led to large-scale adoption and support for the statewide initiative to reduce infant mortality.

  13. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
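
    A rough simulation sketch of the model class described above, under assumed parameters (not the authors' implementation): the EMG amplitude is conditionally Gaussian with zero mean, its variance follows an inverse gamma distribution, and the variance-law parameters are recovered from crude windowed variance estimates.

```python
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(3)
alpha, beta = 4.0, 3.0                  # assumed shape/scale of the variance law
n_windows, win = 2000, 50
true_var = invgamma.rvs(alpha, scale=beta, size=n_windows, random_state=rng)
# Conditionally Gaussian EMG: zero mean, variance drawn per short window.
emg = rng.normal(0.0, np.sqrt(np.repeat(true_var, win)))

# Crude per-window variance estimates (stand-in for rectify-and-smooth).
var_hat = emg.reshape(n_windows, win).var(axis=1)
a_hat, _, scale_hat = invgamma.fit(var_hat, floc=0)
print(f"alpha: true {alpha}, fitted {a_hat:.2f}; beta: true {beta}, fitted {scale_hat:.2f}")
```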

  14. Characteristics and Effects of a Statewide STEM Program

    Directory of Open Access Journals (Sweden)

    Jeffrey D. Weld

    2015-10-01

    A comprehensive statewide STEM (science, technology, engineering, mathematics) reform initiative enters its fifth year in the U.S. state of Iowa. A significant proportion of the state’s pre-K-12 students and teachers participate in one or more of the twenty programs offered, ranging from classroom curricular innovations to teacher professional development, and from community STEM festivals to career exploration events. An external, inter-university evaluation consortium measures annual progress of the initiative through the Iowa STEM Monitoring Project. Results show citizens to be increasingly aware of and supportive of STEM education; students to be increasingly interested in STEM as well as outperforming nonparticipating peers on state math and science tests; and teachers more confident and knowledgeable in teaching STEM. Iowa’s STEM initiative has garnered national acclaim, though challenges remain with regard to expanding the participation of learners of diversity, as well as ensuring the long-term sustainability of the programs and structures that define Iowa’s statewide STEM initiative.

  15. A Statewide Survey for Container-Breeding Mosquitoes in Mississippi.

    Science.gov (United States)

    Goddard, Jerome; Moraru, Gail M; Mcinnis, Sarah J; Portugal, J Santos; Yee, Donald A; Deerman, J Hunter; Varnado, Wendy C

    2017-09-01

    Container-breeding mosquitoes are important in public health due to outbreaks of Zika, chikungunya, and dengue viruses. This paper documents the distribution of container-breeding mosquito species in Mississippi, with special emphasis on the genus Aedes. Five sites in each of the 82 Mississippi counties were sampled monthly between May 1 and August 31, 2016, and 50,109 mosquitoes in 14 species were collected. The most prevalent and widely distributed species found was Ae. albopictus, being found in all 82 counties, especially during July. A recent invasive, Ae. japonicus, seems to be spreading rapidly in Mississippi since first being discovered in the state in 2011. The most abundant Culex species collected were Cx. quinquefasciatus (found statewide), Cx. salinarius (almost exclusively in the southern portion of the state), and Cx. restuans (mostly central and southern Mississippi). Another relatively recent invasive species, Cx. coronator, was found in 20 counties, predominantly in the southern one-third of the state during late summer. Co-occurrence data of mosquito species found in the artificial containers were also documented and analyzed. Lastly, even though we sampled extensively in 410 sites across Mississippi, no larval Ae. aegypti were found. These data represent the first modern statewide survey of container species in Mississippi, and as such, allows for better public health readiness for emerging diseases and design of more effective vector control programs.

  16. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  17. 78 FR 14122 - Revocation of Permanent Variances

    Science.gov (United States)

    2013-03-04

    ... Douglas Fir planking had to have at least a 1,900 fiber stress and 1,900,000 modulus of elasticity, while the Yellow Pine planking had to have at least 2,500 fiber stress and 2,000,000 modulus of elasticity... the permanent variances, and affected employees, to submit written data, views, and arguments...

  18. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts...

  19. Biological Variance in Agricultural Products. Theoretical Considerations

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Konopacki, P.

    2003-01-01

    The food that we eat is uniform neither in shape or appearance nor in internal composition or content. Since technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were

  20. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  1. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  2. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

  3. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
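
    To make the discrete-versus-continuous distinction concrete, here is a hedged sketch of the textbook static replication: approximate the fair variance-swap strike from a finite strip of out-of-the-money option prices with strike-spacing weights. Option prices come from Black-Scholes with flat volatility and zero rates (assumptions for illustration), so the discrete estimate should land near sigma^2; the paper's specific strategies are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def bs_price(F, K, T, sigma, call):
    """Undiscounted Black (r = 0) option price on forward F."""
    d1 = (np.log(F / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if call:
        return F * norm.cdf(d1) - K * norm.cdf(d2)
    return K * norm.cdf(-d2) - F * norm.cdf(-d1)

F, T, sigma = 100.0, 0.5, 0.2
strikes = np.arange(50.0, 151.0, 5.0)           # discrete traded strikes
otm = np.array([bs_price(F, K, T, sigma, call=(K >= F)) for K in strikes])

dK = np.gradient(strikes)                       # strike spacings (weights)
k_var = (2.0 / T) * np.sum(dK / strikes**2 * otm)
print(f"discretely replicated variance strike: {k_var:.5f} (sigma^2 = {sigma**2:.5f})")
```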

  4. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and
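
    A toy illustration of the bias discussed in this record, under assumed parameter values: adding bid-ask bounce to an efficient log-price inflates tick-by-tick realized variance far above the true integrated variance, while sparser sampling tempers the bias.

```python
import numpy as np

rng = np.random.default_rng(4)
n, dt = 23400, 1.0 / 23400                     # one day of one-second ticks
vol = 0.2 / np.sqrt(252)                       # assumed daily volatility
efficient = np.cumsum(vol * np.sqrt(dt) * rng.normal(size=n))
noise = 0.0005 * rng.choice([-1.0, 1.0], size=n)   # bid-ask bounce
observed = efficient + noise

def rv(log_prices, step):
    return np.sum(np.diff(log_prices[::step]) ** 2)

print("true IV          :", vol**2)
for step in (1, 60, 300):                      # every tick, 1 min, 5 min
    print(f"RV, step = {step:3d} :", rv(observed, step))
```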

  5. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
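
    As a concrete instance of one classic variance reduction technique (antithetic variates; the integrand and sample size are arbitrary choices for illustration): estimate E[f(Z)] by averaging f(Z) and f(-Z), whose negative correlation cancels noise.

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda z: np.exp(z)          # E[f(Z)] = e^0.5 for Z ~ N(0, 1)

n = 100_000
z = rng.normal(size=n)
plain = f(z)
pairs = 0.5 * (f(z) + f(-z))     # antithetic pairs (2 evaluations each)

print("true value :", np.exp(0.5))
print("plain MC   :", plain.mean(), " est. variance:", plain.var() / n)
# Even charging two evaluations per pair, the pair variance here is well
# under half the plain variance, so the technique pays off.
print("antithetic :", pairs.mean(), " est. variance:", pairs.var() / n)
```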

  6. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from … time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil. … Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional…

  7. The smoking ban next door: do hospitality businesses in border areas have reduced sales after a statewide smoke-free policy?

    Science.gov (United States)

    Klein, Elizabeth G; Hood, Nancy E

    2015-01-01

    Despite numerous studies demonstrating no significant economic effects on hospitality businesses following a statewide smoke-free (SF) policy, regional concerns suggest that areas near states without SF policies may experience a loss of hospitality sales across the border. The present study evaluated the impact of Ohio's statewide SF policy on taxable restaurant and bar sales in border and non-border areas. Spline regression analysis was used to assess changes in monthly taxable sales at the county level in full-service restaurants and bars in Ohio. Data were analyzed from four years prior to policy implementation to three years post-policy. Changes in the difference in the slope of taxable sales between border (n = 21) and non-border (n = 67) counties were evaluated following statewide SF policy enforcement, adjusted for unemployment rates, general trends in the hospitality sector, and seasonality. After adjusting for covariates, there was no statistically significant change in the difference in slope for taxable sales for either restaurants (β = 0.9, p = 0.09) or bars (β = 0.2, p = 0.07) following the SF policy for border areas compared to non-border areas of Ohio. Border regions in Ohio did not experience a significant change in bar and restaurant sales compared to non-border areas following a statewide SF policy. Results support that Ohio's statewide SF policy did not impact these two areas differently, and provide additional evidence for the continued use of SF policies to provide protection from exposure to secondhand smoke for both workers and the general public. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
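
    A minimal sketch of the spline (piecewise-linear) regression idea used in the study, on simulated monthly sales with an assumed policy month: a hinge term lets the slope change at policy enforcement, and its coefficient is the quantity of interest. The covariate adjustment described above is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(11)
months = np.arange(84.0)                   # 4 years pre + 3 years post
policy = 48                                # assumed policy month
sales = (100 + 0.30 * months
         - 0.05 * np.maximum(months - policy, 0.0)
         + rng.normal(0.0, 1.0, size=months.size))

# Design matrix: intercept, time trend, and a hinge that activates
# after the policy month and captures the change in slope.
X = np.column_stack([np.ones_like(months), months,
                     np.maximum(months - policy, 0.0)])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
print("pre-policy slope:", beta[1], " slope change after policy:", beta[2])
```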

  8. Variance risk premia in CO2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

    The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO2 assets but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regards to gaining investors’ confidence in the market are being reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: • Enriched dataset covering all three political phases of the CO2 markets. • Clear policy implications for regulators to most effectively cap the overall CO2 emissions pool. • Applying a cross-asset benchmark index for variance beta estimation. • CER contracts have been analyzed with respect to variance risk premia for the first time. • Increased forecasting accuracy for CO2 asset returns by using variance risk premia.

  9. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  10. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both the additive genetic effects on the mean and those on environmental variance have an important influence upon the future economic performance of selected individuals.

  11. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    Science.gov (United States)

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.
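
    A small numerical sketch of the variance-decomposition perspective promoted above (simulated data; the "neighbourhood" grouping is an assumption): partition outcome variance into within- and between-group components and report the intraclass correlation alongside any means-centric measure.

```python
import numpy as np

rng = np.random.default_rng(10)
n_groups, n_per = 50, 40
group_effects = rng.normal(0.0, 0.5, size=n_groups)       # between-group SD 0.5
y = group_effects[:, None] + rng.normal(0.0, 1.5, size=(n_groups, n_per))

within = y.var(axis=1, ddof=1).mean()                      # pooled within-group
# ANOVA-style estimator; clipped at zero in case sampling noise dominates.
between = max(y.mean(axis=1).var(ddof=1) - within / n_per, 0.0)
icc = between / (between + within)
print(f"between: {between:.3f}  within: {within:.3f}  ICC: {icc:.3f}")
```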

  12. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  13. Heritable Environmental Variance Causes Nonlinear Relationships Between Traits: Application to Birth Weight and Stillbirth of Pigs

    NARCIS (Netherlands)

    Mulder, H.A.; Hill, W.G.; Knol, E.F.

    2015-01-01

    There is recent evidence from laboratory experiments and analysis of livestock populations that not only the phenotype itself, but also its environmental variance, is under genetic control. Little is known about the relationships between the environmental variance of one trait and mean levels of

  14. The Theory of Variances in Equilibrium Reconstruction

    International Nuclear Information System (INIS)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-01

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kinds of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  15. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  16. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on 316-type stainless steel (SS316) and on the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In the corresponding analyses, however, enormous working and computing time was required to determine the Weight Window parameters, and the variance reduction by the Weight Window method of the MCNP code proved limited and complicated to apply. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are shown. As a result, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. An optimal importance assignment exists: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)
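
    A toy Python illustration of cell-importance splitting (a sketch of the principle only, not MCNP, with all parameters assumed): particles crossing ten absorbing slabs are split in two, with weights halved, each time they enter a deeper (more important) cell, so many more weighted particles reach the tally without biasing the estimate.

```python
import numpy as np

rng = np.random.default_rng(6)
N_SLABS, P_ABSORB, N_SOURCE = 10, 0.5, 20_000

def transmit(splitting):
    weights = [1.0] * N_SOURCE
    for _ in range(N_SLABS):
        survivors = []
        for w in weights:
            if rng.random() < P_ABSORB:
                continue                         # absorbed in this slab
            if splitting:                        # importance doubles per cell:
                survivors += [0.5 * w, 0.5 * w]  # split in two, halve weight
            else:
                survivors.append(w)
        weights = survivors
    return sum(weights) / N_SOURCE               # transmitted weight fraction

for flag in (False, True):
    runs = [transmit(flag) for _ in range(20)]
    label = "splitting" if flag else "analogue "
    print(label, f"{np.mean(runs):.2e} +/- {np.std(runs):.1e}",
          " exact:", f"{(1 - P_ABSORB) ** N_SLABS:.2e}")
```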

  17. New Mexico statewide geothermal energy program. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Icerman, L.; Parker, S.K. (ed.)

    1988-04-01

    This report summarizes the results of geothermal energy resource assessment work conducted by the New Mexico Statewide Geothermal Energy Program during the period September 7, 1984, through February 29, 1988, under the sponsorship of the US Dept. of Energy and the State of New Mexico Research and Development Institute. The research program was administered by the New Mexico Research and Development Institute and was conducted by professional staff members at New Mexico State University and Lightning Dock Geothermal, Inc. The report is divided into four chapters, which correspond to the principal tasks delineated in the above grant. This work extends the knowledge of the geothermal energy resource base in southern New Mexico with the potential for commercial applications.

  18. Implementing Statewide Severe Maternal Morbidity Review: The Illinois Experience.

    Science.gov (United States)

    Koch, Abigail R; Roesch, Pamela T; Garland, Caitlin E; Geller, Stacie E

    2018-03-07

    Severe maternal morbidity (SMM) rates in the United States more than doubled between 1998 and 2010. Advanced maternal age and chronic comorbidities do not completely explain the increase in SMM or how to effectively address it. The Centers for Disease Control and Prevention and American College of Obstetricians and Gynecologists have called for facility-level multidisciplinary review of SMM for potential preventability and have issued implementation guidelines. Within Illinois, SMM was identified as any intensive or critical care unit admission and/or 4 or more units of packed red blood cells transfused at any time from conception through 42 days postpartum. All cases meeting this definition were counted during statewide surveillance. Cases were selected for review on the basis of their potential to yield insights into factors contributing to preventable SMM or best practices preventing further morbidity or death. If the SMM review committee deemed a case potentially preventable, it identified specific factors associated with missed opportunities and made actionable recommendations for quality improvement. Approximately 1100 cases of SMM were identified from July 1, 2016, to June 30, 2017, yielding a rate of 76 SMM cases per 10 000 pregnancies. Reviews were conducted on 142 SMM cases. Most SMM cases occurred during delivery hospitalization and more than half were delivered by cesarean section. Hemorrhage was the primary cause of SMM (>50% of the cases). Facility-level SMM review was feasible and acceptable in statewide implementation. States that are planning SMM reviews across obstetric facilities should permit ample time for translation of recommendations to practice. Although continued maternal mortality reviews are valuable, they are not sufficient to address the increasing rates of SMM and maternal death. In-depth multidisciplinary review offers the potential to identify factors associated with SMM and interventions to prevent women from moving along the

  19. Fractal fluctuations and quantum-like chaos in the brain by analysis of variability of brain waves: A new method based on a fractal variance function and random matrix theory: A link with El Naschie fractal Cantorian space-time and V. Weiss and H. Weiss golden ratio in brain

    International Nuclear Information System (INIS)

    Conte, Elio; Khrennikov, Andrei; Federici, Antonio; Zbilut, Joseph P.

    2009-01-01

    We develop a new method for analysis of fundamental brain waves as recorded by the EEG. To this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given. We also discuss the link of such formulation with H. Weiss and V. Weiss golden ratio found in the brain, and with El Naschie fractal Cantorian space-time theory.

  20. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed, which causes difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, though less decisively. In this investigation, the genotypic variance and the genotype-environment interaction variance, which contribute to the total (phenotypic) variance, were analysed with the purpose of giving the breeder an idea of how selection should be made. The genotype-environment interaction variance was found to contribute more than the genotypic variance to the total variance in seed protein determination and in yield. In the analysis of the time required to reach maturity, the genotypic variance was larger than the genotype-environment interaction variance. It is therefore clear why selection for time to maturity is much easier than selection for protein or yield. Protein levels selected in one location may differ in other locations. (author)

  1. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1, c2) are studied. It is assumed that the inspection scheme is rejection rectification, i.e., rejected lots are rectified before release. Procedures for designing MDS-1(c1, c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan with a designated upper limit on the variance of the OQ (VOQL) is outlined.

  2. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
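
    A hedged sketch of the modality-classification idea (thresholds and data are illustrative assumptions, not the paper's algorithm): count peaks in a kernel density estimate of each location's ensemble of predictions to separate unimodal from multimodal behavior that mean-and-variance summaries would hide.

```python
import numpy as np
from scipy.stats import gaussian_kde

def n_modes(samples, grid_pts=256, floor=0.1):
    """Count KDE peaks above a fraction of the tallest peak."""
    kde = gaussian_kde(samples)
    xs = np.linspace(samples.min(), samples.max(), grid_pts)
    d = kde(xs)
    peaks = np.flatnonzero((d[1:-1] > d[:-2]) & (d[1:-1] > d[2:])) + 1
    return int(np.sum(d[peaks] > floor * d.max()))

rng = np.random.default_rng(7)
unimodal = rng.normal(10.0, 1.0, size=64)            # agreeing ensemble members
bimodal = np.concatenate([rng.normal(8.0, 0.5, 32),  # two divergent trends
                          rng.normal(12.0, 0.5, 32)])
for name, ens in (("unimodal", unimodal), ("bimodal ", bimodal)):
    print(name, "-> modes:", n_modes(ens), " variance:", round(ens.var(), 2))
```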

  3. Exploring variance in residential electricity consumption: Household features and building properties

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars

    2012-01-01

    Highlights: ► Statistical analyses of variance are of considerable value in identifying key indicators for policy updates. ► Variance in residential electricity use is partly explained by household features. ► Variance in residential electricity use is partly explained by building properties. ► Household behavior has a profound impact on individual electricity use. -- Abstract: Improved means of controlling electricity consumption play an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes as well as to estimate the impact of household features and building properties in this respect, using independent samples t-tests and one-way as well as univariate independent samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.
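
    As a minimal sketch of the kind of test used in such a study, the snippet below runs a one-way analysis of variance on synthetic annual-consumption samples grouped by heating system; the group means, spreads, and sample sizes are invented for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Hypothetical annual electricity use (kWh) for three heating systems.
    direct_electric = rng.normal(20000, 3000, 60)
    heat_pump = rng.normal(12000, 2500, 60)
    district_heating = rng.normal(7000, 2000, 60)

    f_stat, p_value = stats.f_oneway(direct_electric, heat_pump, district_heating)
    print(f"F = {f_stat:.1f}, p = {p_value:.2g}")   # significant group effect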

  4. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
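
    The elevation variance map can be sketched as a grid of per-cell running means and variances, updated incrementally (Welford's algorithm) as stereo elevation measurements arrive; this is a generic illustration of the data structure, not the Gamma-SLAM implementation, and the grid size and sample values are assumptions.

    import numpy as np

    class VarianceGrid:
        """Per-cell running mean and variance of terrain elevation."""
        def __init__(self, shape):
            self.n = np.zeros(shape)       # number of elevation samples per cell
            self.mean = np.zeros(shape)    # running mean elevation
            self.m2 = np.zeros(shape)      # running sum of squared deviations

        def add(self, row, col, z):
            self.n[row, col] += 1
            delta = z - self.mean[row, col]
            self.mean[row, col] += delta / self.n[row, col]
            self.m2[row, col] += delta * (z - self.mean[row, col])

        def variance(self):
            with np.errstate(invalid="ignore", divide="ignore"):
                return np.where(self.n > 1, self.m2 / (self.n - 1), np.nan)

    grid = VarianceGrid((100, 100))
    for z in (1.0, 1.2, 0.9):              # three stereo hits in one cell
        grid.add(10, 20, z)
    print(grid.variance()[10, 20])         # cell elevation variance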

  5. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...
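
    The elementary bisection step can be sketched as follows: conditioned on the two endpoints, the state at the midpoint of the interval is drawn with probability proportional to the product of the two half-interval transition probabilities, and the two halves can then be treated recursively. The 3-state rate matrix here is an invented example, not from the paper.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)

    def sample_midpoint(Q, a, b, T):
        """Sample X_{T/2} of a CTMC with rate matrix Q, given X_0 = a, X_T = b."""
        P_half = expm(Q * (T / 2.0))     # transition matrix over half the interval
        w = P_half[a, :] * P_half[:, b]  # unnormalised bridge probabilities
        return rng.choice(len(w), p=w / w.sum())

    Q = np.array([[-1.0, 0.7, 0.3],      # toy rate matrix, rows sum to zero
                  [0.4, -0.9, 0.5],
                  [0.2, 0.6, -0.8]])
    print(sample_midpoint(Q, a=0, b=2, T=1.0))

    Applying the same step to the sub-intervals (0, T/2) and (T/2, T), and so on, fills in a full endpoint-conditioned path.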

  6. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  7. Development of a statewide motorcycle safety plan for Texas : technical report.

    Science.gov (United States)

    2013-02-01

    The objective of this research project was to develop a statewide plan to reduce motorcycle crashes and injuries in the state of Texas. The project included a review of published literature on current and proposed countermeasures for reducing the...

  8. Statewide Transportation Improvement program 2011-2014 : sorted by MPO and RPA.

    Science.gov (United States)

    2010-01-01

    Iowa's Statewide Transportation Improvement Program (STIP) has been developed in conformance with the guidelines prescribed by 23 USC and 49 USC. The STIP is generated to provide the Federal Highway Administration (FHWA) and Federal Transit A...

  9. Statewide Transportation Improvement Program 2010-2013 : sorted by MPO and RPA.

    Science.gov (United States)

    2006-01-01

    Iowa's Statewide Transportation Improvement Program (STIP) has been developed in conformance with the guidelines prescribed by 23 USC. The STIP is generated to provide the Federal Highway Administration (FHWA) and Federal Transit Administrati...

  10. Statewide Transportation Improvement Program 2012-2015 : sorted by MPO and RPA.

    Science.gov (United States)

    2011-01-01

    Iowa's Statewide Transportation Improvement Program (STIP) has been developed in conformance with the guidelines prescribed by 23 USC and 49 USC. The STIP is generated to provide the Federal Highway Administration (FHWA) and Federal Transit A...

  11. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
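
    A generic sketch of the variance step only (the outer loop with full-waveform inversion iterations and recursive interpolation is omitted): given one velocity update per frequency, compute the pointwise variance across frequencies and flag the highest-variance cells as candidates for the maximum-velocity interpolation. The array shapes and the quantile threshold are assumptions.

    import numpy as np

    def high_variance_mask(updates, quantile=0.9):
        """Flag cells where single-frequency velocity updates disagree most.

        updates: array (n_freqs, nz, nx), one model update per frequency."""
        var = np.var(updates, axis=0)                  # pointwise variance
        return var, var > np.quantile(var, quantile)   # mask of suspect cells

    rng = np.random.default_rng(0)
    updates = rng.normal(size=(6, 120, 300))
    updates[:, 40:80, 100:200] += rng.normal(0, 5, (6, 40, 100))  # "salt" region
    var, mask = high_variance_mask(updates)
    print(f"{mask.mean():.2%} of cells flagged for interpolation")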

  12. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    Campanelli, Mark; Kacker, Raghu; Kessel, Rüdiger

    2013-01-01

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)
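
    Although the paper derives variance gradients analytically, the idea admits a crude numerical proxy: shrink one input about its mean so that its variance drops slightly, and take the finite-difference ratio of the change in Var(Y) to the change in Var(Xi). The sketch below applies this proxy, an assumption of ours rather than the paper's formula, to the benchmark Ishigami function with independent uniform inputs.

    import numpy as np

    def ishigami(x, a=7.0, b=0.1):
        return (np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2
                + b * x[:, 2]**4 * np.sin(x[:, 0]))

    rng = np.random.default_rng(0)
    n, eps = 200_000, 0.05
    x = rng.uniform(-np.pi, np.pi, size=(n, 3))    # independent inputs
    var_y = np.var(ishigami(x))

    for i in range(3):
        xs = x.copy()
        # Shrink input i toward its mean: Var(Xi) drops by the factor (1 - eps).
        xs[:, i] = xs[:, i].mean() + np.sqrt(1 - eps) * (xs[:, i] - xs[:, i].mean())
        d_var_y = var_y - np.var(ishigami(xs))
        d_var_x = eps * np.var(x[:, i])
        print(f"dVar(Y)/dVar(X{i + 1}) ~ {d_var_y / d_var_x:.2f}")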

  13. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
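
    For readers without SPSS, the same unequal variance normal model can be fitted by direct maximum likelihood from a table of confidence ratings; this generic fit is an alternative route, not the PLUM procedure itself, and the rating counts below are invented.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    noise = np.array([174, 172, 104, 92, 41, 8])     # ratings on noise trials
    signal = np.array([46, 57, 66, 101, 154, 173])   # ratings on signal trials

    def neg_log_lik(theta):
        d, log_s, c = theta[0], theta[1], np.sort(theta[2:])  # ordered criteria
        s = np.exp(log_s)                                     # signal sd > 0
        edges = np.concatenate([[-np.inf], c, [np.inf]])
        p_noise = np.maximum(np.diff(norm.cdf(edges)), 1e-12)            # N(0, 1)
        p_signal = np.maximum(np.diff(norm.cdf((edges - d) / s)), 1e-12)  # N(d, s)
        return -(noise @ np.log(p_noise) + signal @ np.log(p_signal))

    theta0 = np.concatenate([[1.0, 0.0], np.linspace(-1.0, 2.0, 5)])
    fit = minimize(neg_log_lik, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000})
    print(f"d = {fit.x[0]:.2f}, signal sd = {np.exp(fit.x[1]):.2f}")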

  14. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  15. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  16. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure.

    Science.gov (United States)

    Bright, Molly G; Murphy, Kevin

    2015-07-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. Copyright © 2015. Published by Elsevier Inc.
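
    The nuisance-regression step itself is an ordinary least-squares projection, sketched below with random data and random "motion" regressors; consistent with the article's caution, even regressors unrelated to the data remove a nonzero fraction of variance, roughly the number of regressors divided by the number of time points.

    import numpy as np

    def remove_nuisance(data, regressors):
        """Regress nuisance time courses out of voxel data via OLS.

        data: (n_timepoints, n_voxels); regressors: (n_timepoints, n_regressors).
        Returns residuals and the fraction of variance removed per voxel."""
        X = np.column_stack([np.ones(len(regressors)), regressors])
        beta, *_ = np.linalg.lstsq(X, data, rcond=None)
        resid = data - X @ beta
        return resid, 1.0 - resid.var(axis=0) / data.var(axis=0)

    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 500))            # pure noise "BOLD" data
    motion = rng.normal(size=(200, 6))            # six random motion parameters
    _, removed = remove_nuisance(data, motion)
    print(f"mean variance removed by chance: {removed.mean():.3f}")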

  17. Variance of indoor radon concentration: Major influencing factors

    Energy Technology Data Exchange (ETDEWEB)

    Yarmoshenko, I., E-mail: ivy@ecko.uran.ru [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Vasilyev, A.; Malinovsky, G. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Bossew, P. [German Federal Office for Radiation Protection (BfS), Berlin (Germany); Žunić, Z.S. [Institute of Nuclear Sciences “Vinca”, University of Belgrade (Serbia); Onischenko, A.; Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation)

    2016-01-15

    Variance of radon concentration in dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. Analysis includes review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements and detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that main factors influencing the dispersion of indoor radon concentration over the territory are as follows: area of territory, sample size, characteristics of measurements technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and Niška Banja town the dispersion as quantified by GSD is reduced by restricting to certain levels of control factors. Application of the developed approach to characterization of the world population radon exposure is discussed. - Highlights: • Influence of lithosphere and anthroposphere on variance of indoor radon is found. • Level-by-level analysis reduces GSD by a factor of 1.9. • Worldwide GSD is underestimated.

  18. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    We study equity (EVRP) and Treasury variance risk premia (TVRP) jointly and document a number of findings: First, relative to their volatility, TVRP are comparable in magnitude to EVRP. Second, while there is mild positive co-movement between EVRP and TVRP unconditionally, time series estimates of correlation display distinct spikes in both directions and have been notably volatile since the financial crisis. Third, (i) short maturity TVRP predict excess returns on short maturity bonds; (ii) long maturity TVRP and EVRP predict excess returns on long maturity bonds; and (iii) while EVRP predict equity returns for horizons up to 6 months, long maturity TVRP contain robust information for long-run equity returns. Finally, exploiting the dynamics of real and nominal Treasuries, we document that short maturity break-even rates are a powerful determinant of the joint dynamics of EVRP, TVRP and their co-movement.

  19. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time.

  20. Hybrid biasing approaches for global variance reduction

    International Nuclear Information System (INIS)

    Wu, Zeyun; Abdel-Khalik, Hany S.

    2013-01-01

    A new variant of Monte Carlo—deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purpose. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo Deterministic Method based on Gaussian Process Model is introduced. ► Method employs deterministic model to calculate responses correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed up is achieved for a PWR core model.
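
    The role of the covariance matrix can be sketched generically: eigendecompose the response covariance and count the directions needed to capture most of the trace, a stand-in for the minimum number of uncorrelated pseudo responses. The latent-factor test data and the 99% trace threshold are assumptions, not the authors' settings.

    import numpy as np

    def effective_rank(cov, tol=0.99):
        """Number of eigen-directions capturing `tol` of the total variance."""
        w = np.linalg.eigvalsh(cov)[::-1]            # eigenvalues, descending
        return int(np.searchsorted(np.cumsum(w) / w.sum(), tol)) + 1

    rng = np.random.default_rng(0)
    # Hypothetical responses: 50 tallies driven by 3 latent modes plus noise.
    latent = rng.normal(size=(1000, 3))
    responses = latent @ rng.normal(size=(3, 50)) + 0.01 * rng.normal(size=(1000, 50))
    print(effective_rank(np.cov(responses.T)))       # expect about 3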

  1. Fringe biasing: A variance reduction technique for optically thick meshes

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R. P. [AWE PLC, Aldermaston Reading, Berkshire, RG7 4PR (United Kingdom)

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)

  2. Fringe biasing: A variance reduction technique for optically thick meshes

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R. P.

    2013-01-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
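
    The variance-reduction mechanism can be demonstrated in the paper's simplest setting, a single purely absorbing slab cell: sample the emission position by strata, allocate most particles to the fringe near the escaping boundary, and reweight the stratified tally accordingly. The opacity, fringe width, and particle split below are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    sigma, L, w = 10.0, 1.0, 0.3      # opacity, cell width, fringe width

    def escapes(x):
        """Bernoulli escape of particles born at depth x in an absorbing slab."""
        return rng.random(x.size) < np.exp(-sigma * (L - x))

    def analog(n):
        return escapes(rng.uniform(0.0, L, n)).mean()

    def fringe_biased(n, fringe_frac=0.8):
        n_fr = int(n * fringe_frac)                                # boosted stratum
        e_int = escapes(rng.uniform(0.0, L - w, n - n_fr)).mean()  # interior
        e_fr = escapes(rng.uniform(L - w, L, n_fr)).mean()         # fringe
        return (L - w) / L * e_int + w / L * e_fr                  # stratified tally

    trials = np.array([[analog(2000), fringe_biased(2000)] for _ in range(500)])
    print("std analog / std fringe-biased:",
          round(trials[:, 0].std() / trials[:, 1].std(), 2))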

  3. State-wide performance criteria for international safeguards

    International Nuclear Information System (INIS)

    Budlong-Sylvester, K.W.; Pilat, Joseph F.; Stanbro, W.D.

    2001-01-01

    Traditionally, the International Atomic Energy Agency (IAEA) has relied upon prescriptive criteria to guide safeguards implementation. The prospect of replacing prescriptive safeguards criteria with more flexible performance criteria would constitute a structural change in safeguards and raises several important questions. Performance criteria imply that while safeguards goals will be fixed, the means of attaining those goals will not be explicitly prescribed. What would the performance objectives be under such a system? How would they be formulated? How would performance be linked to higher level safeguards objectives? How would safeguards performance be measured State-wide? The implementation of safeguards under performance criteria would also signal a dramatic change in the manner the Agency does business. A higher degree of flexibility could, in principle, produce greater effectiveness and efficiency, but would come with a need for increased Agency responsibility in practice. To the extent that reliance on prescriptive criteria decreases, the burden of justifying actions and ensuring their transparency will rise. Would there need to be limits to safeguards implementation? What would be the basis for setting such limits? This paper addresses these and other issues and questions relating to both the formulation and the implementation of performance-based criteria.

  4. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel

  5. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro...

  6. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances. AGENCY: Occupational Safety and Health Administration (OSHA...). ... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the...

  7. Variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance ... Starting values for the (co)variance components of two-trait models were ... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal ...

  8. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation, implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance, and naturally satisfying the sign constraint. Using option market data and real ... events and only marginally by the premium associated with normal price fluctuations...

  9. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    Science.gov (United States)

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…

  10. Kalman filtering techniques for reducing variance of digital speckle displacement measurement noise

    Institute of Scientific and Technical Information of China (English)

    Donghui Li; Li Guo

    2006-01-01

    Target dynamics are assumed to be known when measuring digital speckle displacement. A simple measurement equation is used, in which measurement noise represents the effect of disturbances introduced by the measurement process. Under these assumptions, a Kalman filter can be designed to reduce the variance of the measurement noise. An optical analysis system was set up, and object motion with constant displacement and constant velocity was used in experiments to verify the validity of Kalman filtering techniques for reducing the variance of measurement noise.
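
    A minimal sketch of the idea, assuming a constant-velocity target and a scalar displacement measurement corrupted by speckle noise: a two-state Kalman filter is run over simulated measurements and the filtered error variance is compared with the raw noise variance. All noise levels here are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, n = 1.0, 100
    F = np.array([[1.0, dt], [0.0, 1.0]])     # known constant-velocity dynamics
    H = np.array([[1.0, 0.0]])                # only displacement is measured
    Q = 1e-6 * np.eye(2)                      # small process noise
    R = np.array([[0.5**2]])                  # speckle measurement noise variance

    true = np.zeros((n, 2)); true[0] = [0.0, 0.1]
    for k in range(1, n):
        true[k] = F @ true[k - 1]
    z = true[:, 0] + rng.normal(0.0, 0.5, n)  # noisy displacement measurements

    x, P, est = np.zeros(2), np.eye(2), []
    for k in range(n):
        x, P = F @ x, F @ P @ F.T + Q                       # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
        x = x + K @ (z[k] - H @ x)                          # update state
        P = (np.eye(2) - K @ H) @ P                         # update covariance
        est.append(x[0])

    print("raw measurement error variance:", np.var(z - true[:, 0]))
    print("filtered error variance:       ", np.var(np.array(est) - true[:, 0]))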

  11. Variance components and selection for feather pecking behavior in laying hens

    OpenAIRE

    Su, Guosheng; Kjaer, Jørgen B.; Sørensen, Poul

    2005-01-01

    Variance components and selection response for feather pecking behaviour were studied by analysing data from a divergent selection experiment. An investigation showed that a Box-Cox transformation with power = -0.2 made the data approximately normally distributed and gave the best fit under the given model. Variance components and selection response were estimated using Bayesian analysis with the Gibbs sampling technique. The total variation was rather large for the two traits in both low feather peckin...

  12. Tactical emergency medical support programs: a comprehensive statewide survey.

    Science.gov (United States)

    Bozeman, William P; Morel, Benjamin M; Black, Timothy D; Winslow, James E

    2012-01-01

    Specially trained tactical emergency medical support (TEMS) personnel provide support to law enforcement special weapons and tactics (SWAT) teams. These programs benefit law enforcement agencies, officers, suspects, and citizens. TEMS programs are increasingly popular, but there are wide variations in their organization and operation and no recent data on their prevalence. We sought to measure the current prevalence and specific characteristics of TEMS programs in a comprehensive fashion in a single southeastern state. North Carolina emergency medical services (EMS) systems have county-based central EMS oversight; each system was surveyed by phone and e-mail. The presence and selected characteristics of TEMS programs were recorded. U.S. Census data were used to measure the population impact of the programs. All of the 101 EMS systems statewide were successfully contacted. Thirty-three counties (33%) have TEMS programs providing medical support to 56 local law enforcement agencies as well as state and federal agencies. TEMS programs tend to be located in more populated urban and suburban areas, serving a population base of 5.9 million people, or 64% of the state's population. Tactical medics in the majority of these programs (29/33; 88%) are not sworn law enforcement officers. Approximately one-third of county-based EMS systems in North Carolina have TEMS programs. These programs serve almost two-thirds of the state's population base, using primarily nonsworn tactical medics. Comparison with other regions of the country will be useful to demonstrate differences in prevalence and program characteristics. Serial surveillance will help track trends and measure the growth and impact of this growing subspecialty field.

  13. Analyzing the Measurement Equivalence of a Translated Test in a Statewide Assessment Program

    Directory of Open Access Journals (Sweden)

    Jorge Carvajal-Espinoza

    2016-09-01

    Full Text Available When tests are translated into one or more languages, the question of the equivalence of items across language forms arises. This equivalence can be assessed at the scale level by means of a multiple group confirmatory factor analysis (CFA) in the context of structural equation modeling. This study examined the measurement equivalence of a Spanish translated version of a statewide Mathematics test originally constructed in English by using a multi-group CFA approach. The study used samples of native speakers of the target language of the translation taking the test in both the source and target language, specifically Hispanics taking the test in English and Spanish. Test items were grouped in twelve facet-representative parcels. The parceling was accomplished by grouping items that corresponded to similar content and computing an average for each parcel. Four models were fitted to examine the equivalence of the test across groups. The multi-group CFA fixed factor loadings across groups and results supported the equivalence of the two language versions (English and Spanish) of the test. The statistical techniques implemented in this study can also be used to address the performance on a test based on dichotomous or dichotomized variables such as gender, socioeconomic status, geographic location and other variables of interest.

  14. Barriers, facilitators, and potential strategies for increasing HPV vaccination: A statewide assessment to inform action

    Directory of Open Access Journals (Sweden)

    Kathleen B. Cartmell

    2018-06-01

    Full Text Available Objective: The objective was to investigate how state level strategies in South Carolina could maximize HPV vaccine uptake. Design: An environmental scan identified barriers, facilitators, and strategies for improving HPV vaccination in South Carolina. Interviews were conducted with state leaders from relevant organizations such as public health agencies, medical associations, K-12 schools, universities, insurers, and cancer advocacy organizations. A thematic content analysis design was used. Digital interview files were transcribed, a data dictionary was created and data were coded using the data dictionary. Results: Thirty-four interviews were conducted with state leaders. Barriers to HPV vaccination included lack of HPV awareness, lack of provider recommendation, HPV vaccine concerns, lack of access and practice-level barriers. Facilitators included momentum for improving HPV vaccination, school-entry Tdap requirement, pharmacy-based HPV vaccination, state immunization registry, HEDIS measures and HPV vaccine funding. Strategies for improving HPV vaccination fell into three categories: (1) addressing lack of awareness about the importance of HPV vaccination among the public and providers; (2) advocating for policy changes around HPV vaccine coverage, vaccine education, and pharmacy-based vaccination; and (3) coordination of efforts. Discussion: A statewide environmental scan generated a blueprint for action to be used to improve HPV vaccination in the state. Keywords: HPV, HPV vaccines, Cervical cancer, Prevention, Health systems, Barriers, Facilitators, Strategies, South Carolina

  15. Draft no-migration variance petition. Volume 1

    International Nuclear Information System (INIS)

    1995-01-01

    The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of these wastes have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is Volume 1, which discusses the regulatory framework, site characterization, facility description, waste description, environmental impact analysis, monitoring, quality assurance, long-term compliance analysis, and regulatory compliance assessment

  16. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
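
    A sketch of the spectral part of such an analysis, with synthetic residuals standing in for measured-minus-model densities: compute the periodogram, fit a power law in log-log coordinates, and sum the spectrum above a cutoff frequency to estimate (up to periodogram normalization) the variance contributed by time scales shorter than, say, 27 days.

    import numpy as np

    rng = np.random.default_rng(0)
    n, dt = 8760, 1.0                          # one year of hourly residuals
    resid = np.cumsum(rng.normal(size=n))      # red-noise stand-in for residuals
    resid -= resid.mean()

    freq = np.fft.rfftfreq(n, d=dt)[1:]        # cycles per hour, skip DC
    psd = (np.abs(np.fft.rfft(resid))**2 / n)[1:]
    slope, _ = np.polyfit(np.log(freq), np.log(psd), 1)
    print(f"power-law spectral slope ~ {slope:.2f}")

    cutoff = 1.0 / (27 * 24)                   # 27-day period, in 1/hours
    df = freq[1] - freq[0]
    print("variance at scales < 27 days:", psd[freq >= cutoff].sum() * df)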

  17. Variance Component Selection With Applications to Microbiome Taxonomic Data

    Directory of Open Access Journals (Sweden)

    Jing Zhai

    2018-03-01

    Full Text Available High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Then taxonomic selection is achieved by the lasso (least absolute shrinkage and selection operator) penalty on variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method versus existing methods, for example, group-lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV)-infected patients. We implement our method using the high performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.

  18. Hidden temporal order unveiled in stock market volatility variance

    Directory of Open Access Journals (Sweden)

    Y. Shapira

    2011-06-01

    Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behavior. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it contains some temporal order. Next we show that the temporal order does not show up in the series of daily returns, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.
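
    One simple diagnostic in the spirit of this comparison, using a GARCH-like toy series in place of real index returns: volatility clustering makes the variances of fixed-length segments far more dispersed in the original series than in a shuffled copy, whose segment variances scatter only through sampling noise.

    import numpy as np

    rng = np.random.default_rng(0)
    n, seg_len = 10000, 50
    a0, a1, b1, sig2 = 0.05, 0.10, 0.85, 1.0
    r = np.zeros(n)
    for t in range(1, n):                       # GARCH(1,1)-style toy returns
        sig2 = a0 + a1 * r[t - 1]**2 + b1 * sig2
        r[t] = np.sqrt(sig2) * rng.normal()

    def segment_variances(x, m):
        return x[: len(x) // m * m].reshape(-1, m).var(axis=1)

    print("dispersion of segment variances, original:",
          round(segment_variances(r, seg_len).std(), 3))
    print("dispersion of segment variances, shuffled:",
          round(segment_variances(rng.permutation(r), seg_len).std(), 3))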

  19. Waste Isolation Pilot Plant no-migration variance petition

    International Nuclear Information System (INIS)

    1990-01-01

    Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful ''no-migration'' demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989

  20. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noise nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min–max criterion. The proposed framework formulates the PET image reconstruction problem to be a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors with possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle everything from imperfect statistical assumptions to even no a priori statistical assumptions. The performance and accuracy of reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential. (paper)

  1. Final Report on the Study of the Impact of the Statewide Systemic Initiatives. Lessons Learned about Designing, Implementing, and Evaluating Statewide Systemic Reform. WCER Working Paper No. 2003-12

    Science.gov (United States)

    Heck, Daniel J.; Weiss, Iris R.; Boyd, Sally E.; Howard, Michael N.; Supovitz, Jonathan A.

    2003-01-01

    This document represents the first of two volumes presented in "Study of the Impact of the Statewide Systemic Initiatives Program" (Norman L. Webb and Iris R. Weiss). In an effort to evaluate the impact of the Statewide Systemic Initiatives (SSIs) on student achievement and the lessons that could be learned from the National Science…

  2. Global Distributions of Temperature Variances at Different Stratospheric Altitudes from GPS/MET Data

    Science.gov (United States)

    Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.

    The GPS/MET measurements at altitudes of 5 - 35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second-order polynomial approximations in 5 - 7 km thick layers centered at 10, 20 and 30 km. Deviations of temperature from the averaged values, and their variances obtained for each profile, are averaged for each month of the year during the GPS/MET experiment. Global distributions of temperature variances have an inhomogeneous structure. Locations and latitude distributions of the maxima and minima of the variances depend on altitude and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of the GPS/MET data analysis.
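
    The per-profile processing can be sketched with an invented temperature profile standing in for GPS/MET data: fit a second-order polynomial across a layer a few kilometres thick and take the variance of the deviations as the small-scale temperature variance assigned to that layer.

    import numpy as np

    rng = np.random.default_rng(0)
    z = np.linspace(17.5, 22.5, 51)                     # km, 5 km layer at 20 km
    background = 210.0 + 2.0 * (z - 20) + 0.3 * (z - 20)**2
    profile = (background + 1.5 * np.sin(2 * np.pi * z / 2.0)   # wave-like part
               + rng.normal(0.0, 0.3, z.size))                  # measurement noise

    coef = np.polyfit(z, profile, deg=2)                # 2nd-order smoothing
    deviation = profile - np.polyval(coef, z)           # small-scale component
    print(f"layer temperature variance: {deviation.var():.2f} K^2")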

  3. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    The genomic model commonly found in the literature, with marker effects affecting the mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed, and their behaviour, studied using simulated data, indicates that they are capable of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects, and in terms of their predictive power. The models are subsequently fitted to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance are discussed, and an extension of the McMC algorithm...

  4. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in the residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aims of our study were to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of two statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity for selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. The Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting...

  5. Evaluating RITES, a Statewide Math and Science Partnership Program

    Science.gov (United States)

    Murray, D. P.; Caulkins, J. L.; Burns, A. L.; de Oliveira, G.; Dooley, H.; Brand, S.; Veeger, A.

    2013-12-01

    The Rhode Island Technology-Enhanced Science project (RITES) is a NSF-MSP Program that seeks to improve science education by providing professional development to science teachers at the 5th through 12th grade levels. At its heart, RITES is a complex, multifaceted project that is challenging to evaluate because of the nature of its goal: the development of a large, statewide partnership between higher education and K12 public school districts during a time when science education strategies and leadership are in flux. As a result, these difficulties often require flexibility and creativity regarding evaluation, study design and data collection. In addition, the research agenda of the project often overlaps with the evaluator's agenda, making collaboration and communication a crucial component of the project's success. In its 5th year, RITES and its evaluators have developed a large number of instruments, both qualitative and quantitative, to provide direction and feedback on the effectiveness of the project's activities. RITES personnel work closely with evaluators and researchers to obtain a measure of how RITES' 'theory-of-action' affects both student outcomes and teacher practice. Here we discuss measures of teacher and student content gains, student inquiry gains, and teacher implementation surveys. Using content questions based on AAAS and MOSART databases, teachers in the short courses and students in classrooms showed significant normalized learning gains, with averages generally above 0.3. Students of RITES-trained teachers also outperformed their non-RITES peers on the inquiry section of the NECAP test. The results show, after controlling for race and economic status, a small but statistically significant increase in test scores for RITES students. Technology use in the classroom significantly increased for teachers who were 'expected implementers', that is, those teachers who implemented RITES as the project was designed. This...

  6. Empowering High School Students in Scientific Careers: Developing Statewide Partnerships

    Science.gov (United States)

    Aguilar, C.; Swartz, D.

    2008-05-01

    The Center for Multiscale Modeling of Atmospheric Processes (CMMAP) is a National Science Foundation Science and Technology Center focused on improving the representation of cloud processes in climate models. The Center is divided into three sections: Knowledge Transfer, Research, and Education and Diversity. The Science Education and Diversity mission is to educate and train people with diverse backgrounds in climate and Earth system science by enhancing teaching and learning and disseminating science results through multiple media. CMMAP is partnering with two local school districts to host an annual global climate conference for high school students. The 2008 Colorado Global Climate Conference seeks "to educate students on global and local climate issues and empower them to use their knowledge." The conference is sponsored by CMMAP, The Governor's Energy Office, Poudre School District, Thompson School District, Clif Bar, and Ben and Jerry's Scoop Shop of Fort Collins. The conference seeks to inspire students to pursue future education and careers in science fields. Following an opening welcome from the Governor's Energy Office, keynote speaker Piers Sellers will discuss his experiences as an atmospheric scientist and NASA astronaut. Students will then attend 3 of 16 breakout sessions, including "Hot Poems, Cool Paintings, and the Treasures of Antiquity of Climate Change", "Mitigation vs Adaptation", "Bigfoot Walks (What Size Is Our Carbon Footprint?)", "The Wedges: Reducing Carbon Emissions", and "We the People: Climate and Culture of Climate Change", to name a few. Using The Governor's High School Conference on the Environment, sponsored by the Wisconsin Center for Environmental Education, as a model, we are developing statewide partnerships to bring high school students together to look at global climate issues that will impact their future and of which they can be part of the solution through their education and career paths. In addition to...

  7. FINAL TECHNICAL REPORT FOR FORESTRY BIOFUEL STATEWIDE COLLABORATION CENTER (MICHIGAN)

    Energy Technology Data Exchange (ETDEWEB)

    LaCourt, Donna M.; Miller, Raymond O.; Shonnard, David R.

    2012-04-24

    A team composed of scientists from Michigan State University (MSU) and Michigan Technological University (MTU) assembled to better understand, document, and improve systems for using forest-based biomass feedstocks in the production of energy products within Michigan. Work was funded by a grant (DE-EE-0000280) from the U.S. Department of Energy (DOE) and was administered by the Michigan Economic Development Corporation (MEDC). The goal of the project was to improve the forest feedstock supply infrastructure to sustainably provide woody biomass for biofuel production in Michigan over the long term. Work was divided into four broad areas with associated objectives: • TASK A: Develop a Forest-Based Biomass Assessment for Michigan – Define forest-based feedstock inventory, availability, and the potential of forest-based feedstock to support state and federal renewable energy goals while maintaining current uses. • TASK B: Improve Harvesting, Processing and Transportation Systems – Identify and develop cost, energy, and carbon efficient harvesting, processing and transportation systems. • TASK C: Improve Forest Feedstock Productivity and Sustainability – Identify and develop sustainable feedstock production systems through the establishment and monitoring of a statewide network of field trials in forests and energy plantations. • TASK D: Engage Stakeholders – Increase understanding of forest biomass production systems for biofuels by a broad range of stakeholders. The goal and objectives of this research and development project were fulfilled with key model deliverables including: 1) the Forest Biomass Inventory System (Sub-task A1), covering feedstock inventory and availability, and 2) the Supply Chain Model (Sub-task B2). Both models are vital to Michigan’s forest biomass industry and support forecasting delivered cost, as well as carbon and energy balance. All of these elements are important to facilitate investor, operational and policy decisions. All

  8. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    Science.gov (United States)

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, the mean and variance often exhibit a systematic relationship. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means, assuming variances are known. Different methods were applied to simulated datasets, in which a variety of mean-variance relationships were imposed. The simulation study showed that NPMVS outperformed two other popular shrinkage estimation methods under some mean-variance relationships and was competitive with them under the others. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, was also analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between them. The source code, written in R, is available from the authors on request.
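
    The core mechanic described above, smoothing the gene-level mean-variance relationship and shrinking individual variances toward the smoothed trend, can be sketched in a few lines of Python. This is not the authors' R implementation of NPMVS; the lowess span and the fixed shrinkage weight are illustrative assumptions.

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(0)
        data = rng.gamma(shape=2.0, scale=1.0, size=(500, 6))  # toy matrix: 500 genes x 6 arrays

        gene_mean = data.mean(axis=1)
        gene_var = data.var(axis=1, ddof=1)

        # nonparametric smooth of the mean-variance relationship (on log scales)
        log_trend = lowess(np.log(gene_var), np.log(gene_mean),
                           frac=0.4, return_sorted=False)
        trend_var = np.exp(log_trend)

        # shrink each gene's variance toward the smoothed trend
        w = 0.5  # shrinkage weight; a real method would estimate this from the data
        var_shrunk = w * trend_var + (1.0 - w) * gene_var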

  9. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
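
    The flavor of the estimators being compared can be made concrete: treating the k lines as a simple random sample, the encounter rate variance is estimated from the between-line spread of the per-line rates n_i/l_i. A minimal Python sketch follows; whether it matches each variant studied in the paper is an assumption.

        import numpy as np

        def encounter_rate_var(n_i, l_i):
            """Estimate var(n/L) treating the k transect lines as a simple
            random sample, weighting each line by its length."""
            n_i = np.asarray(n_i, dtype=float)   # detections per line
            l_i = np.asarray(l_i, dtype=float)   # line lengths
            k, L = len(n_i), l_i.sum()
            er = n_i.sum() / L                   # overall encounter rate n/L
            return k / (L**2 * (k - 1)) * np.sum(l_i**2 * (n_i / l_i - er)**2)

        # toy survey: 10 lines of varying length
        counts = [3, 5, 2, 8, 4, 6, 1, 7, 3, 5]
        lengths = [1.0, 1.2, 0.8, 1.5, 1.0, 1.1, 0.7, 1.4, 0.9, 1.2]
        print(encounter_rate_var(counts, lengths))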

  10. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 2. Assessment of MCNP Statistical Analysis of keff Eigenvalue Convergence with an Analytical Criticality Verification Test Set

    International Nuclear Information System (INIS)

    Sood, Avnet; Forster, R. Arthur; Parsons, D. Kent

    2001-01-01

    Monte Carlo simulations of nuclear criticality eigenvalue problems are often performed by general purpose radiation transport codes such as MCNP. MCNP performs detailed statistical analysis of the criticality calculation and provides feedback to the user with warning messages, tables, and graphs. The purpose of the analysis is to provide the user with sufficient information to assess spatial convergence of the eigenfunction and thus the validity of the criticality calculation. As a test of this statistical analysis package in MCNP, analytic criticality verification benchmark problems have been used for the first time to assess the performance of the criticality convergence tests in MCNP. The MCNP statistical analysis capability has been recently assessed using the 75 multigroup criticality verification analytic problem test set. MCNP was verified with these problems at the 10^-4 to 10^-5 statistical error level using 40 000 histories per cycle and 2000 active cycles. In all cases, the final boxed combined keff answer was given with the standard deviation and three confidence intervals that contained the analytic keff. To test the effectiveness of the statistical analysis checks in identifying poor eigenfunction convergence, ten problems from the test set were deliberately run incorrectly using 1000 histories per cycle, 200 active cycles, and 10 inactive cycles. Six problems with large dominance ratios were chosen from the test set because they do not achieve the normal spatial mode in the beginning of the calculation. To further stress the convergence tests, these problems were also started with an initial fission source point 1 cm from the boundary thus increasing the likelihood of a poorly converged initial fission source distribution. The final combined keff confidence intervals for these deliberately ill-posed problems did not include the analytic keff value. In no case did a bad confidence interval go undetected. Warning messages were given signaling that

  11. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic… The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios…

  12. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating from the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, known for short as the RR interval. The irregularity can be represented using the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good atrial fibrillation detection performance.
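
    A minimal sketch of a detector in this spirit: compute the RR-interval variance in a sliding window and flag windows whose variance exceeds a threshold. The window length and threshold below are illustrative assumptions, not the paper's values.

        import numpy as np

        def af_flags(rr_ms, win=20, thresh_ms2=2500.0):
            """Flag windows of RR intervals (in ms) whose sample variance
            exceeds a threshold, as a crude irregularity marker."""
            rr = np.asarray(rr_ms, dtype=float)
            return np.array([rr[i:i + win].var(ddof=1) > thresh_ms2
                             for i in range(len(rr) - win + 1)])

        # toy series: regular rhythm followed by an irregular (AF-like) stretch
        rng = np.random.default_rng(1)
        regular = rng.normal(800, 15, 60)      # ~75 bpm, small spread
        irregular = rng.normal(700, 120, 60)   # highly variable intervals
        print(af_flags(np.concatenate([regular, irregular])).any())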

  13. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean variance problem.

  14. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean variance problem.

  15. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
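
    The two transformations under study chain together in a few lines. A minimal sketch on a toy correlation series; the shift applied before the Box-Cox step (which requires strictly positive input) is an implementation assumption, and the authors' exact pipeline may differ.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        r = np.tanh(rng.normal(0.3, 0.2, 500))   # toy connectivity series in (-1, 1)

        z = np.arctanh(r)                        # Fisher transformation

        # Box-Cox requires positive input, so shift the Fisher-transformed series
        shifted = z - z.min() + 1e-6
        z_bc, lam = stats.boxcox(shifted)        # additional Box-Cox step

        print(lam, stats.skew(z), stats.skew(z_bc))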

  16. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

    Full Text Available Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of the ionosphere; these intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of the ionosphere and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation, around 3 TECU, than others. The residual results show that the interpolation precision of the
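
    The interpolation step itself can be sketched compactly. The code below implements plain ordinary Kriging with a fixed exponential semivariogram (all parameters illustrative); it omits the unknown variance components and their estimation, which are the contribution of the paper.

        import numpy as np

        def exp_semivariogram(h, sill=25.0, range_km=1500.0):
            """Exponential semivariogram model; parameters are illustrative."""
            return sill * (1.0 - np.exp(-h / range_km))

        def ordinary_kriging(xy, z, xy0):
            """Predict the field at xy0 from observations z at coordinates xy."""
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            # kriging system: semivariogram matrix bordered by the unbiasedness constraint
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = exp_semivariogram(d)
            A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = exp_semivariogram(np.linalg.norm(xy - xy0, axis=-1))
            w = np.linalg.solve(A, b)
            return w[:n] @ z  # kriging weights applied to the data

        # toy TEC observations at station coordinates (km) and one query point
        xy = np.array([[0.0, 0.0], [800.0, 100.0], [300.0, 900.0], [1000.0, 1200.0]])
        tec = np.array([25.0, 31.0, 22.0, 28.0])
        print(ordinary_kriging(xy, tec, np.array([500.0, 500.0])))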

  17. Avaliação de quatro alternativas de análise de experimentos em látice quadrado, quanto à estimação de componentes de variância Evaluation of four alternatives of analysis of experiments in square lattice, with emphasis on estimate of variance component

    Directory of Open Access Journals (Sweden)

    HEYDER DINIZ SILVA

    2000-01-01

    Full Text Available Estudou-se, no presente trabalho, a eficiência das seguintes alternativas de análise de experimentos realizados em látice quanto à precisão na estimação de componentes de variância, através da simulação computacional de dados: (i) análise intrablocos do látice com tratamentos ajustados (primeira análise); (ii) análise do látice em blocos casualizados completos (segunda análise); (iii) análise intrablocos do látice com tratamentos não-ajustados (terceira análise); (iv) análise do látice como blocos casualizados completos, utilizando as médias ajustadas dos tratamentos, obtidas a partir da análise com recuperação da informação interblocos, tendo como quadrado médio do resíduo a variância efetiva média dessa análise do látice (quarta análise). Os resultados obtidos mostram que se deve utilizar o modelo de análise intrablocos de experimentos em látice para se estimarem componentes de variância sempre que a eficiência relativa do delineamento em látice, em relação ao delineamento em Blocos Completos Casualizados, for superior a 100% e, em caso contrário, deve-se optar pelo modelo de análise em Blocos Casualizados Completos. A quarta alternativa de análise não deve ser recomendada em qualquer das duas situações. The efficiency of four alternatives of analysis of experiments in square lattice, related to the estimation of variance components, was studied through computational simulation of data: (i) intrablock analysis of the lattice with adjusted treatments (first analysis); (ii) lattice analysis as a randomized complete blocks design (second analysis); (iii) intrablock analysis of the lattice with non-adjusted treatments (third analysis); (iv) lattice analysis as a randomized complete blocks design, using the adjusted means of treatments, obtained through the analysis of the lattice with recuperation of interblock information, having as the residual mean square the average effective variance of this same lattice analysis

  18. Motor equivalence and structure of variance: multi-muscle postural synergies in Parkinson's disease.

    Science.gov (United States)

    Falaki, Ali; Huang, Xuemei; Lewis, Mechelle M; Latash, Mark L

    2017-07-01

    We explored posture-stabilizing multi-muscle synergies with two methods of analysis of multi-element, abundant systems: (1) Analysis of inter-cycle variance; and (2) Analysis of motor equivalence, both quantified within the framework of the uncontrolled manifold (UCM) hypothesis. Data collected in two earlier studies of patients with Parkinson's disease (PD) were re-analyzed. One study compared synergies in the space of muscle modes (muscle groups with parallel scaling of activation) during tasks performed by early-stage PD patients and controls. The other study explored the effects of dopaminergic medication on multi-muscle-mode synergies. Inter-cycle variance and absolute magnitude of the center of pressure displacement across consecutive cycles were quantified during voluntary whole-body sway within the UCM and orthogonal to the UCM space. The patients showed smaller indices of variance within the UCM and motor equivalence compared to controls. The indices were also smaller in the off-drug compared to on-drug condition. There were strong across-subject correlations between the inter-cycle variance within/orthogonal to the UCM and motor equivalent/non-motor equivalent displacements. This study has shown that, at least for cyclical tasks, analysis of variance and analysis of motor equivalence lead to metrics of stability that correlate with each other and show similar effects of disease and medication. These results show, for the first time, intimate links between indices of variance and motor equivalence. They suggest that analysis of motor equivalence, which requires only a handful of trials, could be used broadly in the field of motor disorders to analyze problems with action stability.
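
    The variance analysis rests on partitioning across-cycle variance into components within the UCM (the null space of the Jacobian linking muscle modes to the performance variable) and orthogonal to it. A minimal sketch of that partition with a toy Jacobian and synthetic data; the authors' processing pipeline is more involved.

        import numpy as np

        def ucm_variance(elements, jacobian):
            """Split per-degree-of-freedom variance of element-space data into
            within-UCM and orthogonal components."""
            demeaned = elements - elements.mean(axis=0)
            _, s, vt = np.linalg.svd(jacobian)
            rank = int(np.sum(s > 1e-10))
            null_basis = vt[rank:].T              # orthonormal UCM directions
            d_ucm = null_basis.shape[1]
            d_ort = jacobian.shape[1] - d_ucm
            v_ucm = np.sum((demeaned @ null_basis)**2) / (len(demeaned) * d_ucm)
            v_total = np.sum(demeaned**2) / len(demeaned)
            v_ort = (v_total - v_ucm * d_ucm) / d_ort
            return v_ucm, v_ort

        # toy example: 4 muscle modes, scalar performance variable (e.g., COP shift)
        rng = np.random.default_rng(3)
        J = np.array([[1.0, -0.5, 0.8, 0.3]])
        modes = rng.normal(size=(50, 4))
        print(ucm_variance(modes, J))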

  19. 2011-2013 Indiana Statewide Imagery and LiDAR Program: Lake Michigan Watershed Counties

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Indiana's Statewide LiDAR data is produced at 1.5-meter average post spacing for all 92 Indiana Counties covering more than 36,420 square miles. New LiDAR data was...

  20. Criminal Justice Profile--Statewide, 1984. Supplement to "Crime and Delinquency in California."

    Science.gov (United States)

    California State Dept. of Justice, Sacramento. Bureau of Criminal Statistics and Special Services.

    This California annual Criminal Justice Statewide Profile presents data which supplements the Bureau of Criminal Statistics' (BCS) annual Crime and Delinquency publication. This monograph summarizes and combines data pertaining to California's justice system. The profile consists of two sections. The first section consists of 12 tables displaying…

  1. Statewide Dissemination of a Rural, Non-Chain Restaurant Intervention: Adoption, Implementation and Maintenance

    Science.gov (United States)

    Nothwehr, F.; Haines, H.; Chrisman, M.; Schultz, U.

    2014-01-01

    The obesity epidemic calls for greater dissemination of nutrition-related programs, yet there remain few studies of the dissemination process. This study, guided by elements of the RE-AIM model, describes the statewide dissemination of a simple, point-of-purchase restaurant intervention. Conducted in rural counties of the Midwest, United States,…

  2. Taking Stock of Private-School Choice: Scholars Review the Research on Statewide Programs

    Science.gov (United States)

    Wolf, Patrick J.; Harris, Douglas N.; Berends, Mark; Waddington, R. Joseph; Austin, Megan

    2018-01-01

    In the past few years, four states have established programs that provide public financial support to students who choose to attend a private school. These programs--a tax-credit-funded scholarship initiative in Florida and voucher programs in Indiana, Louisiana, and Ohio--offer a glimpse of what expansive statewide choice might look like. What…

  3. Hawai'i Youth at Risk? Conceptual Challenges in Communicating a Statewide Mentoring Initiative.

    Science.gov (United States)

    Pollard, Vincent Kelly

    This paper discusses several issues considered as part of a statewide mentoring initiative. It is divided into three sections. The first section summarizes the key issues associated with short-term mentoring and mentoring in a longer-term, socially transformative context. Data from Comprehensive School Alienation Program is discussed concerning…

  4. Road Map to Statewide Implementation of the Pyramid Model. Roadmap to Effective Intervention Practices #6

    Science.gov (United States)

    Dunlap, Glen; Smith, Barbara J.; Fox, Lise; Blase, Karen

    2014-01-01

    This document is a guide--a "Road Map"--for implementing widespread use of the Pyramid Model for Promoting Social Emotional Competence in Infants and Young Children (http://www.challengingbehavior.org/do/pyramid_model. htm). It is a road map of systems change. The Road Map is written for statewide systems change, although it could be…

  5. Statewide improvement approach to clinician burnout: Findings from the baseline year

    Directory of Open Access Journals (Sweden)

    Heather R. Britt

    2017-12-01

    We propose a socio-ecological framework for acting on burnout, using a data-driven quality improvement paradigm enabled by a statewide coalition, to ensure that continued efforts do not rest solely at the feet of individuals or systems. Despite high burnout levels, engagement and satisfaction with work are also high, suggesting there is still hope for stemming the tide of burnout.

  6. Results of the 1992 State-Wide Business and Industry Survey.

    Science.gov (United States)

    Jarrett, Carole, Comp.; And Others

    As part of an effort to develop courses and programs that reflect California business and industry's current and future needs, two studies were performed by Solano Community College to examine statewide trends and issues related to office automation and marketing and management. In conducting the study of office automation, 5,000 surveys were…

  7. A case study: planning a statewide information resource for health professionals: an evidence-based approach

    Science.gov (United States)

    Chew, Katherine; Watson, Linda; Parker, Mary

    2009-01-01

    Question: What is the best approach for implementing a statewide electronic health library (eHL) to serve all health professionals in Minnesota? Setting: The research took place at the University of Minnesota Health Sciences Libraries. Methods: In January 2008, the authors began planning a statewide eHL for health professionals following the five-step process for evidence-based librarianship: formulating the question, finding the best evidence, appraising the evidence, assessing costs and benefits, and evaluating the effectiveness of resulting actions. Main Results: The authors identified best practices for developing a statewide eHL for health professionals relating to audience or population served, information resources, technology and access, funding model, and implementation and sustainability. They were compared to the mission of the eHL project to drive strategic directions by developing recommendations. Conclusion: EBL can guide the planning process for a statewide eHL, but findings must be tailored to the local environment to address information needs and ensure long-term sustainability. PMID:19851487

  8. A Multilevel, Statewide Investigation of School District Anti-Bullying Policy Quality and Student Bullying Involvement

    Science.gov (United States)

    Gower, Amy L.; Cousin, Molly; Borowsky, Iris W.

    2017-01-01

    Background: Although nearly all states in the United States require school districts to adopt anti-bullying policies, little research examines the effect of these policies on student bullying and health. Using a statewide sample, we investigated associations between the quality of school district anti-bullying policies and student bullying…

  9. Making an Impact Statewide to Benefit 21st-Century School Leadership

    Science.gov (United States)

    Hewitt, Kimberly Kappler; Mullen, Carol A.; Davis, Ann W.; Lashley, Carl

    2012-01-01

    How can institutions of higher education, local education agencies, and departments of education partner to build capacity for 21st-Century school leadership? The model (IMPACT V) we describe utilizes a systems-wide partnership approach to cultivate shared leadership within influenced middle and high schools statewide to leverage technology as a…

  10. From Theory to Practice: Considerations for Implementing a Statewide Voucher System.

    Science.gov (United States)

    Doyle, Denis P.

    This monograph analyzes trends in American educational philosophy and history in its proposal to implement an all-public statewide school voucher system. Following an introduction, section 1, "Alternative Voucher Systems," discusses three concepts: universal unregulated vouchers, favored by Milton Friedman; regulated compensatory vouchers,…

  11. Predictors of Suicidal Ideation in a Statewide Sample of Transgender Individuals

    OpenAIRE

    Rood, Brian A.; Puckett, Julia A.; Pantalone, David W.; Bradford, Judith B.

    2015-01-01

    Transgender individuals experience violence and discrimination, which, in addition to gender transitioning, are established correlates of psychological distress. In a statewide sample of 350 transgender adults, we investigated whether a history of violence and discrimination increased the odds of reporting lifetime suicidal ideation (SI) and whether differences in SI were predicted by gender transition status. Violence, discrimination, and transition status significantly predicted SI. Compare...

  12. Contracting for Statewide Student Achievement Tests: A Review. Department of Public Instruction 98-4.

    Science.gov (United States)

    Wisconsin State Legislative Audit Bureau, Madison.

    The Wisconsin legislature has required the Department of Public Instruction to adopt or approve standardized tests for statewide use to measure student attainment of knowledge and concepts in grades 4, 8, and 10. Although school districts generally gave high ratings to the contents of TerraNova (McGraw Hill), the testing instrument most recently…

  13. ANALISIS PORTOFOLIO RESAMPLED EFFICIENT FRONTIER BERDASARKAN OPTIMASI MEAN-VARIANCE

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    The right asset allocation decision in portfolio investment can maximize profit and/or minimize risk. The method often used in portfolio optimization is the Markowitz Mean-Variance method. In practice, this method has the weakness of not being very stable: a small change in the estimated input parameters causes a large change in the portfolio composition. For this reason, a portfolio optimization method was developed that can overcome the instability of the Mean-Variance method ...
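
    Both the instability and the resampling remedy are easy to demonstrate. A minimal sketch in the spirit of resampled-frontier methods: mean-variance weights react strongly to a small change in expected returns, while averaging the weights over re-drawn inputs damps that sensitivity. All numbers are toy values.

        import numpy as np

        rng = np.random.default_rng(4)

        def mv_weights(mu, cov):
            """Mean-variance weights proportional to C^{-1} mu, scaled to sum to one."""
            w = np.linalg.solve(cov, mu)
            return w / w.sum()

        mu = np.array([0.08, 0.07, 0.09, 0.06])
        cov = 0.02 * np.eye(4) + 0.01 * np.ones((4, 4))   # toy covariance

        # instability: a small shift in one expected return moves the weights a lot
        print(mv_weights(mu, cov))
        print(mv_weights(mu + np.array([0.01, 0.0, 0.0, 0.0]), cov))

        # resampled idea: average the optimal weights over re-estimated inputs
        draws = [mv_weights(rng.multivariate_normal(mu, cov / 120), cov)
                 for _ in range(500)]
        print(np.mean(draws, axis=0))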

  14. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  15. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a…
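
    The substitution of a normalized squared range for each squared return is easy to illustrate. A sketch under Brownian-motion assumptions, using the 4 ln 2 factor that makes the squared range of a Brownian increment unbiased for the interval variance; the grid sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        n_fine, m = 23400, 78                     # fine grid; m sampling intervals
        sigma = 0.2 / np.sqrt(252)                # daily volatility
        incr = rng.normal(0.0, sigma / np.sqrt(n_fine), n_fine)
        path = np.concatenate([[0.0], np.cumsum(incr)])   # simulated log-price path

        step = n_fine // m
        rets = np.diff(path[::step])
        realized_var = np.sum(rets**2)            # sum of squared returns

        # range-based: squared high-low range per interval, normalized by 4*ln(2)
        ranges = np.array([np.ptp(path[i*step:(i + 1)*step + 1]) for i in range(m)])
        range_var = np.sum(ranges**2) / (4.0 * np.log(2.0))

        print(realized_var, range_var, sigma**2)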

  16. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  17. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…
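
    One differentiation-based route of the kind the note describes, shown here for the exponential distribution via its moment generating function (whether this matches the authors' exact technique is an assumption):

        M(t) = E[e^{tX}] = \frac{\lambda}{\lambda - t}, \qquad t < \lambda
        E[X] = M'(0) = \left. \frac{\lambda}{(\lambda - t)^2} \right|_{t=0} = \frac{1}{\lambda}
        E[X^2] = M''(0) = \left. \frac{2\lambda}{(\lambda - t)^3} \right|_{t=0} = \frac{2}{\lambda^2}
        \operatorname{Var}(X) = E[X^2] - (E[X])^2 = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}

    Both moments fall out of repeated differentiation at t = 0, with no integration by parts.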

  18. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have lower means and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  19. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  20. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  1. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  2. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    42 CFR 456.522, Content of request for variance (Public Health; Centers for Medicare & Medicaid Services, Department of Health and Human Services): … perform UR within the time requirements for which the variance is requested and its good faith efforts to…

  3. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    29 CFR 1905.5, Effect of variances (Williams-Steiger Occupational Safety and Health Act of 1970; Occupational Safety and Health Administration, Department of Labor): All variances… concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health…

  4. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    29 CFR 1904.38, Variances from the recordkeeping rule (Department of Labor; Recording and Reporting Occupational Injuries and Illnesses, Other OSHA Injury and Illness…): … he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will… Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following…

  5. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  6. Pediatric Exposures to Topical Benzocaine Preparations Reported to a Statewide Poison Control System

    Directory of Open Access Journals (Sweden)

    Rais Vohra

    2017-07-01

    Full Text Available Introduction: Topical benzocaine is a local anesthetic commonly used to relieve pain caused by teething, periodontal irritation, burns, wounds, and insect bites. Oral preparations may contain benzocaine concentrations ranging from 7.5% to 20%. Pediatric exposure to such large concentrations may result in methemoglobinemia and secondarily cause anemia, cyanosis, and hypoxia. Methods: This is a retrospective study of exposures reported to a statewide poison control system. The electronic health records were queried for pediatric exposures to topical benzocaine treated at a healthcare facility from 2004 to 2014. Cases of benzocaine exposure were reviewed for demographic and clinical information, and descriptive statistical analysis was performed. Results: The query resulted in 157 cases; 58 were excluded due to co-ingestants or miscoding of non-benzocaine exposures. Children four years of age and younger represented the majority of cases (93%), with a median age of 1 year. There were 88 cases of accidental/exploratory exposure, while 6 cases resulted from therapeutic application or error, 4 cases from adverse reactions, and 1 case from an unknown cause. Asymptomatic children accounted for 75.5% of cases, but major clinical effects were observed in 5 patients. Those with serious effects were exposed to a range of benzocaine concentrations (7.5–20%), with 4 cases reporting methemoglobin levels between 20.2% and 55%. Methylene blue was administered in 4 of the cases exhibiting major effects. Conclusion: The majority of exposures were accidental ingestions by young children. Most exposures resulted in minor to no effects. However, some patients required treatment with methylene blue and admission to a critical care unit. Therapeutic application by parents or caregivers may lead to adverse effects from these commonly available products.

  7. Statewide Inferior Vena Cava Filter Placement, Complications, and Retrievals: Epidemiology and Recent Trends.

    Science.gov (United States)

    Charalel, Resmi A; Durack, Jeremy C; Mao, Jialin; Ross, Joseph S; Meltzer, Andrew J; Sedrakyan, Art

    2018-03-01

    Public awareness of inferior vena cava (IVC) filter-related controversies has been elevated by the Food and Drug Administration (FDA) safety communication in 2010. To examine population level trends in IVC filter utilization, complications, retrieval rates, and subsequent pulmonary embolism (PE) risk. A retrospective cohort study. Patients receiving IVC filters during 2005-2014 in New York State. IVC filter-specific complications, new PE occurrences and IVC filter retrievals were evaluated as time-to-event data using Kaplan-Meier analysis. Estimated cumulative risks were obtained at various timepoints during follow-up. There were 91,873 patients receiving IVC filters between 2005 and 2014 in New York State included in the study. The average patient age was 67 years and 46.6% were male. Age-adjusted rates of IVC filter placement increased from 48 cases/100,000 in 2005 to 52 cases/100,000 in 2009, and decreased afterwards to 36 cases/100,000 in 2014. The estimated risks of having an IVC filter-related complication and filter retrieval within 1 year was 1.5% [95% confidence interval (CI), 1.4%-1.6%] and 3.5% (95% CI, 3.4%-3.6%). One-year retrieval rate was higher post-2010 when compared with pre-2010 years (hazard ratio, 2.70; 95% CI, 2.50-2.91). Among the 58,176 patients who did not have PE events before or at the time of IVC filter placement, the estimated risk of developing subsequent PE at 1 year was 2.0% (95% CI, 1.9%-2.1%). Our findings suggest that FDA communications may be effective in modifying statewide clinical practices. Given the 2% observed PE rate following prophylactic IVC filter placement, large scale pragmatic studies are needed to determine contemporary safety and effectiveness of IVC filters.

  8. Effectiveness of a Statewide Abusive Head Trauma Prevention Program in North Carolina.

    Science.gov (United States)

    Zolotor, Adam J; Runyan, Desmond K; Shanahan, Meghan; Durrance, Christine Piette; Nocera, Maryalice; Sullivan, Kelly; Klevens, Joanne; Murphy, Robert; Barr, Marilyn; Barr, Ronald G

    2015-12-01

    Abusive head trauma (AHT) is a serious condition, with an incidence of approximately 30 cases per 100,000 person-years in the first year of life. To assess the effectiveness of a statewide universal AHT prevention program. In total, 88.29% of parents of newborns (n = 405 060) in North Carolina received the intervention (June 1, 2009, to September 30, 2012). A comparison of preintervention and postintervention periods was performed using nurse advice line telephone calls regarding infant crying (January 1, 2005, to December 31, 2010). A difference-in-difference analysis compared AHT rates in the prevention program state with those of other states before and after the implementation of the program (January 1, 2000, to December 31, 2011). The Period of PURPLE Crying intervention, developed by the National Center on Shaken Baby Syndrome, was delivered by nurse-provided education, a DVD, and a booklet, with reinforcement by primary care practices and a media campaign. Changes in proportions of telephone calls for crying concerns to a nurse advice line and in AHT rates per 100,000 infants after the intervention (June 1, 2009, to September 30, 2011) in the first year of life were measured using hospital discharge data for January 1, 2000, to December 31, 2011. In the 2 years after implementation of the intervention, parental telephone calls to the nurse advice line for crying declined by 20% for children younger than 3 months (rate ratio, 0.80; 95% CI, 0.73-0.87). … programmatic efforts and evaluation are needed to demonstrate an effect on AHT rates.

  9. Methicillin-resistant Staphylococcus aureus in Saarland, Germany: a statewide admission prevalence screening study.

    Directory of Open Access Journals (Sweden)

    Mathias Herrmann

    Full Text Available BACKGROUND: The screening of hospital admission patients for methicillin resistant Staphylococcus aureus (MRSA) is of undisputed value in controlling and reducing the overall MRSA burden; yet, a concerted parallel universal screening intervention throughout all hospitals of an entire German Federal State has not yet been performed. METHODOLOGY/PRINCIPAL FINDINGS: During a four-week period, all 24 acute care hospitals of the State of Saarland participated in admission prevalence screening. Overall, 436/20,027 screened patients revealed MRSA carrier status (prevalence, 2.2/100 patients), with geriatrics and intensive care departments associated with the highest prevalence (7.6/100 and 6.3/100, respectively). Risk factor analysis among 17,975 admission patients yielded MRSA history (OR, 4.3; CI₉₅ 2.7-6.8), a skin condition (OR, 3.2; CI₉₅ 2.1-5.0), and/or an indwelling catheter (OR, 2.2; CI₉₅ 1.4-3.5) among the leading risks. Hierarchical risk factor ascertainment of the six risk factors associated with the highest odds ratios would require 31% of patients to be laboratory screened to allow for detection of 67% of all MRSA positive admission patients in the State. CONCLUSIONS/SIGNIFICANCE: State-wide admission prevalence screening in conjunction with risk factor ascertainment yields important information on the distribution of the MRSA burden for hospitals, and allows for data-based decisions on local or institutional MRSA screening policies considering risk factor prevalence and expected MRSA identification rates.

  10. A statewide teleradiology system reduces radiation exposure and charges in transferred trauma patients.

    Science.gov (United States)

    Watson, Justin J J; Moren, Alexis; Diggs, Brian; Houser, Ben; Eastes, Lynn; Brand, Dawn; Bilyeu, Pamela; Schreiber, Martin; Kiraly, Laszlo

    2016-05-01

    Trauma transfer patients routinely undergo repeat imaging because of inefficiencies within the radiology system. In 2009, the virtual private network (VPN) telemedicine system was adopted throughout Oregon allowing virtual image transfer between hospitals. The startup cost was a nominal $3,000 per hospital. A retrospective review from 2007 to 2012 included 400 randomly selected adult trauma transfer patients based on a power analysis (200 pre/200 post). The primary outcome evaluated was reduction in repeat computed tomography (CT) scans. Secondary outcomes included cost savings, emergency department (ED) length of stay (LOS), and spared radiation. All data were analyzed using Mann-Whitney U and chi-square tests. P less than .05 indicated significance. Spared radiation was calculated as a weighted average per body region, and savings was calculated using charges obtained from Oregon Health and Science University radiology current procedural terminology codes. Four hundred patients were included. Injury Severity Score, age, ED and overall LOS, mortality, trauma type, and gender were not statistically different between groups. The percentage of patients with repeat CT scans decreased after VPN implementation: CT abdomen (13.2% vs 2.8%, P < .01) and cervical spine (34.4% vs 18.2%, P < .01). Post-VPN, the total charges saved in 2012 for trauma transfer patients was $333,500, whereas the average radiation dose spared per person was 1.8 mSv. Length of stay in the ED for patients with Injury Severity Score less than 15 transferring to the ICU was decreased (P < .05). Implementation of a statewide teleradiology network resulted in fewer total repeat CT scans, significant savings, decrease in radiation exposure, and decreased LOS in the ED for patients with less complex injuries. The potential for health care savings by widespread adoption of a VPN is significant. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Pediatric Exposures to Topical Benzocaine Preparations Reported to a Statewide Poison Control System.

    Science.gov (United States)

    Vohra, Rais; Huntington, Serena; Koike, Jennifer; Le, Kevin; Geller, Richard J

    2017-08-01

    Topical benzocaine is a local anesthetic commonly used to relieve pain caused by teething, periodontal irritation, burns, wounds, and insect bites. Oral preparations may contain benzocaine concentrations ranging from 7.5% to 20%. Pediatric exposure to such large concentrations may result in methemoglobinemia and secondarily cause anemia, cyanosis, and hypoxia. This is a retrospective study of exposures reported to a statewide poison control system. The electronic health records were queried for pediatric exposures to topical benzocaine treated at a healthcare facility from 2004 to 2014. Cases of benzocaine exposure were reviewed for demographic and clinical information, and descriptive statistical analysis was performed. The query resulted in 157 cases; 58 were excluded due to co-ingestants or miscoding of non-benzocaine exposures. Children four years of age and younger represented the majority of cases (93%) with a median age of 1 year. There were 88 cases of accidental/exploratory exposure, while 6 cases resulted from therapeutic application or error, 4 cases from adverse reactions, and 1 case from an unknown cause. Asymptomatic children accounted for 75.5% of cases, but major clinical effects were observed in 5 patients. Those with serious effects were exposed to a range of benzocaine concentrations (7.5-20%), with 4 cases reporting methemoglobin levels between 20.2% and 55%. Methylene blue was administered in 4 of the cases exhibiting major effects. The majority of exposures were accidental ingestions by young children. Most exposures resulted in minor to no effects. However, some patients required treatment with methylene blue and admission to a critical care unit. Therapeutic application by parents or caregivers may lead to adverse effects from these commonly available products.

  12. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
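
    The unconstrained global minimum-variance portfolio has a closed form, w = C^{-1}1 / (1'C^{-1}1), which is the natural starting point for the portfolios studied here (the paper additionally varies the covariance estimator and considers 130/30 constraints). A minimal sketch with a toy covariance matrix:

        import numpy as np

        def min_variance_weights(cov):
            """Global minimum-variance weights: w = C^{-1} 1 / (1' C^{-1} 1)."""
            ones = np.ones(cov.shape[0])
            w = np.linalg.solve(cov, ones)     # solve C w = 1 instead of inverting
            return w / w.sum()

        # toy annualized covariance for three stocks
        cov = np.array([[0.090, 0.020, 0.010],
                        [0.020, 0.060, 0.015],
                        [0.010, 0.015, 0.040]])
        w = min_variance_weights(cov)
        print(w, w @ cov @ w)                  # weights and portfolio variance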

  13. Multilevel Modeling of the Performance Variance

    Directory of Open Access Journals (Sweden)

    Alexandre Teixeira Dias

    2012-12-01

    Full Text Available Focusing on the identification of the role played by Industry on the relations between Corporate Strategic Factors and Performance, the hierarchical multilevel modeling method was adopted when measuring and analyzing the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective to the study of the proposed relations was identified and the relative importance analysis point out to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance, when the latter was measured by means of return on assets, and that industry don‟t moderates the relations between corporate strategic factors and Tobin‟s Q. The main conclusions of the research are that the organizations choices in terms of corporate strategy presents a considerable influence and plays a key role on the determination of performance level, but that industry should be considered when analyzing the performance variation despite its role as a moderator or not of the relations between corporate strategic factors and performance.

  14. A pattern recognition approach to transistor array parameter variance

    Science.gov (United States)

    da F. Costa, Luciano; Silva, Filipi N.; Comin, Cesar H.

    2018-06-01

    The properties of semiconductor devices, including bipolar junction transistors (BJTs), are known to vary substantially in terms of their parameters. In this work, an experimental approach, including pattern recognition concepts and methods such as principal component analysis (PCA) and linear discriminant analysis (LDA), was used to experimentally investigate the variation among BJTs belonging to integrated circuits known as transistor arrays. It was shown that a good deal of the device variance can be captured using only two PCA axes. It was also verified that, though substantially small variation of parameters is observed for BJTs from the same array, larger variation arises between BJTs from distinct arrays, suggesting the consideration of device characteristics in more critical analog designs. As a consequence of its supervised nature, LDA was able to provide a substantial separation of the BJTs into clusters, corresponding to each transistor array. In addition, the LDA mapping into two dimensions revealed a clear relationship between the considered measurements. Interestingly, a specific mapping suggested by the PCA, involving the total harmonic distortion variation expressed in terms of the average voltage gain, yielded an even better separation between the transistor array clusters. All in all, this work yielded interesting results from both semiconductor engineering and pattern recognition perspectives.
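
    The PCA-plus-LDA pipeline is straightforward to reproduce on synthetic device data. In the sketch below, the number of parameters and the per-array means are assumptions made purely for illustration.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(6)
        # toy parameter vectors for BJTs from 3 transistor arrays
        means = ([1.00, 0.70, 0.20, 0.50, 0.90],
                 [1.10, 0.68, 0.22, 0.55, 0.85],
                 [0.95, 0.72, 0.18, 0.45, 0.95])
        X = np.vstack([rng.normal(loc=mu, scale=0.05, size=(20, 5)) for mu in means])
        y = np.repeat([0, 1, 2], 20)                  # array labels

        pca = PCA(n_components=2).fit(X)
        print(pca.explained_variance_ratio_)          # variance captured by 2 axes

        lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
        print(lda.transform(X)[:3])                   # supervised 2-D mapping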

  15. Gender Variance on Campus: A Critical Analysis of Transgender Voices

    Science.gov (United States)

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn,…

  16. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
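
    The correction is a one-liner once a relationship matrix is in hand. The sketch below computes Dk exactly as defined above (average self-relationship minus average overall relationship) and scales an estimated variance component; the matrix and the variance value are toy numbers.

        import numpy as np

        def dk_statistic(K):
            """Dk = mean self-relationship minus mean (self- and across-) relationship."""
            K = np.asarray(K, dtype=float)
            return np.mean(np.diag(K)) - np.mean(K)

        # toy genomic relationship matrix for four individuals
        G = np.array([[1.02, 0.10, 0.05, 0.00],
                      [0.10, 0.98, 0.12, 0.03],
                      [0.05, 0.12, 1.05, 0.08],
                      [0.00, 0.03, 0.08, 0.95]])
        sigma2_hat = 0.40                  # variance component from a mixed model
        print(dk_statistic(G), dk_statistic(G) * sigma2_hat)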

  17. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  18. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling…

  19. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  20. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  1. The Pricing of European Options Under the Constant Elasticity of Variance with Stochastic Volatility

    Science.gov (United States)

    Bock, Bounghun; Choi, Sun-Yong; Kim, Jeong-Hoon

    This paper considers a hybrid risky asset price model given by a constant elasticity of variance multiplied by a stochastic volatility factor. A multiscale analysis leads to an asymptotic pricing formula for both European vanilla options and barrier options near the zero elasticity of variance. The accuracy of the approximation is provided in a rigorous manner. A numerical experiment for implied volatilities shows that the hybrid model improves on some well-known models in fitting the data for different maturities.
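
    A rough Monte Carlo sketch of European call pricing under a pure CEV diffusion dS = rS dt + σS^β dW, without the stochastic volatility factor and without the paper's asymptotic expansion; all parameter values are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    S0, K, r, sigma, beta, T = 100.0, 100.0, 0.02, 0.5, 0.8, 1.0
    n_paths, n_steps = 50_000, 200
    dt = T / n_steps

    # Euler-Maruyama with full truncation to keep the CEV diffusion non-negative.
    S = np.full(n_paths, S0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        S = np.maximum(S + r * S * dt + sigma * np.maximum(S, 0.0) ** beta * dW, 0.0)

    price = np.exp(-r * T) * np.mean(np.maximum(S - K, 0.0))
    print(price)  # Monte Carlo estimate of the European call price
    ```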

  2. Mean-variance portfolio allocation with a value at risk constraint

    OpenAIRE

    Enrique Sentana

    2001-01-01

    In this paper, I first provide a simple unifying approach to static Mean-Variance analysis and Value at Risk, which highlights their similarities and differences. Then I use it to explain how fund managers can take investment decisions that satisfy the VaR restrictions imposed on them by regulators, within the well-known Mean-Variance allocation framework. I do so by introducing a new type of line to the usual mean-standard deviation diagram, called IsoVaR, which represents all the portfolios ...
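
    A sketch of the static mean-variance algebra underlying this framework, evaluating a Gaussian 95% VaR along the frontier (toy inputs; the IsoVaR lines themselves are not reproduced):

    ```python
    import numpy as np

    # Frontier variance for target mean m: sigma^2(m) = (c m^2 - 2 b m + a)/(a c - b^2),
    # with a = mu' S^-1 mu, b = mu' S^-1 1, c = 1' S^-1 1 (classic static MV algebra).
    mu = np.array([0.05, 0.08, 0.12])
    S = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
    Si = np.linalg.inv(S)
    one = np.ones(3)
    a, b, c = mu @ Si @ mu, mu @ Si @ one, one @ Si @ one

    for m in (0.06, 0.08, 0.10):
        sigma = np.sqrt((c * m**2 - 2 * b * m + a) / (a * c - b**2))
        var95 = 1.645 * sigma - m   # Gaussian 95% VaR of the frontier portfolio
        print(m, sigma, var95)
    ```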

  3. The efficiency of the crude oil markets: Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie, E-mail: acharles@audencia.co [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier, E-mail: olivier.darne@univ-nantes.f [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market appears inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable.

  4. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    Charles, Amelie; Darne, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market appears inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  5. The efficiency of the crude oil markets. Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market appears inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  6. Genetic factors explain half of all variance in serum eosinophil cationic protein

    DEFF Research Database (Denmark)

    Elmose, Camilla; Sverrild, Asger; van der Sluis, Sophie

    2014-01-01

    with variation in serum ECP and to determine the relative proportion of the variation in ECP due to genetic and non-genetic factors, in an adult twin sample. METHODS: A sample of 575 twins, selected through a proband with self-reported asthma, had serum ECP, lung function, airway responsiveness to methacholine......, exhaled nitric oxide, and skin test reactivity, measured. Linear regression analysis and variance component models were used to study factors associated with variation in ECP and the relative genetic influence on ECP levels. RESULTS: Sex (regression coefficient = -0.107, P ... was statistically non-significant (r = -0.11, P = 0.50). CONCLUSION: Around half of all variance in serum ECP is explained by genetic factors. Serum ECP is influenced by sex, BMI, and airway responsiveness. Serum ECP and airway responsiveness seem not to share genetic variance....

  7. Intercentre variance in patient reported outcomes is lower than objective rheumatoid arthritis activity measures

    DEFF Research Database (Denmark)

    Khan, Nasim Ahmed; Spencer, Horace Jack; Nikiphorou, Elena

    2017-01-01

    Objective: To assess intercentre variability in the ACR core set measures, DAS28 based on three variables (DAS28v3) and Routine Assessment of Patient Index Data 3 in a multinational study. Methods: Seven thousand and twenty-three patients were recruited (84 centres; 30 countries) using a standard...... built to adjust for the remaining ACR core set measure (for each ACR core set measure or each composite index), socio-demographics and medical characteristics. ANOVA and analysis of covariance models yielded similar results, and ANOVA tables were used to present variance attributable to recruiting...... centre. Results: The proportion of variances attributable to recruiting centre was lower for patient reported outcomes (PROs: pain, HAQ, patient global) compared with objective measures (joint counts, ESR, physician global) in all models. In the full model, variance in PROs attributable to recruiting...

  8. Is fMRI “noise” really noise? Resting state nuisance regressors remove variance with network structure

    Science.gov (United States)

    Bright, Molly G.; Murphy, Kevin

    2015-01-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured “signal” as well as “noise.” Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. PMID:25862264
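
    A minimal sketch of the nuisance-regression step discussed here, measuring the fraction of variance removed by purely random "motion" regressors (all data simulated):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    T = 200                      # time points
    y = rng.standard_normal(T)   # toy voxel time series (pure noise)
    X = np.column_stack([np.ones(T),
                         rng.standard_normal((T, 6))])  # 6 random "motion" regressors

    # Least-squares fit and removal of the regressor-explained variance.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ beta
    removed = (y.var() - residual.var()) / y.var()
    print(removed)  # even unrelated regressors remove roughly k/T of the variance
    ```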

  9. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric...... evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation...... for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....

  10. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  11. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
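
    A toy sketch of the distributional contrast reported here: variance-to-mean ratios for Poisson versus negative binomial parasite loads (simulated counts, not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    low_food = rng.poisson(20, 200)                    # mean ≈ variance
    well_fed = rng.negative_binomial(5, 5 / 25, 200)   # same mean (20), overdispersed

    for loads in (low_food, well_fed):
        m, v = loads.mean(), loads.var(ddof=1)
        print(m, v, v / m)  # variance-to-mean ratio ≈ 1 under a Poisson process
    ```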

  12. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension, to biasing and importance sampling calculations, of a theory on the application of Monte Carlo to the calculation of functional dependences introduced by Frolov and Chentsov; it is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)
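
    A small sketch in the same spirit: scan an importance-density parameter θ for ∫₀¹ eˣ dx with p_θ(x) ∝ e^{θx}, and keep the minimum-variance choice (a textbook example, not the paper's scheme; at θ = 1 the weights are constant, so the variance vanishes):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N = 10_000

    def estimate(theta):
        # Inverse-CDF sampling from p_theta(x) = theta*e^(theta*x)/(e^theta - 1) on [0, 1].
        u = rng.random(N)
        x = np.log1p(u * np.expm1(theta)) / theta
        w = np.exp(x) * np.expm1(theta) / (theta * np.exp(theta * x))  # f(x)/p_theta(x)
        return w.mean(), w.var(ddof=1) / N

    # Scan theta; the true value of the integral is e - 1 ≈ 1.71828.
    for theta in (0.25, 0.5, 1.0, 1.5, 2.0):
        mean, var = estimate(theta)
        print(theta, mean, var)
    ```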

  13. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
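
    As one classical instance of the techniques surveyed here, a sketch of antithetic variates on a toy integrand (not one of the homogenization-specific methods in the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    g = lambda u: u ** 2     # toy integrand; E[g(U)] = 1/3 for U ~ Uniform(0, 1)
    N = 50_000

    u = rng.random(N)
    crude = g(u)                        # crude Monte Carlo draws
    anti = 0.5 * (g(u) + g(1.0 - u))    # antithetic pairs reuse each u

    print(crude.mean(), crude.var(ddof=1) / N)
    # A fair comparison would halve N for the pairs (two evaluations each);
    # for monotone g the variance reduction far exceeds that factor of two.
    print(anti.mean(), anti.var(ddof=1) / N)
    ```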

  14. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.

  15. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  16. Development of a central data warehouse for statewide ITS and transportation data in Florida phase III : final report.

    Science.gov (United States)

    2009-12-15

    This report documents Phase III of the development and operation of a prototype for the Statewide Transportation : Engineering Warehouse for Archived Regional Data (STEWARD). It reflects the progress on the development and : operation of STEWARD sinc...

  17. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    Monika Fleischhauer

    2013-09-01

    Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT’s stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  18. Waste Isolation Pilot Plant No-Migration Variance Petition

    International Nuclear Information System (INIS)

    1990-03-01

    The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA section 3004(d) and 40 CFR section 268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics including: waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990 when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0: Facility Description, 2.0: Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0

  19. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...

  20. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that the skewness and asymmetry have no significant impact on the mean variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts whether the common market portfolios such as the S&P 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...

  1. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable, i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy-conserving solutions. Here, the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287].

  2. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    The definition of the uniform linear generator is given and some of the most commonly used tests to evaluate the uniformity and the independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is then taken into account. The Monte Carlo method enables the moment W to be estimated and the variance of the estimator to be obtained. Some techniques for the construction of other estimators of W with a reduced variance are introduced.
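
    A minimal sketch of the setting described: a crude Monte Carlo estimate of a moment W together with the variance of the estimator itself (toy choice of function and distribution):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    N = 100_000
    h = rng.standard_normal(N) ** 2   # h(X) = X^2 with X ~ N(0, 1); true W = 1

    W_hat = h.mean()
    var_W_hat = h.var(ddof=1) / N     # variance of the Monte Carlo estimator of W
    print(W_hat, var_W_hat)
    ```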

  3. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  4. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    2008-12-01

    slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the...equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the...emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that

  5. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case.

  6. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  7. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

    We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t) / t = λ(1 − 2/π)(c_a² + ...

  8. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
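
    A rough sketch of the propagation-of-error calculation MAVARIC automates, for a balance whose terms are concentration × bulk mass with independent measurement errors (all numbers hypothetical):

    ```python
    import numpy as np

    terms = [  # (concentration, sd_conc, mass, sd_mass, sign in the balance)
        (0.90, 0.01, 100.0, 0.5, +1),   # receipts
        (0.88, 0.01,  95.0, 0.5, -1),   # shipments
        (0.85, 0.02,   4.0, 0.1, -1),   # inventory difference
    ]

    var_mb = 0.0
    for c, sc, m, sm, _ in terms:
        # Exact variance of a product of independent measurements; the sign of
        # the term does not affect its variance contribution.
        var_mb += (c * sm) ** 2 + (m * sc) ** 2 + (sc * sm) ** 2

    print(np.sqrt(var_mb))  # standard deviation of the materials balance
    ```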

  9. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to solve increases. However, evaluating by a simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform simultaneously estimations of diverse quantities. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propound here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero-variance in the estimation of a mean of these quantities. Just like the zero-variance one, the method we propound is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)

  10. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait.
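
    The authors' likelihood ratio test is not reproduced here, but a related off-the-shelf check for variance heterogeneity across genotype groups is the Brown-Forsythe (median-centred Levene) test; a sketch on simulated groups:

    ```python
    import numpy as np
    from scipy import stats  # requires SciPy

    rng = np.random.default_rng(9)
    # Toy genotype groups with equal means but heterogeneous variances (a "vQTL").
    g0 = rng.normal(0.0, 1.0, 300)
    g1 = rng.normal(0.0, 1.3, 300)
    g2 = rng.normal(0.0, 1.7, 300)

    stat, p = stats.levene(g0, g1, g2, center="median")  # Brown-Forsythe variant
    print(stat, p)
    ```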

  11. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.

  12. Robust variance estimation with dependent effect sizes: practical considerations including a software tutorial in Stata and spss.

    Science.gov (United States)

    Tanner-Smith, Emily E; Tipton, Elizabeth

    2014-03-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and SPSS macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.

  13. A task force model for statewide change in nursing education: building quality and safety.

    Science.gov (United States)

    Mundt, Mary H; Clark, Margherita Procaccini; Klemczak, Jeanette Wrona

    2013-01-01

    The purpose of this article was to describe a statewide planning process to transform nursing education in Michigan to improve quality and safety of patient care. A task force model was used to engage diverse partners in issue identification, consensus building, and recommendations. An example of a statewide intervention in nursing education and practice that was executed was the Michigan Quality and Safety in Nursing Education Institute, which was held using an integrated approach to academic-practice partners from all state regions. This paper describes the unique advantage of leadership by the Michigan Chief Nurse Executive, the existence of a nursing strategic plan, and a funding model. An overview of the Task Force on Nursing Education is presented with a focus on the model's 10 process steps and resulting seven recommendations. The Michigan Nurse Education Council was established to implement the recommendations that included quality and safety. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Evaluation of a statewide science inservice and outreach program: Teacher and student outcomes

    Science.gov (United States)

    Lott, Kimberly Hardiman

    Alabama Science in Motion (ASIM) is a statewide in-service and outreach program designed to provide in-service training for teachers in technology and content knowledge. ASIM is also designed to increase student interest in science and future science careers. The goals of ASIM include: to complement, enhance and facilitate implementation of the Alabama Course of Study: Science, to increase student interest in science and scientific careers, and to provide high school science teachers with curriculum development and staff development opportunities that will enhance their subject-content expertise, technology background, and instructional skills. This study was conducted to evaluate the goals and other measurable outcomes of the chemistry component of ASIM. Data were collected from 19 chemistry teachers and 182 students that participated in ASIM and 6 chemistry teachers and 42 students that do not participate in ASIM using both surveys and student records. Pre-treatment Chi-Square tests revealed that the teachers did not differ in years of chemistry teaching experience, major in college, and number of classes other than chemistry taught. Pre-treatment Chi-Square tests revealed that the students did not differ in age, ethnicity, school classification, or school type. The teacher survey used measured attitudes towards inquiry-based teaching, frequency of technology used by teacher self-report and perceived teaching ability of chemistry topics from the Alabama Course of Study-Science. The student surveys used were the Test of Science Related Attitudes (TOSRA) and a modified version of the Test of Integrated Process Skills (TIPS). The students' science scores from the Stanford Achievement Test (SAT-9) were also obtained from student records. Analysis of teacher data using a MANOVA design revealed that participation in ASIM had a significantly positive effect on teacher attitude towards inquiry-based teaching and the frequency of technology used; however, there was no

  15. Staff turnover in statewide implementation of ACT: relationship with ACT fidelity and other team characteristics

    OpenAIRE

    Rollins, Angela L.; Salyers, Michelle P.; Tsai, Jack; Lydick, Jennifer M.

    2010-01-01

    Staff turnover on assertive community treatment (ACT) teams is a poorly understood phenomenon. This study examined annual turnover and fidelity data collected in a statewide implementation of ACT over a 5-year period. Mean annual staff turnover across all observations was 30.0%. Turnover was negatively correlated with overall fidelity at Year 1 and 3. The team approach fidelity item was negatively correlated with staff turnover at Year 3. For 13 teams with 3 years of follow-up data, turnover ...

  16. Building Statewide Infrastructure for the Academic Support of Students With Mild Traumatic Brain Injury.

    Science.gov (United States)

    Gioia, Gerard A; Glang, Ann E; Hooper, Stephen R; Brown, Brenda Eagan

    To focus attention on building statewide capacity to support students with mild traumatic brain injury (mTBI)/concussion. Consensus-building process with a multidisciplinary group of clinicians, researchers, policy makers, and state Department of Education personnel. The white paper presents the group's consensus on the essential components of a statewide educational infrastructure to support the management of students with mTBI. The nature and recovery process of mTBI are briefly described specifically with respect to its effects on school learning and performance. State and local policy considerations are then emphasized to promote implementation of a consistent process. Five key components to building a statewide infrastructure for students with mTBI are described including (1) definition and training of the interdisciplinary school team, (2) professional development of the school and medical communities, (3) identification, assessment, and progress monitoring protocols, (4) a flexible set of intervention strategies to accommodate students' recovery needs, and (5) systematized protocols for active communication among medical, school, and family team members. The need for research to guide effective program implementation is stressed. This guiding framework strives to assist the development of support structures for recovering students with mTBI to optimize academic outcomes. Until more evidence is available on academic accommodations and other school-based supports, educational systems should follow current best practice guidelines.

  17. Statewide screening for low-level radioactive waste shallow land burial sites

    International Nuclear Information System (INIS)

    Staub, W.P.; Cannon, J.B.; Stratton, L.E.

    1984-01-01

    A methodology was developed for statewide low-level waste site screening based on NRC site selection criteria. The methodology and criteria were tested in Tennessee to determine their effectiveness in narrowing the choice of sites for more intensive localized site screening. The statewide screening methodology entailed two steps. The first step was to select one or more physiographic provinces wherein sites meeting the criteria were most likely to be found. The second step was to select one or more suitable outcrop bands from within the most favorable physiographic provinces. These selections were based entirely on examination of existing literature and maps at scales no larger than 1:250,000. The statewide screening project identified only one suitable physiographic province (the Mississippi Embayment region) and one favorable outcrop band (the Coon Creek Formation) within a three county area of western Tennessee. Ground water monitoring and predictability proved to be the most difficult criterion to meet. This criterion alone eliminated other outcrop bands in the Mississippi Embayment as well as the Eastern Highland Rim and Western Highland Rim physiographic provinces. Other provinces failed to meet several screening criteria. 3 references, 3 figures, 1 table

  18. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.

  19. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    Science.gov (United States)

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance
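
    A minimal sketch of the mean-variance histogram construction described here, on a simulated, idealized trace (window width and noise level are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    # Toy single-channel record: closed (0), open (1), and a 0.5 sublevel, plus noise.
    trace = np.concatenate([np.zeros(300), np.ones(200), np.zeros(150),
                            np.full(100, 0.5), np.zeros(250)])
    trace += rng.normal(0, 0.1, trace.size)

    N = 10  # window width in samples
    windows = np.lib.stride_tricks.sliding_window_view(trace, N)
    means = windows.mean(axis=1)
    variances = windows.var(axis=1)

    # 2-D histogram of (mean, variance) pairs; conductance levels appear as
    # low-variance clusters along the mean axis.
    hist, mean_edges, var_edges = np.histogram2d(means, variances, bins=50)
    print(hist.shape)
    ```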

  20. Genome Structural Diversity among 31 Bordetella pertussis Isolates from Two Recent U.S. Whooping Cough Statewide Epidemics.

    Science.gov (United States)

    Bowden, Katherine E; Weigand, Michael R; Peng, Yanhui; Cassiday, Pamela K; Sammons, Scott; Knipe, Kristen; Rowe, Lori A; Loparev, Vladimir; Sheth, Mili; Weening, Keeley; Tondella, M Lucia; Williams, Margaret M

    2016-01-01

    During 2010 and 2012, California and Vermont, respectively, experienced statewide epidemics of pertussis with differences seen in the demographic affected, case clinical presentation, and molecular epidemiology of the circulating strains. To overcome limitations of the current molecular typing methods for pertussis, we utilized whole-genome sequencing to gain a broader understanding of how current circulating strains are causing large epidemics. Through the use of combined next-generation sequencing technologies, this study compared de novo, single-contig genome assemblies from 31 out of 33 Bordetella pertussis isolates collected during two separate pertussis statewide epidemics and 2 resequenced vaccine strains. Final genome architecture assemblies were verified with whole-genome optical mapping. Sixteen distinct genome rearrangement profiles were observed in epidemic isolate genomes, all of which were distinct from the genome structures of the two resequenced vaccine strains. These rearrangements appear to be mediated by repetitive sequence elements, such as high-copy-number mobile genetic elements and rRNA operons. Additionally, novel and previously identified single nucleotide polymorphisms were detected in 10 virulence-related genes in the epidemic isolates. Whole-genome variation analysis identified state-specific variants, and coding regions bearing nonsynonymous mutations were classified into functional annotated orthologous groups. Comprehensive studies on whole genomes are needed to understand the resurgence of pertussis and develop novel tools to better characterize the molecular epidemiology of evolving B. pertussis populations. IMPORTANCE Pertussis, or whooping cough, is the most poorly controlled vaccine-preventable bacterial disease in the United States, which has experienced a resurgence for more than a decade. Once viewed as a monomorphic pathogen, B. pertussis strains circulating during epidemics exhibit diversity visible on a genome structural

  1. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  2. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).

  3. Working Around Cosmic Variance: Remote Quadrupole Measurements of the CMB

    Science.gov (United States)

    Adil, Arsalan; Bunn, Emory

    2018-01-01

    Anisotropies in the CMB maps continue to revolutionize our understanding of the Cosmos. However, the statistical interpretation of these anisotropies is tainted with a posteriori statistics. The problem is particularly emphasized for lower order multipoles, i.e. in the cosmic variance regime of the power spectrum. Naturally, the solution lies in acquiring a new data set – a rather difficult task given the sample size of the Universe. The CMB temperature, in theory, depends on: the direction of photon propagation, the time at which the photons are observed, and the observer’s location in space. In existing CMB data, only the first parameter varies. However, as first pointed out by Kamionkowski and Loeb, a solution lies in making the so-called “Remote Quadrupole Measurements” by analyzing the secondary polarization produced by incoming CMB photons via the Sunyaev-Zel’dovich (SZ) effect. These observations allow us to measure the projected CMB quadrupole at the location and look-back time of a galaxy cluster. At low redshifts, the remote quadrupole is strongly correlated to the CMB anisotropy from our last scattering surface. We provide here a formalism for computing the covariance and correlation matrices for both the two-point correlation function on the last scattering surface of a galaxy cluster and the cross correlation of the remote quadrupole with the local CMB. We then calculate these matrices based on a fiducial model and a non-standard model that suppresses power at large angles for ~10⁴ clusters up to z=2. We anticipate making a priori predictions of the differences between our expectations for the standard and non-standard models. Such an analysis is timely as we approach the CMB-S4 era, which will provide us with an extensive SZ cluster catalogue.

  4. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  5. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth is required.
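
    A small sketch of PD under rarefaction for a hypothetical three-tip tree, using full enumeration of subsets rather than the paper's closed-form or Monte Carlo routes:

    ```python
    import numpy as np
    from itertools import combinations

    # Branches of a hypothetical tree: (length, tips descending from the branch).
    branches = [
        (1.0, {"A"}), (1.0, {"B"}), (2.0, {"C"}),
        (0.5, {"A", "B"}), (1.5, {"A", "B", "C"}),
    ]
    tips = ["A", "B", "C"]

    def pd_of(sample):
        # PD = total length of branches subtending at least one sampled tip.
        s = set(sample)
        return sum(length for length, below in branches if below & s)

    m = 2  # rarefaction depth
    values = [pd_of(s) for s in combinations(tips, m)]
    print(np.mean(values), np.var(values))  # exact here, by full enumeration
    ```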

  6. Studying Variance in the Galactic Ultra-compact Binary Population

    Science.gov (United States)

    Larson, Shane; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  7. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can either be determined directly by nondestructive assay (NDA) or by estimating the weight of material in the container and the concentration of U in that material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration: the variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables.
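
    The paper's estimators are not reproduced above, but for independent factors the exact variance of a product follows directly from E[(XY)^2] − (E[XY])^2. A standard statement (independence of the weight and concentration estimates is an assumption here, not a claim from the paper):

```latex
% Variance of a product of independent random variables X (weight) and
% Y (concentration), with means \mu_X, \mu_Y and variances \sigma_X^2, \sigma_Y^2:
\operatorname{Var}(XY) \;=\; \mu_X^2\,\sigma_Y^2 \;+\; \mu_Y^2\,\sigma_X^2 \;+\; \sigma_X^2\,\sigma_Y^2 .
% A plug-in estimate substitutes the estimated means and variances;
% if X and Y are correlated, additional covariance terms appear.
```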

  8. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and a thinning interval of 100 rounds. The posterior means of additive genetic variance components were 0.15, 4.18, 14.62, 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23, 1.29, 2.76, 4.12 and 5.16; and the posterior means of residual variance components were 0.084, 6.43, 22.66, 31.21 and 30.85 at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33, 0.35, 0.36, 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the proportion of phenotypic variance due to maternal environment, whose estimates were 0.50, 0.11, 0.07, 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
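
    The MTGSAM run itself cannot be reproduced here, but the post-processing recipe the abstract describes (discard a burn-in, keep every k-th round, summarise the rest) is generic MCMC practice; a minimal sketch with a synthetic chain standing in for one variance component:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for one MTGSAM output chain (a real chain is autocorrelated;
# an i.i.d. draw keeps this sketch short).
chain = rng.normal(loc=30.0, scale=2.0, size=80_000)   # 80,000 rounds

burn_in, thin = 30_000, 100
kept = chain[burn_in::thin]     # (80,000 - 30,000) / 100 = 500 samples

print(f"retained samples : {kept.size}")
print(f"posterior mean   : {kept.mean():.2f}")
print(f"posterior 95% CI : {np.percentile(kept, [2.5, 97.5]).round(2)}")
```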

  9. Variance squeezing and entanglement of the XX central spin model

    International Nuclear Information System (INIS)

    El-Orany, Faisal A A; Abdalla, M Sebawe

    2011-01-01

    In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider the central atom and its surroundings to be initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects appear in the behavior of all components of the system. The atomic variance can exhibit a revival-collapse phenomenon, depending on the value of the detuning parameter.

  10. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  11. Variance squeezing and entanglement of the XX central spin model

    Energy Technology Data Exchange (ETDEWEB)

    El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2011-01-21

    In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider the central atom and its surroundings to be initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects appear in the behavior of all components of the system. The atomic variance can exhibit a revival-collapse phenomenon, depending on the value of the detuning parameter.

  12. Effect Of A “No Superuser Opioid Prescription” Policy On ED Visits And Statewide Opioid Prescription

    Directory of Open Access Journals (Sweden)

    Zachary P. Kahler

    2017-07-01

    Full Text Available Introduction: The U.S. opioid epidemic has highlighted the need to identify patients at risk of opioid abuse and overdose. We initiated a novel emergency department (ED)-based interventional protocol to transition our superuser patients from the ED to an outpatient chronic pain program. The objective was to evaluate the protocol’s effect on superusers’ annual ED visits. Secondary outcomes included a quantitative evaluation of statewide opioid prescriptions for these patients, unique prescribers of controlled substances, and ancillary testing. Methods: Patients were referred to the program with the following inclusion criteria: ≥ 6 visits per year to the ED; at least one visit identified by the attending physician as primarily driven by opioid-seeking behavior; and a review by a committee comprising ED administration and case management. Patients were referred to a pain management clinic and informed that they would no longer receive opioid prescriptions from visits to the ED for chronic pain complaints. Electronic medical record (EMR) alerts notified ED providers of the patient’s referral at subsequent visits. We analyzed one year of data pre- and post-referral. Results: A total of 243 patients had one year of data post-referral for analysis. Median annual ED visits decreased from 14 to 4 (58% decrease, 95% CI [50 to 66]). We also found statistically significant decreases for these patients’ state prescription drug monitoring program (PDMP) opioid prescriptions (21 to 13), total unique controlled-substance prescribers (11 to 7), computed tomography imaging (2 to 0), radiographs (5 to 1), electrocardiograms (12 to 4), and labs run (47 to 13). Conclusion: This program and the EMR-based alerts were successful at decreasing local ED visits, annual opioid prescriptions, and hospital resource allocation for this population of patients. There is no evidence that these patients diverted their visits to neighboring EDs after being informed that they would no longer receive opioid prescriptions from the ED.

  13. Validation of a risk stratification tool for fall-related injury in a state-wide cohort.

    Science.gov (United States)

    McCoy, Thomas H; Castro, Victor M; Cagan, Andrew; Roberson, Ashlee M; Perlis, Roy H

    2017-02-06

    A major preventable contributor to healthcare costs among older individuals is fall-related injury. We sought to validate a tool to stratify such risk based on readily available clinical data, including projected medication adverse effects, using state-wide medical claims data. Sociodemographic and clinical features were drawn from health claims paid in the state of Massachusetts for individuals aged 35-65 with a hospital admission for a period spanning January-December 2012. Previously developed logistic regression models of hospital readmission for fall-related injury were refit in a training set comprising a randomly selected 70% of individuals and examined in a test set comprising the remaining 30%. Medications at admission were summarised based on reported adverse effect frequencies in published medication labelling. The setting was the Massachusetts health system; participants were 68 764 hospitalised individuals aged 35-65 years; the outcome was hospital readmission for fall-related injury, defined by claims code. A total of 2052 individuals (3.0%) were hospitalised for fall-related injury within 90 days of discharge, and 3391 (4.9%) within 180 days. After recalibrating the model in a training data set comprising 48 136 individuals (70%), model discrimination in the remaining 30% test set yielded an area under the receiver operating characteristic curve (AUC) of 0.74 (95% CI 0.72 to 0.76). AUCs were similar across age decades (0.71 to 0.78) and sex (0.72 male, 0.76 female), and across most common diagnostic categories other than psychiatry. For individuals in the highest risk quartile, 11.4% experienced a fall within 180 days versus 1.2% in the lowest risk quartile; 57.6% of falls occurred in the highest risk quartile. This analysis of state-wide claims data demonstrates the feasibility of predicting fall-related injury requiring hospitalisation using readily available sociodemographic and clinical details. This translatable approach to stratification allows for identification of individuals at elevated risk of fall-related injury.
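
    The claims-derived features are proprietary, but the recalibrate-then-discriminate workflow the abstract describes (70% refit, 30% held-out AUC) is easily sketched with scikit-learn; all data, feature counts, and coefficients below are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-ins for sociodemographic/medication features and a
# rare fall-related-injury readmission outcome (~3% prevalence).
X = rng.normal(size=(68_764, 10))
logit = -3.6 + X @ rng.normal(0.0, 0.4, size=10)
y = rng.random(68_764) < 1.0 / (1.0 + np.exp(-logit))

# 70% training set (refit/recalibration), 30% held-out test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test-set AUC = {auc:.2f}")
```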

  14. Partitioning of the variance in the growth parameters of Erwinia carotovora on vegetable products.

    Science.gov (United States)

    Shorten, P R; Membré, J-M; Pleasants, A B; Kubaczka, M; Soboleva, T K

    2004-06-01

    The objective of this paper was to estimate and partition the variability in the microbial growth model parameters describing the growth of Erwinia carotovora on pasteurised and non-pasteurised vegetable juice from laboratory experiments performed under different temperature-varying conditions. We partitioned the model parameter variance and covariance components into effects due to temperature profile and replicate using a maximum likelihood technique. Temperature profile and replicate were treated as random effects and the food substrate was treated as a fixed effect. The replicate variance component was small indicating a high level of control in this experiment. Our analysis of the combined E. carotovora growth data sets used the Baranyi primary microbial growth model along with the Ratkowsky secondary growth model. The variability in the microbial growth parameters estimated from these microbial growth experiments is essential for predicting the mean and variance through time of the E. carotovora population size in a product supply chain and is the basis for microbiological risk assessment and food product shelf-life estimation. The variance partitioning made here also assists in the management of optimal product distribution networks by identifying elements of the supply chain contributing most to product variability. Copyright 2003 Elsevier B.V.
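
    The paper's maximum likelihood machinery is not shown in the abstract; as a hedged stand-in, the partition of parameter variability into a temperature-profile component and a replicate component can be sketched with classical one-way random-effects (method-of-moments) estimators on synthetic growth-rate estimates:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic growth-rate estimates: p temperature profiles, r replicates each.
p, r = 8, 4
profile_effect = rng.normal(0.0, 0.30, size=p)        # between-profile spread
rates = 0.9 + profile_effect[:, None] + rng.normal(0.0, 0.05, size=(p, r))

# One-way random-effects ANOVA, method of moments:
grand = rates.mean()
msb = r * ((rates.mean(axis=1) - grand) ** 2).sum() / (p - 1)    # between profiles
msw = ((rates - rates.mean(axis=1, keepdims=True)) ** 2).sum() / (p * (r - 1))

var_replicate = msw
var_profile = max((msb - msw) / r, 0.0)

print(f"profile variance component   : {var_profile:.4f}  (true 0.09)")
print(f"replicate variance component : {var_replicate:.4f}  (true 0.0025)")
```

    A small replicate component relative to the profile component corresponds to the "high level of control" the abstract reports.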

  15. Isolating Trait and Method Variance in the Measurement of Callous and Unemotional Traits.

    Science.gov (United States)

    Paiva-Salisbury, Melissa L; Gill, Andrew D; Stickle, Timothy R

    2017-09-01

    To examine hypothesized influence of method variance from negatively keyed items in measurement of callous-unemotional (CU) traits, nine a priori confirmatory factor analysis model comparisons of the Inventory of Callous-Unemotional Traits were evaluated on multiple fit indices and theoretical coherence. Tested models included a unidimensional model, a three-factor model, a three-bifactor model, an item response theory-shortened model, two item-parceled models, and three correlated trait-correlated method minus one models (unidimensional, correlated three-factor, and bifactor). Data were self-reports of 234 adolescents (191 juvenile offenders, 43 high school students; 63% male; ages 11-17 years). Consistent with hypotheses, models accounting for method variance substantially improved fit to the data. Additionally, bifactor models with a general CU factor better fit the data compared with correlated factor models, suggesting a general CU factor is important to understanding the construct of CU traits. Future Inventory of Callous-Unemotional Traits analyses should account for method variance from item keying and response bias to isolate trait variance.

  16. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    Science.gov (United States)

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
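
    As a concrete (purely illustrative) contrast between the two valuation rules the abstract compares, consider a two-outcome gamble valued once by expected utility with a CRRA utility and once by a mean-variance trade-off; the payoffs and risk parameters are arbitrary choices, not from the paper:

```python
import numpy as np

payoffs = np.array([120.0, 80.0])   # two states of nature
probs = np.array([0.5, 0.5])

# Expected utility: weight each state's utility by its probability.
def crra(x, gamma=2.0):
    return x ** (1 - gamma) / (1 - gamma)

eu = probs @ crra(payoffs)

# Mean-variance: one reward/risk trade-off, no per-state bookkeeping.
mean = probs @ payoffs
var = probs @ (payoffs - mean) ** 2
risk_aversion = 0.01
mv = mean - 0.5 * risk_aversion * var

print(f"expected utility value : {eu:.5f}")
print(f"mean-variance value    : {mv:.2f}")
```

    The mean-variance number is computed from two summary statistics, which is why learning is easier under that rule, whereas the expected-utility sum must track every state separately.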

  17. Quantitative genetic variance and multivariate clines in the Ivyleaf morning glory, Ipomoea hederacea.

    Science.gov (United States)

    Stock, Amanda J; Campitelli, Brandon E; Stinchcombe, John R

    2014-08-19

    Clinal variation is commonly interpreted as evidence of adaptive differentiation, although clines can also be produced by stochastic forces. Understanding whether clines are adaptive therefore requires comparing clinal variation to background patterns of genetic differentiation at presumably neutral markers. Although this approach has frequently been applied to single traits at a time, we have comparatively fewer examples of how multiple correlated traits vary clinally. Here, we characterize multivariate clines in the Ivyleaf morning glory, examining how suites of traits vary with latitude, with the goal of testing for divergence in trait means that would indicate past evolutionary responses. We couple this with analysis of genetic variance in clinally varying traits in 20 populations to test whether past evolutionary responses have depleted genetic variance, or whether genetic variance declines approaching the range margin. We find evidence of clinal differentiation in five quantitative traits, with little evidence of isolation by distance at neutral loci that would suggest non-adaptive or stochastic mechanisms. Within and across populations, the traits that contribute most to population differentiation and clinal trends in the multivariate phenotype are genetically variable as well, suggesting that a lack of genetic variance will not cause absolute evolutionary constraints. Our data are broadly consistent with theoretical predictions of polygenic clines in response to shallow environmental gradients. Ecologically, our results are consistent with past findings of natural selection on flowering phenology, presumably due to season-length variation across the range. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  18. California drug courts: outcomes, costs and promising practices: an overview of Phase II in a statewide study.

    Science.gov (United States)

    Carey, Shannon M; Finigan, Michael; Crumpton, Dave; Waller, Mark

    2006-11-01

    The rapid expansion of drug courts in California and the state's uncertain fiscal climate highlighted the need for definitive cost information on drug court programs. This study focused on creating a research design that can be utilized for statewide and national cost-assessment of drug courts by conducting in-depth case studies of the costs and benefits in nine adult drug courts in California. A Transactional Institutional Costs Analysis (TICA) approach was used, allowing researchers to calculate costs based on every individual's transactions within the drug court or the traditional criminal justice system. This methodology also allows the calculation of costs and benefits by agency (e.g., Public Defender's office, court, District Attorney). Results in the nine sites showed that the majority of agencies save money in processing an offender through drug court. Overall, for these nine study sites, participation in drug court saved the state over 9 million dollars in criminal justice and treatment costs due to lower recidivism in drug court participants. Based on the lessons learned in Phases I and II, Phase III of this study focuses on the creation of a web-based drug court cost self-evaluation tool (DC-CSET) that drug courts can use to determine their own costs and benefits.

  19. Stepping up to the challenge: the development, implementation, and assessment of a statewide, regional, leadership program for school nutrition directors.

    Science.gov (United States)

    Bergman, Jacqueline J; Briggs, Marilyn M; Beall, Deborah L; Curwood, Sandy; Gray, Pilar; Soiseth, Scott; Taylor, Rodney K; Zidenberg-Cherr, Sheri

    2015-01-01

    A statewide professional development program was developed and implemented throughout California for school nutrition directors with the goal of creating healthy school environments and regional networks for collaboration and healthy school environment sustainability. Needs of school nutrition directors were identified through a needs assessment questionnaire. Results of the needs assessment questionnaire (n = 256) identified (a) planning cost-effective menus; (b) reducing calories, sodium, saturated fat, and trans fat in menus; and (c) using U.S. Department of Agriculture foods cost-effectively as the most useful topics. Highest rated topics informed the content of the professional development program. A post-professional development questionnaire identified key "insights, inspirations, and strategies" as (a) marketing of school foods program, (b) expansion of salad bars, and (c) collaboration with community partners. A 6-month follow-up questionnaire identified that 86% of participants made progress toward implementing at least one of their five insights, inspirations, and strategies in their school districts. Most common areas that were implemented were marketing and branding (32%), revamping salad bars (18%), and motivating staff (16%). School and Community Actions for Nutrition survey analysis showed a significant increase in the use of marketing methods in school nutrition programs from baseline to 6-month post-program implementation (p = .024). © 2014 Society for Public Health Education.

  20. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  1. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)
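
    The mini-core calculation cannot be reproduced here, but the zero-variance principle itself (bias the sampling by the exact adjoint/importance function and the estimator's variance vanishes) can be shown in one dimension. The sketch below uses a toy integral, not MCNP, and compares uniform sampling with sampling from a density proportional to the integrand:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# Toy problem: estimate I = \int_0^1 3x^2 dx = 1 by Monte Carlo.
f = lambda x: 3.0 * x ** 2

# Analog: sample x uniformly on (0, 1); nonzero variance.
x = rng.random(n)
analog = f(x)

# Zero-variance scheme: sample from g(x) = f(x)/I = 3x^2 (inverse CDF
# x = u^(1/3)) and score f(x)/g(x); every history then scores exactly I.
u = 1.0 - rng.random(n)          # in (0, 1], avoids x = 0
x_zv = u ** (1.0 / 3.0)
zero_var = f(x_zv) / (3.0 * x_zv ** 2)

print(f"analog       : mean={analog.mean():.4f}  var={analog.var():.4f}")
print(f"zero-variance: mean={zero_var.mean():.4f}  var={zero_var.var():.2e}")
```

    In transport problems the exact importance function is unknown, which is why the paper uses a deterministic adjoint solution as an approximation to it.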

  2. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  3. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  4. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
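
    The BEKK derivations are truncated in these records; as a hedged univariate illustration of what variance targeting means in practice, the sketch below fixes the GARCH(1,1) intercept so that the model's unconditional variance matches the sample variance, leaving only the dynamic parameters for the second estimation step (all parameter values are assumptions for the toy data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate returns with volatility clustering (toy GARCH(1,1) data).
n, alpha, beta = 2_000, 0.08, 0.90
r = np.empty(n)
h = 1.0
for t in range(n):
    r[t] = np.sqrt(h) * rng.normal()
    h = 0.02 + alpha * r[t] ** 2 + beta * h

# Variance targeting: set omega = sample_var * (1 - alpha - beta) so the
# unconditional variance is matched by construction; a two-step estimator
# would then fit alpha and beta only (plug-in values used here).
sample_var = r.var()
a_hat, b_hat = 0.08, 0.90        # assumed ARCH and GARCH terms
omega = sample_var * (1 - a_hat - b_hat)

print(f"sample variance          : {sample_var:.4f}")
print(f"targeted omega           : {omega:.4f}")
print(f"implied uncond. variance : {omega / (1 - a_hat - b_hat):.4f}")
```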

  5. Genetic variance components for residual feed intake and feed ...

    African Journals Online (AJOL)

    Feeding costs of animals is a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed efficiency aims to reduce feeding cost in beef cattle and thereby improve profitability. This study estimated genetic (co)variances between weaning weight and other production, reproduction ...

  6. Cumulative Prospect Theory, Option Returns, and the Variance Premium

    NARCIS (Netherlands)

    Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver

    The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the

  7. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and the management of production and reservoirs. We show that the power spectra of the time series involved follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and of possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph as a result of current production objectives, whereas wave diffusion decreased the variance over the shortest periods (white noise).
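
    The paper's exact spectral expression is not reproduced here, but the identity underlying any such decomposition (the variance of a discrete hydrograph equals the sum of its power spectrum, by Parseval's theorem) is easy to verify; a minimal sketch with a synthetic daily discharge series:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily discharge: seasonal cycle plus regulation-like noise.
t = np.arange(3 * 365)
q = 100 + 30 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 5, t.size)

# Parseval: variance = (1/N^2) * sum of |FFT of the demeaned series|^2.
x = q - q.mean()
spectrum = np.abs(np.fft.fft(x)) ** 2
var_spectral = spectrum.sum() / x.size ** 2

print(f"time-domain variance : {x.var():.4f}")
print(f"spectral variance    : {var_spectral:.4f}")
```

    Partitioning that spectral sum by frequency band is what lets the paper attribute short-period variance to regulation versus wave diffusion.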

  8. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    Van Zuijlen M

    2009-01-01

    Full Text Available We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in auditing, as well as in the processing of environmental data, in mind.

  9. Robust estimation of the noise variance from background MR data

    NARCIS (Netherlands)

    Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.

    2006-01-01

    In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited, and a new, robust, and easy-to-use method is presented based on maximum likelihood estimation.

  10. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas

    2011-01-01

    The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these...

  11. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

    distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1 − ε)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and we show that our...

  12. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University ' La Sapienza' , Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University ' La Sapienza' , Rome 00185 (Italy)

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals, to model outputs, and to its implications for modelling the firing patterns of single neurons.

  13. A Visual Model for the Variance and Standard Deviation

    Science.gov (United States)

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays shows how the standard deviation is the side length of the average square.
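
    A tiny numeric companion to that picture (illustrative, not taken from the paper): each squared deviation is a square's area, the variance is the mean of those areas, and the standard deviation is the side of the mean square:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

areas = (data - data.mean()) ** 2   # one square per observation
variance = areas.mean()             # area of the "average square"
std_dev = np.sqrt(variance)         # its side length

print(f"variance = {variance}, standard deviation = {std_dev}")
```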

  14. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple

  15. Asymptotics of variance of the lattice point count

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří

    2008-01-01

    Roč. 58, č. 3 (2008), s. 751-758 ISSN 0011-4642 R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : point lattice * variance Subject RIV: BA - General Mathematics Impact factor: 0.210, year: 2008

  16. Vertical velocity variances and Reynolds stresses at Brookhaven

    DEFF Research Database (Denmark)

    Busch, Niels E.; Brown, R.M.; Frizzola, J.A.

    1970-01-01

    Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind velocity.

  17. An observation on the variance of a predicted response in ...

    African Journals Online (AJOL)

    ... these properties and computational simplicity. To avoid over fitting, along with the obvious advantage of having a simpler equation, it is shown that the addition of a variable to a regression equation does not reduce the variance of a predicted response. Key words: Linear regression; Partitioned matrix; Predicted response ...

  18. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity

  19. The Threat of Common Method Variance Bias to Theory Building

    Science.gov (United States)

    Reio, Thomas G., Jr.

    2010-01-01

    The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…

  20. 40 CFR 268.44 - Variance from a treatment standard.

    Science.gov (United States)

    2010-07-01

    ... complete petition may be requested as needed to send to affected states and Regional Offices. (e) The... provide an opportunity for public comment. The final decision on a variance from a treatment standard will... than) the concentrations necessary to minimize short- and long-term threats to human health and the...