WorldWideScience

Sample records for variance model based

  1. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis and prediction of future outcomes, and uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few have been proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is settled and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices for performing sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutually dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and on ANOVA representations of the model output. In the applications, we show the value of the new sensitivity indices for model simplification. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.

  2. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most recent applications of GSA use either regression-based methods, which require close-to-linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationships between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors with the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the important role played by a modelling approach for MBR that simultaneously takes biological and physical processes into account.

  3. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled as a Gaussian white noise process with a mean of zero for each variance value. The EMG signal variance is taken as a random variable that follows an inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of the distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of the proposed surface EMG model, which is capable of representing variance distribution, and of a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
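The generative model described above (zero-mean Gaussian samples whose variance is itself an inverse-gamma random variable) can be simulated in a few lines. This is a minimal sketch: the shape and scale values below are illustrative, not taken from the paper.

```python
import numpy as np

# Simulate the generative model: each sample's variance is drawn from an
# inverse gamma distribution, and the signal is zero-mean Gaussian given
# that variance. alpha/beta are illustrative values only.
rng = np.random.default_rng(0)
alpha, beta, n = 5.0, 4.0, 200_000
sigma2 = 1.0 / rng.gamma(alpha, 1.0 / beta, size=n)   # inverse gamma draws
emg = rng.normal(0.0, np.sqrt(sigma2))                # Gaussian given variance

# The marginal is a scaled Student-t: heavier-tailed than a Gaussian with
# the same overall variance E[sigma2] = beta / (alpha - 1) = 1.
kurtosis = np.mean(emg**4) / np.var(emg) ** 2
```

The excess kurtosis of the simulated signal is one simple way to see why a plain Gaussian model (fixed variance) underfits such data.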

  4. Toward a more robust variance-based global sensitivity analysis of model outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
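As background to the indices discussed above, a basic Monte Carlo estimate of first-order Sobol' indices can be sketched with the pick-freeze (Saltelli-style) estimator. This is not the paper's adaptive replicated-LHS procedure; the toy model and sample size are illustrative.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples, rng):
    """Pick-freeze estimator of first-order Sobol' indices:
    S_i = E[Y_B * (Y_ABi - Y_A)] / Var(Y_A)."""
    A = rng.uniform(size=(n_samples, n_inputs))
    B = rng.uniform(size=(n_samples, n_inputs))
    y_a, y_b = model(A), model(B)
    var_y = np.var(y_a)
    indices = []
    for i in range(n_inputs):
        AB = A.copy()
        AB[:, i] = B[:, i]          # freeze all inputs except x_i
        indices.append(np.mean(y_b * (model(AB) - y_a)) / var_y)
    return np.array(indices)

# Toy additive model y = x1 + 2*x2 on uniform inputs:
# analytic first-order indices are 0.2 and 0.8.
model = lambda x: x[:, 0] + 2.0 * x[:, 1]
S = first_order_sobol(model, 2, 50_000, np.random.default_rng(0))
```

For this additive model the first-order indices sum to one; interactions in a real model would leave a gap between that sum and one.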

  5. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
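The core variance-among-updates idea can be illustrated with synthetic arrays: where single-frequency updates disagree strongly, the per-cell variance is large and the region is flagged. The grid, the three mock "monofrequency updates" and the threshold below are all made up for illustration; they are not from the paper.

```python
import numpy as np

# Three synthetic velocity-model updates that agree in the background but
# disagree (high variance) inside a mock "salt" zone where cycle-skipping
# would corrupt the gradients.
rng = np.random.default_rng(0)
nz, nx = 60, 80
updates = np.ones((3, nz, nx))                  # consistent background updates
updates[:, 20:40, 30:50] += rng.normal(0.0, 5.0, size=(3, 20, 20))

variance = np.var(updates, axis=0)              # variance across frequencies
mask = variance > 1.0                           # flag cycle-skipped cells
```

In the paper's workflow such a mask would drive the velocity interpolation step; here it simply localizes the inconsistent zone.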

  6. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities".
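The Dk statistic defined above is straightforward to compute from any relationship matrix. A minimal sketch, assuming a numpy array K; the estimated variance component used at the end is a hypothetical value, not from the study.

```python
import numpy as np

def dk_statistic(K):
    """Dk: average self-relationship (diagonal of K) minus the average of
    all (self- and across-) relationships in K."""
    return np.mean(np.diag(K)) - np.mean(K)

# For 4 unrelated individuals (K = I), Dk = 1 - 4/16 = 0.75, so the
# variance referred to this reference population is 0.75 times the
# estimated component.
K = np.eye(4)
dk = dk_statistic(K)
sigma2_hat = 2.0                      # hypothetical estimated variance
sigma2_base = dk * sigma2_hat         # variance in the reference population
```

For a typical pedigree relationship matrix the diagonal is near 1 and off-diagonals are small, so Dk stays close to 1, matching the abstract's remark.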

  7. Mean-variance model for portfolio optimization with background risk based on uncertainty theory

    Science.gov (United States)

    Zhai, Jia; Bai, Manying

    2018-04-01

    The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction cost based on uncertainty theory. In the portfolio selection problem, returns of securities and asset liquidity are treated as uncertain variables because of incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the portfolio frontier characteristics considering independently additive background risk. In addition, we discuss some effects of background risk and the liquidity constraint on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.

  8. A new media optimizer based on the mean-variance model

    Directory of Open Access Journals (Sweden)

    Pedro Jesus Fernandez

    2007-01-01

    In the financial markets, there is a well-established portfolio optimization model called the generalized mean-variance model (or generalized Markowitz model). This model considers that a typical investor, while expecting returns to be high, also expects returns to be as certain as possible. In this paper we introduce a new media optimization system based on the mean-variance model, a novel approach in media planning. After presenting the model in its full generality, we discuss possible advantages of the mean-variance paradigm, such as its flexibility in modeling the optimization problem, its ability to deal with many media performance indices - satisfying most media plan needs - and, most importantly, the property of diversifying the media portfolios in a natural way, without the need to set up ad hoc constraints to enforce diversification.

  9. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of the weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results show that the portfolio composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
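For context, the unconstrained global minimum-variance special case of the mean-variance model has a closed form. A minimal sketch with an assumed covariance matrix (not the FBMKLCI data used in the study):

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1),
    the solution of min w' C w subject to sum(w) = 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Two uncorrelated assets with variances 1 and 4: the low-variance asset
# gets the larger weight, w = (0.8, 0.2).
cov = np.diag([1.0, 4.0])
w = min_variance_weights(cov)
```

Adding a target-return constraint, as in the mean-variance model proper, turns this into a two-constraint quadratic program with a similar closed form.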

  10. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of the prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of treating the prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction error variances. Different levels of modeling uncertainty and complexity are modeled through three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level...

  11. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step to advance the field. First, such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance allows one to evaluate model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, and to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  12. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  15. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.
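A much-simplified illustration of variance-function estimation, with ordinary polynomial fits standing in for the paper's partially linear single-index estimators; all names and constants below are illustrative.

```python
import numpy as np

# Heteroscedastic data: mean 2x, noise standard deviation 0.5 + x.
rng = np.random.default_rng(0)
n = 20_000
x = rng.uniform(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.5 + x)

beta = np.polyfit(x, y, 1)                    # fit the mean function
resid = y - np.polyval(beta, x)
# Model log(residual^2) as a quadratic in x; adding 1.2704 corrects the
# bias E[log chi^2_1] = -1.2704, so exp(fit/2) estimates sigma(x).
gamma = np.polyfit(x, np.log(resid**2 + 1e-12), 2)
sd_hat = np.exp((np.polyval(gamma, np.array([0.1, 0.9])) + 1.2704) / 2.0)
```

The recovered noise scale increases with x, i.e. the variance function here genuinely depends on the covariate and not only on the mean, echoing the paper's ozone example.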

  16. A new interpretation and validation of variance based importance measures for models with correlated inputs

    Science.gov (United States)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of a correlated input to the variance of the output, and they can be viewed as a complement and correction to the interpretation of the contributions of correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both indices contain the independent contribution of an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, from which the components, and their origins, of both contributions of a correlated input can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution of an input can be further decomposed into the variance contribution related to the correlation of the input with the other inputs and the independent contribution of the input itself; similarly, the total uncorrelated contribution can be further decomposed into an independent part due to interaction between the input and the others and an independent part due to the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measures are correct, and that this analytical clarification of the contribution of a correlated input to the model output is important for extending the theory and solutions for uncorrelated inputs to the correlated case.

  17. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems, and a model for heterogeneity of variance correction is developed for the same observations. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...

  18. Structural changes and out-of-sample prediction of realized range-based variance in the stock market

    Science.gov (United States)

    Gong, Xu; Lin, Boqiang

    2018-03-01

    This paper aims to examine the effects of structural changes on forecasting the realized range-based variance in the stock market. To account for structural changes in variance, we develop the HAR-RRV-SC model on the basis of the HAR-RRV model. The HAR-RRV and HAR-RRV-SC models are then used to forecast the realized range-based variance of the S&P 500 Index. We find many structural changes in variance in the U.S. stock market, and the period after the financial crisis contains more structural change points than the period before it. The out-of-sample results show that the HAR-RRV-SC model significantly outperforms the HAR-RRV model when forecasting the 1-day, 1-week, and 1-month realized range-based variances, which means that structural changes can improve out-of-sample prediction of realized range-based variance. The out-of-sample results remain robust across the alternative rolling fixed window, the alternative threshold value in the ICSS algorithm, and the alternative benchmark models. More importantly, we believe that considering structural changes can help improve the out-of-sample performance of most other existing HAR-RRV-type models beyond those used in this paper.
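A plain HAR-type regression, without the structural-change terms that distinguish HAR-RRV-SC, can be sketched as follows; the variance series here is a synthetic placeholder, not S&P 500 data.

```python
import numpy as np

def har_fit(rv):
    """HAR regression: tomorrow's variance on daily, weekly (5-day) and
    monthly (22-day) averages of the past realized range-based variance."""
    d = rv[21:-1]
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv) - 1)])
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv) - 1)])
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = rv[22:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X

rng = np.random.default_rng(0)
rv = np.abs(rng.normal(1.0, 0.2, 1000))      # placeholder variance series
beta, X = har_fit(rv)
forecast = X[-1] @ beta                      # fitted value for the last day
```

The HAR-RRV-SC extension would add regressors built from detected structural-change points (e.g. from the ICSS algorithm) to this design matrix.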

  19. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance in successive non-overlapping intervals is first proved, based on which an efficient space-partition sampling-based approach is then proposed in this paper. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals decreases as the number of sample points of the model input variables increases, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the idea of partitioning is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
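The law-of-total-variance idea behind the space-partition approach can be sketched with a simple binning estimator of the main effects from a single sample set; this is a generic given-data estimator, not the paper's exact scheme, and the toy model is illustrative.

```python
import numpy as np

def main_effects_by_binning(x, y, n_bins=50):
    """Given-data main effects: partition the samples into successive
    intervals of each input (assumed uniform on [0, 1]) and apply the law
    of total variance, S_i ~= Var_bins(E[Y | x_i in bin]) / Var(Y)."""
    d = x.shape[1]
    var_y = np.var(y)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    s = np.empty(d)
    for i in range(d):
        idx = np.clip(np.digitize(x[:, i], edges) - 1, 0, n_bins - 1)
        bin_means = np.array([y[idx == b].mean() for b in range(n_bins)])
        bin_counts = np.array([(idx == b).sum() for b in range(n_bins)])
        # variance of the conditional means, weighted by bin occupancy
        mu = np.average(bin_means, weights=bin_counts)
        s[i] = np.average((bin_means - mu) ** 2, weights=bin_counts) / var_y
    return s

rng = np.random.default_rng(1)
x = rng.uniform(size=(40_000, 2))
y = x[:, 0] + 2.0 * x[:, 1]          # analytic main effects: 0.2 and 0.8
S = main_effects_by_binning(x, y)
```

Note that one sample set yields every main effect at once, which is the efficiency argument the abstract makes for space partitioning.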

  20. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  1. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  3. Detection of Outliers in Panel Data of Intervention Effects Model Based on Variance of Remainder Disturbance

    Directory of Open Access Journals (Sweden)

    Yanfang Lyu

    2015-01-01

    The presence of outliers can result in seriously biased parameter estimates. In order to detect outliers in panel data models, this paper presents a modeling method to assess the intervention effects based on the variance of the remainder disturbance, using an arbitrary strictly positive twice continuously differentiable function. The paper also provides a Lagrange Multiplier (LM) approach to detect and identify a general type of outlier. Furthermore, fixed effects models and random effects models are discussed for identifying outliers, and the corresponding LM test statistics are given. The LM test statistics for an individual-based model to detect outliers are given as a particular case. Finally, the paper presents an application using panel data and explains the advantages of the proposed method.

  4. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  5. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...

  6. Variance decomposition-based sensitivity analysis via neural networks

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo

    2003-01-01

    This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for substituting the Monte Carlo simulation model with a fast, approximate algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.

  7. Models of Postural Control: Shared Variance in Joint and COM Motions.

    Directory of Open Access Journals (Sweden)

    Melissa C Kilby

    Full Text Available This paper investigated the organization of the postural control system in human upright stance. To this aim, the shared variance between joint and 3D total body center of mass (COM) motions was analyzed using multivariate canonical correlation analysis (CCA). The CCA was performed as a function of established models of postural control that varied in their joint degrees of freedom (DOF), namely, an inverted pendulum ankle model (2DOF), an ankle-hip model (4DOF), an ankle-knee-hip model (5DOF), and an ankle-knee-hip-neck model (7DOF). Healthy young adults performed various postural tasks (two-leg and one-leg quiet stances, voluntary AP and ML sway) on foam and rigid surfaces of support. Based on CCA model selection procedures, the amount of shared variance between joint and 3D COM motions, and the cross-loading patterns, we provide direct evidence of the contribution of multi-DOF postural control mechanisms to human balance. The direct model fitting of CCA showed that incrementing the DOFs in the model through to 7DOF was associated with progressively enhanced shared variance with COM motion. In the 7DOF model, the first canonical function revealed more active involvement of all joints during the more challenging one-leg stances and dynamic posture tasks. Furthermore, the shared variance was enhanced during the dynamic posture conditions, consistent with a reduction of dimension. This set of outcomes shows directly the degeneracy of multivariate joint regulation in postural control that is influenced by stance and surface-of-support conditions.
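    A rough sketch of the CCA machinery used in such an analysis, assuming the joint-angle and COM time series are available as column matrices; this is a generic whitened-cross-covariance implementation, not the authors' model-selection procedure:

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-8):
    """Canonical correlations between two multivariate time series X (n x p)
    and Y (n x q), from the SVD of the whitened cross-covariance matrix.
    A small ridge `reg` keeps the within-set covariances invertible."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root via the eigendecomposition of a
        # symmetric positive-definite matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    s = np.linalg.svd(M, compute_uv=False)
    return np.clip(s, 0.0, 1.0)
```

With a perfectly shared subspace (e.g. COM motion an exact linear function of the joint angles), the leading canonical correlations approach one.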

  8. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    Science.gov (United States)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    discharge is more affected by parameters from the whole upstream drainage area. Understanding model output variance behavior will have a direct impact on the design and performance of the ensemble-based data assimilation platform, for which uncertainties are also modeled by variances. It will help to select more objectively the RRM parameters to correct.

  9. Using the Superpopulation Model for Imputations and Variance Computation in Survey Sampling

    Directory of Open Access Journals (Sweden)

    Petr Novák

    2012-03-01

    Full Text Available This study is aimed at variance computation techniques for estimates of population characteristics based on survey sampling and imputation. We use the superpopulation regression model, which means that the target variable values for each statistical unit are treated as random realizations of a linear regression model with weighted variance. We focus on regression models with one auxiliary variable and no intercept, which have many applications and a straightforward interpretation in business statistics. Furthermore, we deal with cases where the estimates are not independent and thus the covariance must be computed. We also consider chained regression models with auxiliary variables as random variables instead of constants.

  10. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

    Variance-based sensitivity analysis has been widely studied and has asserted itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method in the estimation of variance-based sensitivity indices, and its convergence and other performance properties are investigated. Since the method heavily depends on the partition scheme, the influence of the partition scheme is discussed and the optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher-order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
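    A minimal sketch of the scatter-plot/space-partition idea for first-order indices: a single bunch of (input, output) samples is partitioned into bins of one input, and Var(E[Y|X_i]) is estimated from the between-bin variance of the conditional means. The equal-probability binning below is one simple partition scheme, not the optimal scheme derived in the paper:

```python
import numpy as np

def first_order_index_by_partition(x, y, n_bins=20):
    """Estimate the first-order Sobol' index S_i = Var(E[Y|X_i]) / Var(Y)
    from a single set of (x, y) samples by partitioning the range of x
    into equal-probability bins."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    overall_mean = y.mean()
    between_bin = 0.0
    for b in range(n_bins):
        members = y[idx == b]
        if members.size:
            # Weighted contribution of this bin's conditional mean.
            between_bin += members.size * (members.mean() - overall_mean) ** 2
    return between_bin / (y.size * y.var())
```

For an additive test function dominated by one input, the estimate from a single sample set should be close to the analytical index.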

  11. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    Science.gov (United States)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique, conditioned only on the scalar field, which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third statistical moments, spectra, and probability density functions of the scalar field, have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20 (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
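    In an a priori test of this kind, the exact subfilter scalar variance is obtained by filtering the resolved DNS field: var_sf = filter(z^2) - (filter(z))^2. A sketch using a simple periodic top-hat filter (the specific filter and grid are assumptions for illustration):

```python
import numpy as np

def box_filter(field, width):
    """Top-hat (box) filter of odd width applied along each axis,
    with periodic boundary conditions."""
    out = field.astype(float)
    kernel = np.ones(width) / width
    for axis in range(out.ndim):
        out = np.apply_along_axis(
            # Tile three periods so 'same' convolution sees periodic wrap,
            # then keep the central copy.
            lambda m: np.convolve(np.tile(m, 3), kernel, mode="same")[m.size:2 * m.size],
            axis, out)
    return out

def subfilter_scalar_variance(z, width):
    """Exact subfilter variance of scalar z for the chosen filter:
    var_sf = filter(z^2) - (filter(z))^2."""
    return box_filter(z * z, width) - box_filter(z, width) ** 2
```

Because the box filter is a convex average, the resulting field is non-negative pointwise (up to round-off), which is a useful sanity check for any model comparison.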

  12. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed times, so a modified early-late loop is used for the frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since they may be chosen within a wide range without strongly influencing system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
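    A simplified sketch of variance-based detection: compute the variance of a sliding detection window over the received samples and flag the frame start where the variance first exceeds a threshold. The actual scheme compares the variance at two delayed times in an early-late loop; the single-threshold rule below is a simplification for illustration:

```python
import numpy as np

def sliding_variance(x, win):
    """Variance of complex samples x over a sliding window of length win,
    computed from cumulative sums for O(n) cost."""
    x = np.asarray(x, dtype=complex)
    c1 = np.cumsum(np.insert(x, 0, 0.0))
    c2 = np.cumsum(np.insert(np.abs(x) ** 2, 0, 0.0))
    mean = (c1[win:] - c1[:-win]) / win
    power = (c2[win:] - c2[:-win]) / win
    return (power - np.abs(mean) ** 2).real

def detect_frame_start(x, win, threshold):
    """Earliest window position whose variance exceeds the threshold;
    -1 if no window does."""
    v = sliding_variance(x, win)
    hits = np.nonzero(v > threshold)[0]
    return int(hits[0]) if hits.size else -1
```

With silence followed by a unit-power signal, the detected position lands near the true transition, offset by at most roughly one window length.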

  13. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.

  14. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  15. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    . The genomic model commonly found in the literature, with marker effects affecting mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable...... of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted...... to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance is discussed and an extension of the McMC algorithm...

  16. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    Science.gov (United States)

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
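    A sketch of selecting the Box-Cox parameter by maximizing the normal profile log-likelihood over a grid, one way to locate the mode of the transformation parameter referred to above (the paper's analysis is Bayesian; the grid search below is an illustrative stand-in):

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 gives the log transform."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def boxcox_mle(y, grid=np.linspace(-2.0, 2.0, 401)):
    """Box-Cox parameter maximizing the normal profile log-likelihood,
    which includes the Jacobian term (lam - 1) * sum(log y)."""
    y = np.asarray(y, dtype=float)
    n, log_y_sum = y.size, np.sum(np.log(y))
    best_lam, best_ll = None, -np.inf
    for lam in grid:
        z = boxcox(y, lam)
        ll = -0.5 * n * np.log(z.var()) + (lam - 1.0) * log_y_sum
        if ll > best_ll:
            best_lam, best_ll = float(lam), float(ll)
    return best_lam
```

For data that are lognormal on the original scale, the selected parameter should sit near zero, i.e. the log transform, which is exactly the kind of scale change that can alter inferences about variance heterogeneity.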

  17. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
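    The key identity behind such a formula is that the expected squared innovation is linear in the ensemble sample variance: E[d^2 | s^2] = w * sigma_clim^2 + (1 - w) * s^2. A synthetic sketch, with all distributions and the weight w chosen purely for illustration, showing that the hybrid weight can be recovered from (observation-minus-forecast, ensemble-variance) pairs by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated (observation-minus-forecast, ensemble-variance) pairs in which
# the true error variance is a weighted average of a climatological variance
# and the ensemble sample variance (all values hypothetical).
n, clim_var = 200_000, 1.0
w_true = 0.4                                    # hybrid weight to recover
s2 = rng.gamma(shape=4.0, scale=0.25, size=n)   # ensemble sample variances
true_var = w_true * clim_var + (1.0 - w_true) * s2
d = rng.normal(0.0, np.sqrt(true_var))          # innovations

# Since E[d^2 | s2] = w*clim_var + (1-w)*s2, regress d^2 on [1, s2]
# to estimate the hybrid weight from a single archive of pairs.
A = np.column_stack([np.ones(n), s2])
coef, *_ = np.linalg.lstsq(A, d ** 2, rcond=None)
w_hat = 1.0 - coef[1]
```

The intercept estimates w * sigma_clim^2 and one minus the slope estimates w, which is the readily obtainable weight discussed in the abstract.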

  18. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Pindoriya, N.M.; Singh, S.N. [Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Singh, S.K. [Indian Institute of Management Lucknow, Lucknow 226013 (India)

    2010-10-15

    This paper proposes an approach for generation portfolio allocation based on a mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory, to deal with assets whose return distribution is non-normal. The MVS model allocates portfolios optimally by considering the maximization of both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a competing and conflicting non-smooth multi-objective optimization problem, this paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic technique to provide a Pareto-optimal solution in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method and the classical MV method is compared. It has been found that the MVS portfolio theory based method can provide significantly better portfolios in situations where non-normally distributed assets exist for trading. (author)

  19. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    International Nuclear Information System (INIS)

    Pindoriya, N.M.; Singh, S.N.; Singh, S.K.

    2010-01-01

    This paper proposes an approach for generation portfolio allocation based on a mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory, to deal with assets whose return distribution is non-normal. The MVS model allocates portfolios optimally by considering the maximization of both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a competing and conflicting non-smooth multi-objective optimization problem, this paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic technique to provide a Pareto-optimal solution in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method and the classical MV method is compared. It has been found that the MVS portfolio theory based method can provide significantly better portfolios in situations where non-normally distributed assets exist for trading. (author)
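    The three objectives of the MVS model are the first three moments of the portfolio return series. A sketch of the moment computation for a candidate weight vector (the MOPSO search itself is not reproduced here; weights and return data are placeholders):

```python
import numpy as np

def portfolio_mvs(weights, returns):
    """Mean, variance and skewness of the portfolio return for given asset
    weights. `returns` is a (T x n_assets) matrix of return observations."""
    w = np.asarray(weights, dtype=float)
    r_p = np.asarray(returns, dtype=float) @ w   # portfolio return series
    mean = r_p.mean()
    var = r_p.var()
    skew = np.mean((r_p - mean) ** 3) / var ** 1.5
    return mean, var, skew
```

A multi-objective optimizer would maximize the mean and skewness while minimizing the variance over the weight simplex; the variance returned here matches the quadratic form w' Sigma w with the (population) covariance of the assets.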

  20. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  1. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements of our method should be reduced.

  2. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.

  3. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and clarify how those distribution parameters influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the various-order variance contributions into expectations via a kernel function, the proposed main and total sensitivity indices can be seen as a “by-product” of Sobol′s variance-based sensitivity analysis without any additional output evaluations. Since Sobol′s variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative-based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method.

  4. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each level of return compared to the mean-variance approach.

  5. A more realistic estimate of the variances and systematic errors in spherical harmonic geomagnetic field models

    DEFF Research Database (Denmark)

    Lowes, F.J.; Olsen, Nils

    2004-01-01

    Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However...

  6. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
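    A sketch of the residual-spectrum analysis described above: estimate the periodogram of a data-minus-model residual series and fit a power-law exponent by least squares on log-log axes. The fitting band and the use of an unwindowed periodogram are simplifying assumptions:

```python
import numpy as np

def psd_loglog_slope(x, dt=1.0, n_fit=None):
    """Least-squares slope of the periodogram on log-log axes, a simple
    estimate of the power-law exponent of a residual spectrum."""
    x = np.asarray(x, dtype=float)
    n = x.size
    freqs = np.fft.rfftfreq(n, dt)[1:]                    # drop zero frequency
    psd = (np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2) * 2.0 * dt / n
    if n_fit is not None:                                 # optionally restrict band
        freqs, psd = freqs[:n_fit], psd[:n_fit]
    slope, _ = np.polyfit(np.log(freqs), np.log(psd), 1)
    return float(slope)
```

White-noise residuals give a slope near zero, while strongly accumulated (random-walk-like) residuals, as drag errors in orbital trajectories tend to be, give a steep negative exponent.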

  7. Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models

    Science.gov (United States)

    Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng

    2014-03-01

    Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated, and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between data fitting capability and sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).

  8. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    Science.gov (United States)

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design, given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  9. DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN VARIANCE MODEL

    Directory of Open Access Journals (Sweden)

    I GEDE ERY NISCAHYANA

    2016-08-01

    Full Text Available When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean-variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean-variance model for autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The margins of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) of FMII stock, 0.0473 (5%) of BNLI stock, 0% of SMDM stock, and 1% of SMGR stock.
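As a minimal sketch of the GARCH(1,1) machinery this record relies on (parameter values, simulated returns and standard normal innovations are invented for the illustration; this is not the thesis's estimation for the four stocks):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = omega / (1.0 - alpha - beta)  # start at unconditional variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(1)
# Simulate returns from the same recursion with standard normal innovations.
omega, alpha, beta = 0.05, 0.08, 0.90
n = 1000
r = np.empty(n)
s2 = omega / (1.0 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

sigma2 = garch11_variance(r, omega, alpha, beta)
```

In practice the parameters (and the innovation distribution, normal vs. t) would be estimated by maximum likelihood, as in the record; here they are fixed to keep the recursion itself visible.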

  10. On estimation of the noise variance in high-dimensional linear models

    OpenAIRE

    Golubev, Yuri; Krymova, Ekaterina

    2017-01-01

    We consider the problem of recovering the unknown noise variance in the linear regression model. To estimate the nuisance (a vector of regression coefficients) we use a family of spectral regularisers of the maximum likelihood estimator. The noise estimation is based on the adaptive normalisation of the squared error. We derive the upper bound for the concentration of the proposed method around the ideal estimator (the case of zero nuisance).
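In the classical low-dimensional regime, the record's target quantity has a textbook estimator: the degrees-of-freedom-corrected residual variance. A small synthetic sketch (all numbers invented; none of the record's spectral regularisation machinery is involved):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma = 500, 5, 1.3
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(0.0, sigma, n)

# OLS fit of the nuisance, then the unbiased noise-variance estimate
# with the degrees-of-freedom correction n - p.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p)
```

When p grows with n this simple correction breaks down, which is what motivates the adaptive normalisation studied in the record.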

  11. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons....... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...... for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...

  12. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    Yu, Zuwei

    2003-01-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets
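The continuous core of the record's MIP is the Markowitz mean-variance problem. A hedged sketch of the closed-form global minimum-variance portfolio for three hypothetical markets (the covariance matrix and expected returns are invented; the paper's transaction costs, wheeling administration and integer constraints are omitted):

```python
import numpy as np

# Illustrative covariance matrix and expected returns for three markets.
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.12]])
mu = np.array([0.06, 0.05, 0.09])

ones = np.ones(3)
Sinv = np.linalg.inv(Sigma)

# Global minimum-variance weights under full investment (weights sum to 1):
# w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
w_mv = Sinv @ ones / (ones @ Sinv @ ones)
var_mv = w_mv @ Sigma @ w_mv
ret_mv = w_mv @ mu
```

Sweeping a target-return constraint over this quadratic program traces the smooth, concave efficient frontier that, per the record, the MIP version no longer exhibits once the practical constraints are added.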

  16. Prediction error variance and expected response to selection, when selection is based on the best predictor - for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    DEFF Research Database (Denmark)

    Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed...... or random effects). In the different models, expressions are given (when these can be found - otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non...... Gaussian traits are generalisations of the well-known formulas for Gaussian traits - and reflect, for Poisson mixed models and frailty models for survival data, the hierarchal structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part...

  17. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits

    DEFF Research Database (Denmark)

    Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes

    2017-01-01

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci...... of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we...... developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...

  18. Modelling Changes in the Unconditional Variance of Long Stock Return Series

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long return series. For the purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta (2011...... show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecast accuracy of the new model over the GJR-GARCH model at all...... horizons for a subset of the long return series....

  19. Modelling changes in the unconditional variance of long stock return series

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    2014-01-01

    In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long daily return series. For this purpose we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta...... that the apparent long memory property in volatility may be interpreted as changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecasting accuracy of the new model over the GJR-GARCH model at all horizons for eight...... subsets of the long return series....

  20. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
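A hedged sketch of a line-length-weighted, design-based encounter rate variance estimator of the general form compared in this record (treating the K transects as a simple random sample of lines; the counts and line lengths below are invented):

```python
import numpy as np

def encounter_rate_var(n_k, l_k):
    """Design-based variance estimator for the encounter rate n/L from
    per-transect detection counts n_k and line lengths l_k, weighting
    each transect's deviation from the overall rate by its length."""
    n_k = np.asarray(n_k, dtype=float)
    l_k = np.asarray(l_k, dtype=float)
    K = len(n_k)
    L = l_k.sum()
    er = n_k.sum() / L  # overall encounter rate n/L
    return K / (L ** 2 * (K - 1)) * np.sum(l_k ** 2 * (n_k / l_k - er) ** 2)

# Example: 5 transects with detection counts and line lengths (km).
counts = [12, 7, 15, 9, 11]
lengths = [4.0, 3.5, 5.0, 4.2, 4.3]
v = encounter_rate_var(counts, lengths)
```

If every transect has the same per-length rate the estimate is zero, which matches the intuition that all variance here comes from between-transect differences in encounter rate.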

  1. Yield response of winter wheat cultivars to environments modeled by different variance-covariance structures in linear mixed models

    Energy Technology Data Exchange (ETDEWEB)

    Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.

    2016-11-01

    The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means, resulting in incorrect conclusions. In this work we focused on the adaptive response of cultivars to the environments modeled by LMMs with different variance-covariance structures, and identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations during three growing seasons (2008/2009-2010/2011), from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivar adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars' reaction to environments, and it can be successfully used in METs data after determining the optimal component number for each dataset. (Author)

  2. Variance computations for functionals of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
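The bootstrap comparator mentioned above is easy to sketch. Here the statistic is simply the mean of invented individual risk values, chosen so that the bootstrap variance can be checked against the closed-form Var(mean) = Var(X)/n (this is an illustration of the comparison, not the record's influence-function method):

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative individual absolute-risk estimates (hypothetical data).
risks = rng.beta(2.0, 20.0, size=300)

def bootstrap_var(stat, data, n_boot=2000, seed=4):
    """Nonparametric bootstrap variance of a statistic: resample the data
    with replacement, recompute the statistic, take the sample variance."""
    r = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([stat(data[r.integers(0, n, n)]) for _ in range(n_boot)])
    return reps.var(ddof=1)

v_boot = bootstrap_var(np.mean, risks)
# Analytical benchmark for the mean: Var(X-bar) = Var(X) / n.
v_formula = risks.var(ddof=1) / len(risks)
```

For functionals without a closed form, the bootstrap and an influence-function delta-method estimate are the two standard options the record compares.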

  3. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
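A hedged sketch of a broken-line linear (linear-plateau) fit via the kind of grid search over the breakpoint that the record describes, on synthetic data with a known breakpoint (all values are invented; this omits the study's mixed-model structure and heteroskedastic error specification):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic dose-response: the outcome rises linearly with the dose and
# plateaus at a breakpoint of 16.5 (numbers invented for illustration).
x = np.repeat(np.arange(14.0, 20.0, 0.5), 8)
true_bk, plateau, slope = 16.5, 0.70, 0.02
y = plateau - slope * np.clip(true_bk - x, 0.0, None) + rng.normal(0.0, 0.005, x.size)

def fit_broken_line(x, y, candidates):
    """Profile the breakpoint over a grid; at each candidate bk the model
    y = plateau - slope * max(bk - x, 0) is linear in (plateau, slope),
    so ordinary least squares handles the remaining two parameters."""
    best = (np.inf, None, None)
    for bk in candidates:
        Z = np.column_stack([np.ones_like(x), -np.clip(bk - x, 0.0, None)])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        sse = np.sum((y - Z @ coef) ** 2)
        if sse < best[0]:
            best = (sse, bk, coef)
    return best

sse, bk_hat, (plateau_hat, slope_hat) = fit_broken_line(
    x, y, np.arange(15.0, 18.01, 0.05))
```

Profiling over a grid of breakpoints mirrors the grid search of starting values the record uses to stabilize nonlinear mixed-model convergence, with OLS standing in for the inner estimation step.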

  4. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    Science.gov (United States)

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.

  5. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  6. Multilevel models for multiple-baseline data: modeling across-participant variation in autocorrelation and residual variance.

    Science.gov (United States)

    Baek, Eun Kyeng; Ferron, John M

    2013-03-01

    Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.

  7. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic....... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios....

  8. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH...... models. An application to exchange rate returns is included....

  9. Analysis of Gene Expression Variance in Schizophrenia Using Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Anna A. Igolkina

    2018-06-01

    Full Text Available Schizophrenia (SCZ) is a psychiatric disorder of unknown etiology. There is evidence suggesting that aberrations in neurodevelopment are a significant attribute of schizophrenia pathogenesis and progression. To identify biologically relevant molecular abnormalities affecting neurodevelopment in SCZ, we used cultured neural progenitor cells derived from olfactory neuroepithelium (CNON) cells. Here, we tested the hypothesis that variance in gene expression differs between individuals from SCZ and control groups. In CNON cells, variance in gene expression was significantly higher in SCZ samples in comparison with control samples. Variance in gene expression was enriched in five molecular pathways: serine biosynthesis, PI3K-Akt, MAPK, neurotrophin and focal adhesion. More than 14% of variance in disease status was explained within the logistic regression model (C-value = 0.70) by predictors accounting for gene expression in 69 genes from these five pathways. Structural equation modeling (SEM) was applied to explore how the structure of these five pathways was altered between SCZ patients and controls. Four out of five pathways showed differences in the estimated relationships among genes: between KRAS and NF1, and KRAS and SOS1 in the MAPK pathway; between PSPH and SHMT2 in serine biosynthesis; between AKT3 and TSC2 in the PI3K-Akt signaling pathway; and between CRK and RAPGEF1 in the focal adhesion pathway. Our analysis provides evidence that variance in gene expression is an important characteristic of SCZ, and SEM is a promising method for uncovering altered relationships between specific genes, thus suggesting affected gene regulation associated with the disease. We identified altered gene-gene interactions in pathways enriched for genes with increased variance in expression in SCZ. These pathways and loci were previously implicated in SCZ, providing further support for the hypothesis that gene expression variance plays an important role in the etiology of SCZ.

  11. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1), 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model in providing useful insights. (author)

  12. Geometric representation of the mean-variance-skewness portfolio frontier based upon the shortage function

    OpenAIRE

    Kerstens, Kristiaan; Mounier, Amine; Van de Woestyne, Ignace

    2008-01-01

    The literature suggests that investors prefer portfolios based on mean, variance and skewness rather than portfolios based on mean-variance (MV) criteria solely. Furthermore, a small variety of methods have been proposed to determine mean-variance-skewness (MVS) optimal portfolios. Recently, the shortage function has been introduced as a measure of efficiency, allowing one to characterize MVS optimal portfolios using non-parametric mathematical programming tools. While tracing the MV portfolio fro...

  13. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
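A hedged pick-freeze sketch of the first-order variance-based indices this record reviews, on an additive test model whose analytical first-order indices are a_i^2 / sum(a_j^2) (the model, coefficients and sample sizes are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100_000

def model(X):
    # Additive test model: its first-order Sobol indices are known
    # analytically, S_i = a_i^2 / sum(a_j^2), so the estimate is checkable.
    a = np.array([4.0, 2.0, 1.0])
    return X @ a

# Two independent input-sample matrices for the pick-freeze construction.
A = rng.normal(size=(N, 3))
B = rng.normal(size=(N, 3))
yA = model(A)
yB = model(B)
varY = yA.var()

S = np.empty(3)
for i in range(3):
    # Pick-freeze: replace column i of B with column i of A.
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    # First-order index estimator: Cov(Y_A, Y_ABi) / Var(Y).
    S[i] = np.mean(yA * (model(ABi) - yB)) / varY
```

For this linear model the indices coincide with the squared normalized sensitivity coefficients of the law of propagation of uncertainties, illustrating the record's point that sensitivity analysis adds information mainly when the model is non-linear.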

  14. Prediction error variance and expected response to selection, when selection is based on the best predictor – for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2002-05-01

    Full Text Available Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found – otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits – and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model, or a generalised version of heritability) plays a central role in these formulas.

  15. Tip displacement variance of manipulator to simultaneous horizontal and vertical stochastic base excitations

    International Nuclear Information System (INIS)

    Rahi, A.; Bahrami, M.; Rastegar, J.

    2002-01-01

    The tip displacement variance of an articulated robotic manipulator subject to simultaneous horizontal and vertical stochastic base excitation is studied. The dynamic equations for an n-link manipulator subjected to both horizontal and vertical stochastic excitations are derived by the Lagrangian method and decoupled for small displacements of the joints. The dynamic response covariance of the manipulator links is computed in the coordinate frame attached to the base, and the principal variance of the tip displacement is then determined. Finally, a simulation for a two-link planar robotic manipulator under base excitation is developed, and the sensitivity of the principal variance of tip displacement and tip velocity to the manipulator configuration, damping, excitation parameters and link lengths is investigated.

  16. Mixed model with spatial variance-covariance structure for accommodating of local stationary trend and its influence on multi-environmental crop variety trial assessment

    Energy Technology Data Exchange (ETDEWEB)

    Negash, A. W.; Mwambi, H.; Zewotir, T.; Eweke, G.

    2014-06-01

    The most common procedure for analyzing multi-environmental trials is based on the assumption that the residual error variance is homogeneous across all locations considered. However, this is often unrealistic and may therefore limit the accuracy of variety evaluation or the reliability of variety recommendations. The objectives of this study were to show the advantages of mixed models with spatial variance-covariance structures, and the direct implications of model choice on the inference of varietal performance, ranking and testing, based on two multi-environmental data sets from realistic national trials. A model comparison with a χ²-test for the trials in the two data sets (wheat data set BW00RVTI and barley data set BW01RVII) suggested that selected spatial variance-covariance structures fitted the data significantly better than the ANOVA model. The forms of the optimally fitted spatial variance-covariance structure, the ranking and the consistency ratio test were not the same from one trial (location) to another. Linear mixed models with single-stage analysis, including a spatial variance-covariance structure with a group factor of location in the random part of the model, also improved the estimation of genotype effects and their ranking. The model improved varietal performance estimation because of its capacity to handle additional sources of variation, location and genotype-by-location (environment) interaction variation, and to accommodate local stationary trends. (Author)

  17. A Fay-Herriot Model with Different Random Effect Variances

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.

    2011-01-01

    Roč. 40, č. 5 (2011), s. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf

  18. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner.
It turns out that the computational cost of the
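As a minimal illustration of the variance-based indices these techniques approximate, the first-order Sobol indices of a toy two-input model (not the MAPK system of the paper) can be estimated by plain Monte Carlo sampling with a pick-and-freeze estimator; the model, sample size, and seed below are arbitrary choices:

```python
import random

def f(x1, x2):
    # toy additive model with known answers: S1 = 0.2, S2 = 0.8
    return x1 + 2.0 * x2

def sobol_first_order(n=50_000, seed=1):
    """Estimate first-order Sobol indices with the classic
    pick-and-freeze Monte Carlo estimator (Saltelli-style)."""
    rng = random.Random(seed)
    a = [(rng.random(), rng.random()) for _ in range(n)]   # sample matrix A
    b = [(rng.random(), rng.random()) for _ in range(n)]   # sample matrix B
    ya = [f(*p) for p in a]
    yb = [f(*p) for p in b]
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n             # total output variance
    indices = []
    for i in range(2):
        # A_B^(i): matrix A with column i taken from B
        yab = [f(b[j][0], a[j][1]) if i == 0 else f(a[j][0], b[j][1])
               for j in range(n)]
        num = sum(yb[j] * (yab[j] - ya[j]) for j in range(n)) / n
        indices.append(num / var)
    return indices
```

For this additive model the two indices sum to one, which is exactly the situation where Monte Carlo sampling is most easily replaced by the cheaper analytical approximations the paper compares.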

  19. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    , the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean-variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative... be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computationally demanding methods such as the certainty equivalence method, and as an individual control strategy when...

  20. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil... Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...

  1. DIFFERENCES BETWEEN MEAN-VARIANCE AND MEAN-CVAR PORTFOLIO OPTIMIZATION MODELS

    Directory of Open Access Journals (Sweden)

    Panna Miskolczi

    2016-07-01

    Full Text Available Everybody has heard that one should not expect high returns without high risk, or safety without low returns. The goal of portfolio theory is to find the balance between maximizing return and minimizing risk. To do so, we first have to understand and measure risk. Naturally, a good risk measure has to satisfy several properties, in theory and in practice. Markowitz suggested using the variance as a risk measure in portfolio theory. This led to the so-called mean-variance model, for which Markowitz received the Nobel Prize in 1990. The model has been criticized because it is well suited for elliptical distributions but may lead to incorrect conclusions in the case of non-elliptical distributions. Since then many risk measures have been introduced, of which Value at Risk (VaR) has been the most widely used in recent years. Despite its widespread use, there are some fundamental problems with Value at Risk. It does not satisfy the subadditivity property, and it ignores the severity of losses in the far tail of the profit-and-loss (P&L) distribution. Moreover, its non-convexity makes VaR impossible to use in optimization problems. To overcome these issues, Expected Shortfall (ES) was developed as a coherent risk measure. Expected Shortfall is also called Conditional Value at Risk (CVaR). Compared to Value at Risk, ES is more sensitive to the tail behaviour of the P&L distribution function. In the first part of the paper I state the definitions of these three risk measures. In the second part I deal with my main question: what happens if we replace the variance with Expected Shortfall in the portfolio optimization process? Do we obtain different optimal portfolios as a solution? And thus, does the solution suggest deciding differently in the two cases? To answer these questions I analyse seven Hungarian stock exchange companies. 
First I use the mean-variance portfolio optimization model
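The contrast between VaR and ES discussed above can be sketched with a historical (empirical-quantile) estimator on a hypothetical P&L sample; the 95% level and the data are illustrative, not the paper's Hungarian stock data:

```python
def var_es(pnl, alpha=0.95):
    """Historical Value at Risk and Expected Shortfall of a P&L sample
    (losses counted as positive numbers)."""
    losses = sorted(-p for p in pnl)
    k = round(alpha * len(losses))        # index of the alpha-quantile
    var = losses[k]
    tail = losses[k:]                     # losses at or beyond VaR
    es = sum(tail) / len(tail)            # average of the far tail
    return var, es
```

Because ES averages the whole tail beyond the quantile, it is never smaller than VaR and reacts to how severe the extreme losses are, which is exactly the tail-sensitivity the abstract describes.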

  2. Model Optimisasi Portofolio Investasi Mean-Variance Tanpa dan Dengan Aset Bebas Risiko pada Saham Idx30

    Directory of Open Access Journals (Sweden)

    Basuki Basuki

    2017-07-01

    Full Text Available In this paper, the Mean-Variance investment portfolio optimization model without a risk-free asset, also known as the basic Markowitz model, is studied to obtain the optimum portfolio. Building on the basic Markowitz model, the Mean-Variance model with a risk-free asset is then investigated. Both models are used to analyse investment portfolio optimization for several IDX30 stocks. In this paper it is assumed that a proportion of 10% is invested in the risk-free asset, namely a deposit yielding a return of 7% per year. The analysis of portfolio optimization for the five selected stocks shows that the efficient frontier of the Mean-Variance portfolio optimization with a risk-free asset lies above that of the Mean-Variance optimization without a risk-free asset. This indicates that an investment portfolio combining a risk-free asset with risky assets is more profitable than a portfolio consisting of risky assets alone.
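A minimal sketch of the Mean-Variance model with a risk-free asset: the risky part of the optimal portfolio is the tangency portfolio, w ∝ Σ⁻¹(μ − r_f·1). The two risky assets below are hypothetical (not the paper's IDX30 data); only the 7% risk-free rate echoes the abstract:

```python
# Hypothetical two-asset example (NOT the IDX30 data of the paper);
# only the 7% risk-free rate is taken from the abstract.
mu = [0.12, 0.18]            # assumed expected annual returns
cov = [[0.04, 0.012],
       [0.012, 0.09]]        # assumed annual covariance matrix
rf = 0.07                    # risk-free (deposit) return

def tangency_weights(mu, cov, rf):
    """Risky-asset weights of the tangency portfolio,
    w proportional to inv(Sigma) @ (mu - rf), normalised to sum
    to one (explicit 2x2 case, no linear-algebra library needed)."""
    ex = [m - rf for m in mu]                       # excess returns
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[ cov[1][1] / det, -cov[0][1] / det],    # explicit 2x2 inverse
           [-cov[1][0] / det,  cov[0][0] / det]]
    raw = [inv[0][0] * ex[0] + inv[0][1] * ex[1],
           inv[1][0] * ex[0] + inv[1][1] * ex[1]]
    total = raw[0] + raw[1]
    return [r / total for r in raw]
```

Mixing these weights with a 10% risk-free allocation, as assumed in the abstract, traces out the capital allocation line that sits above the risky-only efficient frontier.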

  3. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated in different situations, and the various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
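The first two estimation tasks listed above can be sketched as follows; the grouping convention and the sample values are hypothetical:

```python
from statistics import mean, variance

def random_error_variance(replicate_groups):
    """Pooled within-group sample variance: each group holds repeat
    measurements of one item, so within-group spread reflects random
    measurement error only."""
    ss = sum((len(g) - 1) * variance(g) for g in replicate_groups)
    dof = sum(len(g) - 1 for g in replicate_groups)
    return ss / dof

def systematic_error_from_standards(measured, reference):
    """Average bias of repeated measurements on a certified standard,
    an estimate of the systematic error component."""
    return mean(m - reference for m in measured)
```

Pooling across groups is what recovers the degrees of freedom that a single pair of replicates lacks; the systematic component is estimated separately from standards because replicates of the same item cannot reveal a constant bias.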

  4. Diffusion-Based Trajectory Observers with Variance Constraints

    DEFF Research Database (Denmark)

    Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo

    Diffusion-based trajectory observers have recently been proposed as a simple and efficient framework to solve diverse smoothing problems in underwater navigation, for instance, to obtain estimates of the trajectories of an underwater vehicle given position fixes from an acoustic positioning system... of smoothing and is determined by resorting to trial and error. This paper presents a methodology for choosing the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented...

  5. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  6. Variance-based selection may explain general mating patterns in social insects.

    Science.gov (United States)

    Rueppell, Olav; Johnson, Nels; Rychtár, Jan

    2008-06-23

    Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.

  7. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal-variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal-variance normal model and other generalized signal detection models are given. The approach offers a convenient means of applying signal detection theory to a variety of research.

  8. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    Science.gov (United States)

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  9. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  10. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed... in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis... in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected...

  11. Variance squeezing and entanglement of the XX central spin model

    International Nuclear Information System (INIS)

    El-Orany, Faisal A A; Abdalla, M Sebawe

    2011-01-01

    In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are observed in the behavior of all components of the system. The atomic variance can exhibit a revival-collapse phenomenon depending on the value of the detuning parameter.

  12. Variance squeezing and entanglement of the XX central spin model

    Energy Technology Data Exchange (ETDEWEB)

    El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2011-01-21

    In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are observed in the behavior of all components of the system. The atomic variance can exhibit a revival-collapse phenomenon depending on the value of the detuning parameter.

  13. Stochastic Fractional Programming Approach to a Mean and Variance Model of a Transportation Problem

    Directory of Open Access Journals (Sweden)

    V. Charles

    2011-01-01

    Full Text Available In this paper, we propose a stochastic programming model that considers a ratio of two nonlinear functions and probabilistic constraints. Previously, only the expected-value model had been proposed, without accounting for variability; in the variance model, on the other hand, variability played a vital role without concern for its counterpart, the expected-value model. Further, the expected-value model optimizes the ratio of two linear cost functions, whereas the variance model optimizes the ratio of two nonlinear functions; the stochastic nature of the denominator and numerator, together with the consideration of both expectation and variability, leads to a nonlinear fractional program. In this paper, a transportation model based on a stochastic fractional programming (SFP) approach is proposed, which strikes a balance between the previous models available in the literature.

  14. A family-based joint test for mean and variance heterogeneity for quantitative traits.

    Science.gov (United States)

    Cao, Ying; Maxwell, Taylor J; Wei, Peng

    2015-01-01

    Traditional quantitative trait locus (QTL) analysis focuses on identifying loci associated with mean heterogeneity. Recent research has discovered loci associated with phenotype variance heterogeneity (vQTL), which is important in studying genetic association with complex traits, especially for identifying gene-gene and gene-environment interactions. While several tests have been proposed to detect vQTL for unrelated individuals, there are no tests for related individuals, commonly seen in family-based genetic studies. Here we introduce a likelihood ratio test (LRT) for identifying mean and variance heterogeneity simultaneously or for either effect alone, adjusting for covariates and family relatedness using a linear mixed effect model approach. The LRT test statistic for normally distributed quantitative traits approximately follows χ²-distributions. To correct for inflated Type I error for non-normally distributed quantitative traits, we propose a parametric bootstrap-based LRT that removes the best linear unbiased prediction (BLUP) of the family random effect. Simulation studies show that our family-based test controls Type I error and has good power, while Type I error inflation is observed when family relatedness is ignored. We demonstrate the utility and efficiency gains of the proposed method using data from the Framingham Heart Study to detect loci associated with body mass index (BMI) variability. © 2014 John Wiley & Sons Ltd/University College London.
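The variance-heterogeneity part of such an LRT can be sketched in a stripped-down two-group setting (ignoring covariates and family relatedness, which the paper's mixed-model version handles): compare the maximised normal likelihood under a common variance against group-specific variances.

```python
import math
from statistics import mean

def loglik_normal(xs, mu, var):
    # log-likelihood of xs under N(mu, var)
    n = len(xs)
    return (-0.5 * n * math.log(2 * math.pi * var)
            - sum((x - mu) ** 2 for x in xs) / (2 * var))

def lrt_variance_heterogeneity(g1, g2):
    """H0: one common variance (group means free);
    H1: group-specific variances. Uses maximum-likelihood variances;
    the statistic is ~ chi-squared with 1 df under H0 for normal data."""
    mu1, mu2 = mean(g1), mean(g2)
    ss1 = sum((x - mu1) ** 2 for x in g1)
    ss2 = sum((x - mu2) ** 2 for x in g2)
    v1, v2 = ss1 / len(g1), ss2 / len(g2)
    v0 = (ss1 + ss2) / (len(g1) + len(g2))
    ll0 = loglik_normal(g1, mu1, v0) + loglik_normal(g2, mu2, v0)
    ll1 = loglik_normal(g1, mu1, v1) + loglik_normal(g2, mu2, v2)
    return 2 * (ll1 - ll0)
```

With identical groups the statistic is zero; a group with much larger spread pushes it well past the 3.84 critical value of χ² with 1 df.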

  15. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  16. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  17. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  18. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  19. Investor preferences for oil spot and futures based on mean-variance and stochastic dominance

    NARCIS (Netherlands)

    H.H. Lean (Hooi Hooi); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    This paper examines investor preferences for oil spot and futures based on mean-variance (MV) and stochastic dominance (SD). The mean-variance criterion cannot distinguish the preferences for spot and futures markets, whereas the SD tests lead to the conclusion that spot dominates futures in the downside

  20. Complementary responses to mean and variance modulations in the perfect integrate-and-fire model.

    Science.gov (United States)

    Pressley, Joanna; Troyer, Todd W

    2009-07-01

    In the perfect integrate-and-fire model (PIF), the membrane voltage is proportional to the integral of the input current since the time of the previous spike. It has been shown that the firing rate within a noise free ensemble of PIF neurons responds instantaneously to dynamic changes in the input current, whereas in the presence of white noise, model neurons preferentially pass low frequency modulations of the mean current. Here, we prove that when the input variance is perturbed while holding the mean current constant, the PIF responds preferentially to high frequency modulations. Moreover, the linear filters for mean and variance modulations are complementary, adding exactly to one. Since changes in the rate of Poisson distributed inputs lead to proportional changes in the mean and variance, these results imply that an ensemble of PIF neurons transmits a perfect replica of the time-varying input rate for Poisson distributed input. A more general argument shows that this property holds for any signal leading to proportional changes in the mean and variance of the input current.
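The PIF mechanism described above is easy to simulate: since the steady firing rate of a PIF neuron equals I/θ, doubling a constant mean current roughly doubles the spike count, illustrating the instantaneous rate response. A minimal discrete-time sketch (the time step, threshold, and currents below are arbitrary choices, and noise is omitted):

```python
def pif_spike_times(current, dt, theta=1.0):
    """Perfect integrate-and-fire: the voltage integrates the input
    current, and the neuron spikes and resets to the residual value
    whenever the threshold theta is reached."""
    v, t, spikes = 0.0, 0.0, []
    for i_t in current:
        v += i_t * dt
        t += dt
        while v >= theta:       # allow several crossings per step
            v -= theta
            spikes.append(t)
    return spikes
```

The same loop could be run over an ensemble with added input noise to probe the low-pass (mean) versus high-pass (variance) filtering contrast the paper proves.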

  1. A random variance model for detection of differential gene expression in small microarray experiments.

    Science.gov (United States)

    Wright, George W; Simon, Richard M

    2003-12-12

    Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene by gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model by which the within gene variances are drawn from an inverse gamma distribution, whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
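The effect of such an inverse-gamma prior on within-gene variances is a shrinkage of each gene's sample variance toward a prior value, weighted by degrees of freedom; the formula below is a generic moderated-variance sketch with hypothetical constants, not the paper's fitted parameters:

```python
def moderated_variance(sample_var, df, prior_var, prior_df):
    """Shrink a single gene's sample variance toward the prior variance
    implied by an inverse-gamma model: a weighted average whose weights
    are the residual and prior degrees of freedom."""
    return (prior_df * prior_var + df * sample_var) / (prior_df + df)
```

Using this shrunken variance in place of the raw gene-wise variance is what stabilises the denominator of the test statistic when each gene has only a handful of replicates.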

  2. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    Science.gov (United States)

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
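
    A minimal sketch of the segmentation idea: split the intensity histogram at mean−σ, mean, and mean+σ and equalize each segment within its own range. The paper's bin-modification and output-integration steps are omitted, and the function names are mine.

```python
import statistics

def equalize_segment(pixels, lo, hi):
    """Map values falling in [lo, hi) onto [lo, hi) via their in-segment CDF."""
    seg = [p for p in pixels if lo <= p < hi]
    if not seg:
        return {}
    counts = {}
    for p in seg:
        counts[p] = counts.get(p, 0) + 1
    mapping, cum = {}, 0
    for v in sorted(counts):
        cum += counts[v]
        mapping[v] = lo + (hi - lo) * cum / len(seg)
    return mapping

def mvsihe_sketch(pixels):
    """Four-segment histogram equalization split at mean-s, mean, mean+s."""
    m = statistics.mean(pixels)
    s = statistics.pstdev(pixels)
    cuts = sorted({min(pixels), m - s, m, m + s, max(pixels) + 1})
    mapping = {}
    for lo, hi in zip(cuts, cuts[1:]):
        mapping.update(equalize_segment(pixels, lo, hi))
    return [mapping[p] for p in pixels]

image = [10, 12, 12, 40, 41, 43, 100, 101, 200, 210]  # toy grayscale values
out = mvsihe_sketch(image)
```

    Because each segment is equalized only within its own sub-range, the global ordering of intensities is preserved, which is what keeps the overall brightness from drifting.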

  5. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...

  6. Relative variance of the mean-squared pressure in multimode media: rehabilitating former approaches.

    Science.gov (United States)

    Monsef, Florian; Cozza, Andrea; Rodrigues, Dominique; Cellard, Patrick; Durocher, Jean-Noel

    2014-11-01

    The commonly accepted model for the relative variance of transmission functions in room acoustics, derived by Weaver, aims at including the effects of correlation between eigenfrequencies. This model is based on an analytical expression of the relative variance derived by means of an approximated correlation function. The relevance of the approximation used for modeling such correlation is questioned here. Weaver's model was motivated by the fact that earlier models derived by Davy and Lyon assumed independent eigenfrequencies and led to an overestimation with respect to relative variances found in practice. It is shown here that this overestimation is due to an inadequate truncation of the modal expansion, and to an improper choice of the frequency range over which ensemble averages of the eigenfrequencies are defined. An alternative definition is proposed, settling the inconsistency; predicted relative variances are found to be in good agreement with experimental data. These results rehabilitate former approaches that were based on independence assumptions between eigenfrequencies. Some former studies showed that simpler correlation models could be used to predict the statistics of some field-related physical quantity at low modal overlap. The present work confirms that this is also the case when dealing with transmission functions.

  7. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  8. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  9. Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method

    Directory of Open Access Journals (Sweden)

    Younes Elahi

    2014-01-01

    We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have examined optimal MVC portfolio models, the linear weighted sum method (LWSM) had not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, this approach is investigated for investing in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using three objective functions helps investors to manage their portfolio better and thereby minimize the risk and maximize the return of the portfolio. The main goal of this study is to modify the current models and simplify them by using LWSM to obtain better results.

  10. Multi-period mean–variance portfolio optimization based on Monte-Carlo simulation

    NARCIS (Netherlands)

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2016-01-01

    We propose a simulation-based approach for solving the constrained dynamic mean–variance portfolio management problem. For this dynamic optimization problem, we first consider a sub-optimal strategy, called the multi-stage strategy, which can be utilized in a forward fashion. Then,

  11. Sensitivity analysis using contribution to sample variance plot: Application to a water hammer model

    International Nuclear Information System (INIS)

    Tarantola, S.; Kopustinskas, V.; Bolado-Lavin, R.; Kaliatka, A.; Ušpuras, E.; Vaišnoras, M.

    2012-01-01

    This paper presents the “contribution to sample variance plot”, a natural extension of the “contribution to the sample mean plot”, which is a graphical tool for global sensitivity analysis originally proposed by Sinclair. These graphical tools have great potential for displaying sensitivity information graphically, given a generic input sample and its related model realizations. The contribution to the sample variance can be obtained at no extra computational cost, i.e. from the same points used for deriving the contribution to the sample mean and/or scatter-plots. The proposed approach effectively instructs the analyst on how to achieve a targeted reduction of the variance by operating on the extremes of the input parameters' ranges. The approach is tested against a known benchmark for sensitivity studies, the Ishigami test function, and a numerical model simulating the behaviour of a water hammer effect in a piping system.
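
    One common way to build such a plot is to order the model realizations by the value of one input and accumulate each realization's squared deviation from the output mean; the helper below is an illustrative sketch of that idea, not the authors' exact definition.

```python
import random

def contribution_to_sample_variance(x, y):
    """Cumulative share of the sample variance of y, with realizations
    ordered by the value of input x (one common CSV-plot definition)."""
    ybar = sum(y) / len(y)
    total = sum((v - ybar) ** 2 for v in y)
    cum, csv = 0.0, []
    for _, v in sorted(zip(x, y)):
        cum += (v - ybar) ** 2
        csv.append(cum / total)
    return csv

random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(1000)]
y = [u ** 2 for u in x]  # toy model: output driven entirely by this input
csv = contribution_to_sample_variance(x, y)
```

    For this toy model the extremes of x contribute the steepest portions of the curve, which is exactly the information the analyst uses to target a variance reduction by trimming the input's range.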

  12. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
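
    The kind of error examined can be reproduced on a two-input AND gate (top event probability P = p1·p2 with independent inputs): the usual first-order moment-propagation formula drops the v1·v2 cross term that the exact variance of a product retains. The numbers below are illustrative, not taken from the paper.

```python
def product_variance_exact(m1, v1, m2, v2):
    """Exact variance of X*Y for independent X and Y."""
    return (m1 ** 2 + v1) * (m2 ** 2 + v2) - (m1 * m2) ** 2

def product_variance_first_order(m1, v1, m2, v2):
    """First-order approximation: drops the v1*v2 cross term."""
    return m2 ** 2 * v1 + m1 ** 2 * v2

# illustrative failure-probability means and variances for the two inputs
m1, v1 = 1e-2, 4e-5
m2, v2 = 5e-3, 1e-5

exact = product_variance_exact(m1, v1, m2, v2)
approx = product_variance_first_order(m1, v1, m2, v2)
rel_err = (exact - approx) / exact
```

    The gap equals v1·v2 exactly, so the approximation error grows when input variances are large relative to the squared means, which is typical for rare failure events.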

  13. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in the degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than that of the most extreme low families. For cross-sectional data, a DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  14. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index: it studies how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and it measures the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating the model output variance under reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure
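
    A Monte Carlo sketch of the classical variance-ratio idea (the revised functions of the paper additionally track the shift of the output mean); all names and numbers here are illustrative.

```python
import random
import statistics

def variance_ratio(x, y, lo_q, hi_q):
    """Variance of the model output when input x is restricted to its
    [lo_q, hi_q] quantile range, divided by the unconditional variance."""
    xs = sorted(x)
    lo = xs[int(lo_q * (len(xs) - 1))]
    hi = xs[int(hi_q * (len(xs) - 1))]
    sub = [v for u, v in zip(x, y) if lo <= u <= hi]
    return statistics.pvariance(sub) / statistics.pvariance(y)

random.seed(2)
x1 = [random.uniform(0.0, 1.0) for _ in range(4000)]
x2 = [random.uniform(0.0, 1.0) for _ in range(4000)]
y = [a + 0.1 * b for a, b in zip(x1, x2)]  # x1 dominates the output

r1 = variance_ratio(x1, y, 0.25, 0.75)  # restrict the influential input
r2 = variance_ratio(x2, y, 0.25, 0.75)  # restrict the weak input
```

    Restricting the influential input to its central quartiles collapses the output variance, while restricting the weak input barely changes it; the ratio, plotted against the retained range, is the regional sensitivity curve.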

  15. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.; Alkhalifah, Tariq Ali

    2017-01-01

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which shows that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.
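
    The moving-window variance scan can be sketched on a plain 2-D array standing in for the back-propagated wavefield; the event location, amplitudes, and window size below are illustrative.

```python
import random
import statistics

def max_variance_location(field, w):
    """Return the top-left (row, col) of the w-by-w window with the largest
    amplitude variance -- a stand-in for scanning the back-propagated field."""
    best, best_rc = -1.0, None
    rows, cols = len(field), len(field[0])
    for r in range(rows - w + 1):
        for c in range(cols - w + 1):
            window = [field[r + i][c + j] for i in range(w) for j in range(w)]
            v = statistics.pvariance(window)
            if v > best:
                best, best_rc = v, (r, c)
    return best_rc

random.seed(3)
# weak background noise plus a strong oscillatory "event" near (12, 7)
field = [[random.gauss(0.0, 0.1) for _ in range(20)] for _ in range(20)]
for i in range(4):
    for j in range(4):
        field[12 + i][7 + j] += 5.0 * (-1) ** (i + j)

loc = max_variance_location(field, 4)
```

    An oscillatory event has near-zero windowed mean but large windowed variance, so the variance condition picks it out even where a plain maximum-energy search would be confused by a strong constant offset or noise spike.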

  17. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
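
    The VMWA statistic itself takes only a few lines to compute; the disease-incidence sequences below are illustrative.

```python
import statistics

def vmwa(data, w):
    """Variance of the means of all sliding windows of length w."""
    means = [sum(data[i:i + w]) / w for i in range(len(data) - w + 1)]
    return statistics.pvariance(means)

clustered = ([1] * 10 + [0] * 10) * 5   # diseased plants occurring in patches
alternating = [1, 0] * 50               # same incidence, no clustering

v_clustered = vmwa(clustered, 5)
v_random = vmwa(alternating, 5)
```

    At a window size near the cluster size, window means swing between 0 and 1 for patchy data but stay near the overall incidence for dispersed data, so plotting VMWA against w reveals the mean cluster size as the paper describes.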

  18. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including the case of Support Vector Machines (SVMs), and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  19. Regional income inequality model based on theil index decomposition and weighted variance coeficient

    Science.gov (United States)

    Sitepu, H. R.; Darnius, O.; Tambunan, W. N.

    2018-03-01

    Regional income inequality is an important issue in the study of the economic development of a region: rapid economic development may not be matched by people's per capita income. Many methods for measuring regional income inequality have been suggested. This research used the Theil index and the weighted variance coefficient to measure regional income inequality. Based on the Theil index, regional income can be decomposed into work-force productivity and work-force participation components, presented in a linear relation. Under economic assumptions on sector j, sectoral income, and the work-force rate, the work-force productivity imbalance can be decomposed into between-sector and intra-sector components. Next, the weighted variation coefficient is defined in terms of the revenue and the productivity of the work force. From the square of the weighted variation coefficient, it was found that the decomposition of the regional revenue imbalance can be analysed by finding out how much each component contributes to the regional imbalance, which in this research was analysed across nine sectors of economic activity.
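
    The Theil index decomposition underlying the linear relation can be sketched as follows; the identity total = between + within holds exactly. The sectoral income figures are illustrative.

```python
import math

def theil(values):
    """Theil T index of a list of positive incomes."""
    mu = sum(values) / len(values)
    return sum((v / mu) * math.log(v / mu) for v in values) / len(values)

def theil_decomposition(groups):
    """Split Theil T into between-group and within-group components.

    groups: one list of individual incomes per sector.
    Returns (total, between, within), with total == between + within.
    """
    all_inc = [v for g in groups for v in g]
    n, Y = len(all_inc), sum(all_inc)
    between = sum(sum(g) / Y * math.log((sum(g) / Y) / (len(g) / n))
                  for g in groups)
    within = sum(sum(g) / Y * theil(g) for g in groups)
    return theil(all_inc), between, within

# illustrative incomes in three sectors of unequal size and pay
sectors = [[10, 12, 11], [30, 28, 35, 33], [5, 6, 5, 4, 6]]
total, between, within = theil_decomposition(sectors)
```

    The between-group term weights each sector by its income share, which is what lets the analyst ask how much each sector contributes to the regional imbalance.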

  20. Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models

    DEFF Research Database (Denmark)

    Lanne, Markku; Nyberg, Henri

    We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use of t...

  1. Asymmetries in conditional mean variance: modelling stock returns by asMA-asQGARCH

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.

    2004-01-01

    We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing financial time series where it has been noted that there is an asymmetric impact of good news and bad

  2. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    To solve the problem that dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirement of the fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller taking the first and second derivatives of the FOG signal as inputs is designed to estimate the window length of the DAVAR. Then the Allan variances of the signals within the time-variant window are computed to obtain the DAVAR of the FOG signal and describe its dynamic characteristics. Additionally, a radar-chart-based performance evaluation index for the algorithm is proposed. Experiment results show that, compared with DAVAR methods using different fixed window lengths, the change of the FOG signal with time can be identified effectively and the performance evaluation index can be improved by at least 30% by the DAVAR method with time-variant window length based on fuzzy control.
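
    The underlying Allan variance computation, on top of which the DAVAR slides a window through time, can be sketched as below. The simulated gyro signal is plain white noise, for which the Allan variance should fall roughly as 1/m with cluster size m; the signal parameters are illustrative.

```python
import random

def allan_variance(data, m):
    """Non-overlapping Allan variance at cluster size m."""
    k = len(data) // m
    means = [sum(data[i * m:(i + 1) * m]) / m for i in range(k)]
    diffs = [means[i + 1] - means[i] for i in range(k - 1)]
    return sum(d * d for d in diffs) / (2 * (k - 1))

random.seed(4)
# white-noise-like FOG rate output (illustrative amplitude)
gyro = [random.gauss(0.0, 0.02) for _ in range(4096)]

a1 = allan_variance(gyro, 1)
a16 = allan_variance(gyro, 16)
```

    A DAVAR analysis repeats this computation inside a window that slides along the record; the paper's contribution is letting a fuzzy controller adapt that window's length to the signal dynamics instead of fixing it.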

  3. Model determination in a case of heterogeneity of variance using sampling techniques.

    Science.gov (United States)

    Varona, L; Moreno, C; Garcia-Cortes, L A; Altarriba, J

    1997-01-12

    A sampling-based model determination procedure is described for a case of heterogeneity of variance. The procedure makes use of the predictive distribution of each datum given the rest of the data and the structure of the assumed model. These predictive distributions are computed with a Gibbs sampling procedure. The final criterion for comparing models is the mean square error between the expectations of the predictive distributions and the real data. The procedure was applied to a data set of weight at 210 days in the Spanish Pirenaica beef cattle breed. Three proposed models were compared: (a) single-trait animal model; (b) heterogeneous-variance animal model; and (c) multiple-trait animal model. The best-supported model was the heterogeneous-variance animal model, probably reflecting a compromise between the complexity of the model and the amount of available information. The estimated heritabilities under the preferred model were 0.489 ± 0.076 for males and 0.331 ± 0.082 for females.

  4. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  5. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic

  6. Asymmetries in conditional mean and variance: Modelling stock returns by asMA-asQGARCH

    NARCIS (Netherlands)

    Brännäs, K.; de Gooijer, J.G.

    2000-01-01

    The asymmetric moving average model (asMA) is extended to allow for asymmetric quadratic conditional heteroskedasticity (asQGARCH). The asymmetric parametrization of the conditional variance encompasses the quadratic GARCH model of Sentana (1995). We introduce a framework for testing asymmetries

  7. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
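
    A sketch of the ANOVA-based computation for the balanced one-factor random-effects model: the within-patient mean square estimates the random component, and the between-patient mean square, corrected for it, estimates the systematic component. The simulated setup errors below are illustrative.

```python
import random

def variance_components(data):
    """One-factor random-effects ANOVA estimates from balanced data.

    data: one list of setup errors per patient, each of equal length.
    Returns (sigma_systematic_sq, sigma_random_sq); the systematic
    estimate (MSB - MSW)/n can be negative in small samples.
    """
    a = len(data)      # number of patients
    n = len(data[0])   # fractions per patient
    grand = sum(sum(p) for p in data) / (a * n)
    msb = n * sum((sum(p) / n - grand) ** 2 for p in data) / (a - 1)
    msw = sum((x - sum(p) / n) ** 2 for p in data for x in p) / (a * (n - 1))
    return (msb - msw) / n, msw

random.seed(5)
data = []
for _ in range(30):                      # 30 patients, 10 fractions each
    offset = random.gauss(0.0, 2.0)      # per-patient systematic error (SD 2 mm)
    data.append([offset + random.gauss(0.0, 1.0) for _ in range(10)])

sys_sq, rand_sq = variance_components(data)
```

    Subtracting MSW before dividing by n is what removes the random-error contamination that makes the conventional estimator overestimate the systematic component, most visibly when n (the number of fractions) is small.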

  9. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare-event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combine ideas that lead to very fast estimation procedures with another approach called the zero-variance approximation. Together these ideas produce a very efficient method that has the right theoretical robustness property, Bounded Relative Error. Some examples illustrate the results.

  10. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  11. The modified Black-Scholes model via constant elasticity of variance for stock options valuation

    Science.gov (United States)

    Edeki, S. O.; Owoloko, E. A.; Ugbebor, O. O.

    2016-02-01

    In this paper, the classical Black-Scholes option pricing model is revisited. We present a modified version of the Black-Scholes model via the application of the constant elasticity of variance model (CEVM); in this case, the volatility of the stock price is shown to be a non-constant function, unlike the constant-volatility assumption of the classical Black-Scholes model.
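
    A sketch of the CEV specification; parameterizations vary in the literature, and this one assumes dS = μS dt + δ S^β dW, under which the proportional volatility σ(S) = δ S^(β−1) is non-constant for β ≠ 1. All parameter values are illustrative.

```python
import math
import random

def cev_sigma(S, delta, beta):
    """Proportional volatility under dS = mu*S*dt + delta*S**beta*dW:
    sigma(S) = delta * S**(beta - 1), constant only when beta == 1."""
    return delta * S ** (beta - 1)

def simulate_cev(S0, mu, delta, beta, T, steps, rng):
    """Euler-Maruyama path of the CEV process (illustrative sketch)."""
    dt = T / steps
    S = S0
    for _ in range(steps):
        S += mu * S * dt + delta * S ** beta * math.sqrt(dt) * rng.gauss(0, 1)
        S = max(S, 1e-8)  # crude positivity guard, adequate for a sketch
    return S

rng = random.Random(6)
S_T = simulate_cev(S0=100.0, mu=0.05, delta=0.5, beta=0.8,
                   T=1.0, steps=252, rng=rng)
```

    With β < 1 the proportional volatility rises as the price falls, reproducing the leverage effect that the constant-volatility Black-Scholes model cannot capture.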

  12. Adjoint-based global variance reduction approach for reactor analysis problems

    International Nuclear Information System (INIS)

    Zhang, Qiong; Abdel-Khalik, Hany S.

    2011-01-01

    A new variant of a hybrid Monte Carlo-Deterministic approach for simulating particle transport problems is presented and compared to the SCALE FW-CADIS approach. The new approach, denoted by the Subspace approach, optimizes the selection of the weight windows for reactor analysis problems where detailed properties of all fuel assemblies are required everywhere in the reactor core. Like the FW-CADIS approach, the Subspace approach utilizes importance maps obtained from deterministic adjoint models to derive automatic weight-window biasing. In contrast to FW-CADIS, the Subspace approach identifies the correlations between weight window maps to minimize the computational time required for global variance reduction, i.e., when the solution is required everywhere in the phase space. The correlations are employed to reduce the number of maps required to achieve the same level of variance reduction that would be obtained with single-response maps. Numerical experiments, serving as proof of principle, are presented to compare the Subspace and FW-CADIS approaches in terms of the global reduction in standard deviation. (author)

  13. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    Science.gov (United States)

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, the contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records, and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation-maximization REML algorithm implemented via double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variance ranged between 1.01×10⁻³ and 4.17×10⁻³ for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for the herd × test-day effect and between 0.55 and 0.97 for the permanent environmental effect. Therefore, nongenetic effects also

  14. A log-sinh transformation for data normalization and variance stabilization

    Science.gov (United States)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
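
    A minimal sketch of the transformation described above, following the form z = (1/b)·log(sinh(a + b·y)) given by Wang et al.; the parameter values here are illustrative, not fitted:

```python
import numpy as np

# Log-sinh transformation and its inverse. For large y, sinh(a + b*y) behaves
# like exp(a + b*y)/2, so z grows linearly in y and the error spread approaches
# a constant; for small y the transform stretches values, stabilizing variance.
def log_sinh(y, a=0.1, b=0.5):
    return np.log(np.sinh(a + b * np.asarray(y, dtype=float))) / b

def inverse_log_sinh(z, a=0.1, b=0.5):
    return (np.arcsinh(np.exp(b * np.asarray(z, dtype=float))) - a) / b

y = np.array([0.5, 1.0, 5.0, 50.0])
z = log_sinh(y)
assert np.allclose(inverse_log_sinh(z), y)  # exact round trip
```

    In practice a and b would be estimated jointly with the hydrological model's error parameters, as the paper's case studies do.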

  15. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    Science.gov (United States)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization technique named Swarm-based Mean-Variance Mapping Optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently implemented for solving economic dispatch.

  16. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    International Nuclear Information System (INIS)

    Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.

    2015-01-01

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
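
    The multilevel variance-reduction idea in ingredient (3) can be sketched with toy stand-ins: write the expensive high-fidelity expectation as a cheap surrogate expectation plus a correlated correction, and spend most Monte Carlo samples on the cheap level. The two "models" below are illustrative functions, not HDG or reduced-basis solvers:

```python
import numpy as np

rng = np.random.default_rng(0)

def cheap_model(x):      # low-fidelity surrogate (analogue of a reduced basis solve)
    return np.sin(x)

def expensive_model(x):  # high-fidelity model, strongly correlated with the surrogate
    return np.sin(x) + 0.01 * x**2

# Level 0: many cheap samples; level 1: few samples of the correction term.
x0 = rng.normal(size=100_000)
x1 = rng.normal(size=1_000)
estimate = cheap_model(x0).mean() + (expensive_model(x1) - cheap_model(x1)).mean()

# The correction has tiny variance because the two models are correlated,
# so only a few expensive evaluations are needed for the same accuracy.
print(estimate)
```

    The same telescoping construction extends to several intermediate fidelities, which is what the paper exploits across its hierarchy of reduced basis approximations.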

  17. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
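
    For context, the unconstrained minimum variance weights (fully invested, shorting allowed) have a closed form for any covariance estimate, w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the covariance matrix below is made up for illustration:

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum variance portfolio for covariance estimate `cov`."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # Sigma^{-1} 1 without forming the inverse
    return w / w.sum()               # normalize so weights sum to 1

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
print(w, w.sum())
```

    The study's comparisons effectively vary how `cov` is estimated (sample covariance vs. multivariate GARCH) and add long-only or 130/30 constraints, which require a numerical solver instead of this closed form.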

  18. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  19. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs, and its implications for modelling the firing patterns of single neurons.

  20. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs, and its implications for modelling the firing patterns of single neurons.

  1. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with Protease Inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from the baseline, the average difference from the baseline and the level evolution are the considered endpoints. Specific validation criteria, based on a standardized distance in means and variances within plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints individually. However, only two models met the validity criterion when all endpoints were considered jointly. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
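
    One plausible reading of the ±10% standardized-distance criterion can be sketched as below; the exact standardization the authors use may differ, so the distance definitions here are assumptions:

```python
import numpy as np

def validates(real, simulated, tolerance=0.10):
    """Accept the simulation if standardized mean and variance distances
    between real and simulated data both fall within +/- tolerance."""
    real = np.asarray(real, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    d_mean = abs(real.mean() - simulated.mean()) / real.std(ddof=1)
    d_var = abs(real.var(ddof=1) - simulated.var(ddof=1)) / real.var(ddof=1)
    return d_mean <= tolerance and d_var <= tolerance
```

    Such a rule compares distributions only through their first two moments, which is exactly why checking it across all endpoints jointly, as the paper does, is a stricter test.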

  2. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
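
    For concreteness, the long-challenged DerSimonian and Laird moment estimator mentioned above can be sketched as follows, given study effect estimates y_i and within-study variances v_i:

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance tau^2."""
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(v, dtype=float)        # inverse-variance weights
    mu_fixed = np.sum(w * y) / np.sum(w)        # fixed-effect pooled mean
    Q = np.sum(w * (y - mu_fixed) ** 2)         # Cochran's Q statistic
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / denom)      # truncated at zero
```

    The truncation at zero is one source of the estimator's known bias in small meta-analyses, which motivates the Paule-Mandel and REML alternatives the review recommends.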

  3. Estimation of additive and dominance variance for reproductive traits from different models in Duroc purebred

    Directory of Open Access Journals (Sweden)

    Talerngsak Angkuraseranee

    2010-05-01

    The additive and dominance genetic variances of 5,801 Duroc reproductive and growth records were estimated using BLUPF90 PC-PACK. Estimates were obtained for number born alive (NBA), birth weight (BW), number weaned (NW), and weaning weight (WW). Data were analyzed using two mixed model equations. The first model included fixed effects and random effects identifying inbreeding depression, additive gene effects and permanent environmental effects. The second model was similar to the first model, but included the dominance genotypic effect. Heritability estimates of NBA, BW, NW and WW from the two models were 0.1558/0.1716, 0.1616/0.1737, 0.0372/0.0874 and 0.1584/0.1516, respectively. Proportions of dominance effect to total phenotypic variance from the dominance model were 0.1024, 0.1625, 0.0470, and 0.1536 for NBA, BW, NW and WW, respectively. Dominance effects were found to have a sizable influence on the litter size traits analyzed. Therefore, genetic evaluation with the dominance model (Model 2) is found more appropriate than the animal model (Model 1).

  4. MMSE-based algorithm for joint signal detection, channel and noise variance estimation for OFDM systems

    CERN Document Server

    Savaux, Vincent

    2014-01-01

    This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is pr

  5. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  6. A Model-Free No-arbitrage Price Bound for Variance Options

    Energy Technology Data Exchange (ETDEWEB)

    Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr [Ecole Polytechnique, INRIA-Saclay (France); Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu [Ecole Polytechnique, CMAP (France)

    2013-08-01

    We suggest a numerical approximation for an optimization problem, motivated by its applications in finance to find the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. Then we propose a gradient projection algorithm together with the finite difference scheme to solve the optimization problem. We prove the general convergence, and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.

  7. Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling

    Science.gov (United States)

    Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng

    2016-01-01

    Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath. PMID:27428974

  8. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    Science.gov (United States)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Investors in stocks also face risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance optimization of a stock portfolio using a non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to several Islamic stocks in Indonesia. The expected result is the proportion of investment in each Islamic stock analysed.

  9. Linear-Array Photoacoustic Imaging Using Minimum Variance-Based Delay Multiply and Sum Adaptive Beamforming Algorithm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2017-01-01

    In Photoacoustic imaging (PA), Delay-and-Sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely Delay-Multiply-and-Sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, so-called Minimum Variance-Based D...
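
    A minimal sketch of the DAS and DMAS combination stages, applied to channel data that has already been delayed for a given focal point; the delay calculation and the minimum-variance weighting of the full MVB-DMAS beamformer are omitted:

```python
import numpy as np

def das(delayed):
    """Delay-and-Sum: sum pre-delayed channels (n_elements x n_samples)."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay-Multiply-and-Sum: combine signed square roots of pairwise products,
    which suppresses uncorrelated sidelobe energy relative to DAS."""
    n = delayed.shape[0]
    out = np.zeros(delayed.shape[1])
    for i in range(n):
        for j in range(i + 1, n):
            prod = delayed[i] * delayed[j]
            # signed square root keeps the result in the signal's dimensionality
            out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out
```

    DMAS rewards coherence: pairs of channels that agree in sign reinforce the output, while incoherent noise tends to cancel, which is the source of its lower sidelobes.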

  10. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    Science.gov (United States)

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions-variance homogeneity and normality-that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less-desirable and less-flexible analytical techniques, such as linear interpolation.

  11. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we...... have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd....
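
    The realized variance estimator the paper studies is simply the sum of squared high-frequency log returns over the period:

```python
import numpy as np

def realized_variance(prices):
    """Realized variance: sum of squared intra-period log returns.
    Under the paper's SV/semimartingale setting, this converges to the
    integrated variance as the sampling frequency M grows."""
    log_returns = np.diff(np.log(np.asarray(prices, dtype=float)))
    return np.sum(log_returns ** 2)

prices = np.array([100.0, 100.5, 99.8, 100.2, 100.1])
print(realized_variance(prices))
```

    The abstract's caveat applies directly: even for fairly large M this estimate of integrated variance can be noisy, which is what the paper's limit theory quantifies.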

  12. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  13. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
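
    For reference, the conventional VIF that this work generalizes is 1/(1 − R_j²), where R_j² comes from regressing regressor j on the remaining regressors; a minimal sketch:

```python
import numpy as np

def vif(X):
    """Conventional variance inflation factors for the columns of X
    (n_samples x p). Values near 1 indicate near-orthogonal regressors;
    large values flag collinearity."""
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)                      # center to remove the intercept
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / (y @ y)    # R^2 of regressor j on the rest
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

    Note this treats all regressors as linked or none, which is exactly the limitation the paper's unlinked/partially linked models address.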

  14. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method

    Science.gov (United States)

    Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing

    2018-02-01

    In general, sound waves cause vibration of the objects encountered along their traveling path. If we make a laser beam illuminate the rough surface of an object, it will be scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. This method allows us to select the proper pixels, those having large variances of gray-value variations over time, from a small region of the speckle patterns. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, our method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified by applying it to a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and requires more than an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal of time duration 1.876 s is recovered from various objects with a time consumption of only 5.38 s.
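
    The pixel-selection and summation steps can be sketched as follows; this is a simplified reading of the method, and the frame shapes and selection count are illustrative:

```python
import numpy as np

def recover_sound(frames, n_pixels=100):
    """Variance-based sound recovery from a stack of speckle frames
    (n_frames x H x W): keep the pixels whose gray values vary most over
    time, then sum their mean-removed time series."""
    T = frames.shape[0]
    flat = frames.reshape(T, -1).astype(float)
    variances = flat.var(axis=0)
    idx = np.argsort(variances)[-n_pixels:]      # highest-variance pixels
    selected = flat[:, idx]
    return (selected - selected.mean(axis=0)).sum(axis=1)
```

    Selecting a small set of high-variance pixels is what avoids the per-frame correlation search of digital image correlation, giving the reported order-of-magnitude speedup.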

  15. On discrete stochastic processes with long-lasting time dependence in the variance

    Science.gov (United States)

    Queirós, S. M. D.

    2008-11-01

    In this manuscript, we analytically and numerically study the statistical properties of a heteroskedastic process based on the celebrated ARCH generator of random variables, whose variance is defined by a memory kernel of q_m-exponential form (e_{q_m}^x, which reduces to the ordinary exponential e^x when q_m = 1). Specifically, we inspect the self-correlation function of the squared random variables as well as the kurtosis. In addition, by numerical procedures, we infer the stationary probability density function of both the heteroskedastic random variables and the variance, the multiscaling properties, the first-passage time distribution, and the dependence degree. Finally, we introduce an asymmetric variance version of the model that enables us to reproduce the so-called leverage effect in financial markets.
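
    As a baseline for the model described above, the plain ARCH(1) generator (with ordinary one-step memory rather than the paper's q_m-exponential kernel) already produces the heavy tails that motivate the analysis:

```python
import numpy as np

# ARCH(1): sigma_t^2 = a + b * x_{t-1}^2,  x_t = sigma_t * z_t,  z_t ~ N(0, 1)
rng = np.random.default_rng(1)
a, b, T = 1.0, 0.5, 50_000
x = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = a / (1.0 - b)                  # stationary variance (here: 2.0)
for t in range(1, T):
    sigma2[t] = a + b * x[t - 1] ** 2
    x[t] = np.sqrt(sigma2[t]) * rng.normal()

# Heteroskedasticity shows up as excess kurtosis relative to a Gaussian
# (theoretical value 3*(1 - b^2)/(1 - 3*b^2) = 9 for b = 0.5).
kurtosis = np.mean(x ** 4) / np.mean(x ** 2) ** 2
print(kurtosis)
```

    Replacing the one-step dependence on x_{t-1}² with a slowly decaying q_m-exponential memory is what gives the paper's process its long-lasting time dependence in the variance.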

  16. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  17. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP...

  18. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups

  19. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...

  20. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, namely parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
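    The core shrinkage idea, improving unreliable per-variable variance estimates by pooling them toward a common value, can be sketched in a few lines (a generic Stein-style illustration on simulated data, not the authors' MVR procedure; the fixed weight `lam` is an assumption):

```python
import numpy as np

def shrink_variances(x, lam=0.5):
    """Shrink per-variable sample variances toward their pooled mean."""
    s2 = x.var(axis=1, ddof=1)   # noisy per-variable variances (n is tiny)
    pooled = s2.mean()           # common shrinkage target
    return (1 - lam) * s2 + lam * pooled

rng = np.random.default_rng(0)
# 500 variables, only 5 samples each; true variance is 1 everywhere
x = rng.normal(size=(500, 5))
raw = x.var(axis=1, ddof=1)
shrunk = shrink_variances(x)
# mean squared error against the known true variance
mse_raw = np.mean((raw - 1.0) ** 2)
mse_shrunk = np.mean((shrunk - 1.0) ** 2)
```

    With only 5 samples per variable the pooled estimator has markedly lower mean squared error than the raw per-variable variances, which is the effect the regularization exploits.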

  1. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
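    The reported heritabilities follow directly from the listed variance components, since h² is the additive variance divided by the total (additive + maternal + residual); a quick check reproduces the abstract's figures:

```python
# posterior means at hatch, 7, 14, 21 and 28 days (from the abstract)
va = [0.15, 4.18, 14.62, 27.18, 32.68]   # additive genetic
vm = [0.23, 1.29, 2.76, 4.12, 5.16]      # maternal environment
ve = [0.084, 6.43, 22.66, 31.21, 30.85]  # residual

total = [a + m + e for a, m, e in zip(va, vm, ve)]
h2 = [a / t for a, t in zip(va, total)]   # heritability at each age
m2 = [m / t for m, t in zip(vm, total)]   # maternal proportion at each age
```

    These agree with the reported heritabilities (0.33, 0.35, 0.36, 0.43, 0.47) and the 0.50 maternal proportion at hatch to within a rounding margin of 0.01.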

  2. Variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.

  3. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers must strictly adhere to the rules of extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
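    The extreme-value step described above, fitting a distribution to annual maxima and reading off return levels whose sampling variance grows with the return period, can be illustrated with SciPy's GEV distribution (synthetic data and parameter values are assumptions, not the study's series):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# hypothetical 60-year record of annual streamflow maxima (m^3/s)
maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60,
                        random_state=rng)

# fit a GEV distribution; the T-year return level is its (1 - 1/T) quantile
c, loc, scale = genextreme.fit(maxima)
q2, q20, q100 = (genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
                 for T in (2, 20, 100))

# bootstrap the fit: the spread of an estimated return level
# grows with the return period
boots = []
for _ in range(200):
    resample = rng.choice(maxima, size=maxima.size, replace=True)
    cb, lb, sb = genextreme.fit(resample)
    boots.append([genextreme.ppf(1 - 1 / T, cb, loc=lb, scale=sb)
                  for T in (2, 20, 100)])
sd2, sd20, sd100 = np.std(boots, axis=0)
```

    The bootstrap standard deviations illustrate the abstract's point that the fitting step adds variance which increases with the return period.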

  4. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  5. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  6. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.
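    For context, the one-period version of the weighted mean/variance objective has a simple closed form; a minimal sketch with illustrative numbers (not the paper's multiperiod market-cloning algorithm):

```python
import numpy as np

def mean_variance_weights(mu, cov, gamma=1.0):
    """Maximize w.mu - gamma * w.cov.w (one period, unconstrained).

    Setting the gradient mu - 2*gamma*cov @ w to zero gives the
    closed form w = (2*gamma*cov)^{-1} mu solved for below.
    """
    return np.linalg.solve(2 * gamma * cov, mu)

# two assets with assumed expected returns and covariance
mu = np.array([0.08, 0.05])
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])
w = mean_variance_weights(mu, cov, gamma=2.0)
```

    At the optimum the first-order condition mu = 2*gamma*cov @ w holds exactly, which is a convenient sanity check on any implementation.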

  7. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints... In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...

  8. Removing an intersubject variance component in a general linear model improves multiway factoring of event-related spectral perturbations in group EEG studies.

    Science.gov (United States)

    Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C

    2013-03-01

    Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.

  9. Intercentre variance in patient reported outcomes is lower than objective rheumatoid arthritis activity measures

    DEFF Research Database (Denmark)

    Khan, Nasim Ahmed; Spencer, Horace Jack; Nikiphorou, Elena

    2017-01-01

    Objective: To assess intercentre variability in the ACR core set measures, DAS28 based on three variables (DAS28v3) and Routine Assessment of Patient Index Data 3 in a multinational study. Methods: Seven thousand and twenty-three patients were recruited (84 centres; 30 countries) using a standard...... built to adjust for the remaining ACR core set measure (for each ACR core set measure or each composite index), socio-demographics and medical characteristics. ANOVA and analysis of covariance models yielded similar results, and ANOVA tables were used to present variance attributable to recruiting...... centre. Results: The proportion of variances attributable to recruiting centre was lower for patient reported outcomes (PROs: pain, HAQ, patient global) compared with objective measures (joint counts, ESR, physician global) in all models. In the full model, variance in PROs attributable to recruiting...

  10. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  11. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.

    Science.gov (United States)

    Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L

    2012-12-01

    The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iteratively reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied on the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).

  12. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  13. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Directory of Open Access Journals (Sweden)

    Daniel Bartz

    Full Text Available Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  14. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

    DEFF Research Database (Denmark)

    Ibáñez-Escriche, N; Garcia, M; Sorensen, D

    2010-01-01

    This note provides a description of software that allows to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely available...... for research purposes at http://www.bdporc.irta.es/estudis.jsp. The main feature of the program is to compute Monte Carlo estimates of marginal posterior distributions of parameters of interest. The program is quite flexible, allowing the user to fit a variety of linear models at the level of the mean...

  15. Hybrid biasing approaches for global variance reduction

    International Nuclear Information System (INIS)

    Wu, Zeyun; Abdel-Khalik, Hany S.

    2013-01-01

    A new variant of the Monte Carlo-deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purposes. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo Deterministic Method based on Gaussian Process Model is introduced. ► Method employs deterministic model to calculate responses correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed up is achieved for a PWR core model.
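    The covariance-rank step is easy to demonstrate: the number of dominant eigenvalues of the response covariance matrix bounds how many uncorrelated pseudo responses are needed. A small sketch (the 0.99 variance-captured threshold is an assumed choice, not from the paper):

```python
import numpy as np

def effective_rank(cov, tol=0.99):
    """Smallest number of eigenvalues capturing `tol` of total variance."""
    vals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending eigenvalues
    frac = np.cumsum(vals) / vals.sum()             # cumulative variance share
    return int(np.searchsorted(frac, tol) + 1)

# three responses, two of them perfectly correlated -> effective rank 2
rng = np.random.default_rng(2)
x = rng.normal(size=(1000, 2))
responses = np.column_stack([x[:, 0], x[:, 0], x[:, 1]])
cov = np.cov(responses, rowvar=False)
```

    Here two of the three responses are duplicates, so two uncorrelated pseudo responses suffice; in the GVR setting this is what lets correlated responses share weight-window biasing.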

  16. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  17. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    Science.gov (United States)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work presents Ant Colony Optimization (ACO), which was initially developed to be a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now. Using heuristic algorithms in this case is imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March 1992 to September 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in UK, S&P 100 in USA and Nikkei 225 in Japan. The test results indicate that the ACO is much more robust and effective than Particle swarm optimization (PSO), especially for low-risk investment portfolios.

  18. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean 0 validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique, because instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response evaluated. The error in the proposed validation technique resembles a root mean squared error, thus it can be used to determine a stop criterion for sequential sampling of metamodels.

  19. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
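    The key modelling point, that the error scales with the true value, changes how observations should be weighted in a least squares adjustment. A toy sketch of the effect on a one-parameter line through the origin (an illustration only, not the paper's three LS adjustments):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(1.0, 10.0, 50)
a_true = 2.5
# multiplicative errors: the noise scales with the true signal level
y = a_true * t * (1.0 + 0.05 * rng.normal(size=t.size))

# ordinary LS weights large-t points too heavily; weighting each point
# by the inverse squared signal (here 1/t**2, since var(y_i) ~ (a*t_i)**2)
# restores equal *relative* weight to every observation
a_ols = (t @ y) / (t @ t)
w = 1.0 / t**2
a_wls = (w * t) @ y / ((w * t) @ t)
```

    Both estimates are consistent here, but the weighted one has the smaller variance under the multiplicative error model, which is why treating such errors as additive (as in conventional DEM construction) is suboptimal.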

  20. Task-Based Listening Assessment and the Influence of Construct-Irrelevant Variance

    Directory of Open Access Journals (Sweden)

    Arshya Keyvanfar

    2008-02-01

    Full Text Available Task-based listening tests such as IELTS require testees to listen to some information on a CD and simultaneously answer the related items. To answer such items, testees are expected to comprehend, analyze, compare and infer pieces of information while listening to the incoming audio material. The present research attempted to investigate whether the two major characteristics of question type and consecutive/simultaneous performance have any impact on the listening performance of Iranian EFL learners. Findings indicated that participants had a significantly better performance when they tackled the tasks consecutively, and performed better on listening MC items than on listening task-based items. The researchers thus concluded that task-based listening tests such as the IELTS listening module may be under the influence of construct-irrelevant variance.

  1. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...

  2. A class of multi-period semi-variance portfolio for petroleum exploration and development

    Science.gov (United States)

    Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei

    2012-10-01

    Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multiple periods. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated for the first time. In addition, a hybrid genetic algorithm, which makes use of the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
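    Replacing variance with semi-variance means penalizing only returns that fall below a target; a minimal sketch of the downside measure itself (the sample numbers are made up; the paper's multi-period model and hybrid genetic algorithm are not reproduced here):

```python
import numpy as np

def semi_variance(returns, target=None):
    """Downside semi-variance: mean squared shortfall below the target.

    The target defaults to the mean return, the usual substitution for
    variance in Markowitz-style models when only downside deviations
    should be penalized.
    """
    r = np.asarray(returns, dtype=float)
    if target is None:
        target = r.mean()
    downside = np.minimum(r - target, 0.0)   # zero out upside deviations
    return np.mean(downside ** 2)

r = np.array([0.05, -0.02, 0.03, -0.04, 0.01])
```

    By construction the semi-variance never exceeds the ordinary variance, since it discards the upside squared deviations.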

  3. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  4. Forecasting the variance and return of Mexican financial series with symmetric GARCH models

    Directory of Open Access Journals (Sweden)

    Fátima Irina VILLALBA PADILLA

    2013-03-01

    Full Text Available The present research shows the application of generalized autoregressive conditional heteroskedasticity (GARCH) models to forecast the variance and return of the IPC, the EMBI, the weighted-average government funding rate, the fix exchange rate and the Mexican oil reference, as important tools for investment decisions. In-sample and out-of-sample forecasts are performed. The period covered runs from 2005 to 2011.
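    A GARCH(1,1) variance forecast of the kind used in such studies boils down to a one-line recursion; a hand-rolled sketch with assumed (not estimated) parameters on simulated returns:

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta, horizon=10):
    """Filter a GARCH(1,1) variance path and forecast it forward.

    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1};
    multi-step forecasts decay geometrically toward the unconditional
    variance omega / (1 - alpha - beta).
    """
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(r.size)
    sigma2[0] = r.var()                       # common initialization
    for t in range(1, r.size):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    # iterate the recursion on expectations to forecast ahead
    fc = np.empty(horizon)
    last = omega + alpha * r[-1] ** 2 + beta * sigma2[-1]
    for h in range(horizon):
        fc[h] = last
        last = omega + (alpha + beta) * last
    return sigma2, fc

rng = np.random.default_rng(4)
r = 0.01 * rng.normal(size=500)
sig2, fc = garch11_forecast(r, omega=1e-6, alpha=0.08, beta=0.90, horizon=50)
uncond = 1e-6 / (1 - 0.08 - 0.90)
```

    In practice omega, alpha and beta come from (quasi-)maximum likelihood estimation; the point here is only the recursion and the mean reversion of the forecast path.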

  5. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. It links the research to the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  6. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from the CRAN.

  7. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
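    The trend-test step of the proposed validation can be sketched as a weighted regression of within-year variance estimates on birth year (a generic sketch with made-up numbers; the outlier screening and the empirical confidence interval are omitted):

```python
import numpy as np

def weighted_trend(years, estimates, weights):
    """Weighted linear regression slope of variance estimates on year.

    Each within-year estimate is weighted, e.g. by the inverse of its
    prediction error variance; a slope clearly different from zero
    flags heterogeneous genetic variance over time.
    """
    x = np.asarray(years, dtype=float)
    y = np.asarray(estimates, dtype=float)
    w = np.asarray(weights, dtype=float)
    xbar = np.average(x, weights=w)
    ybar = np.average(y, weights=w)
    return (np.sum(w * (x - xbar) * (y - ybar))
            / np.sum(w * (x - xbar) ** 2))

years = np.arange(2000, 2010)
flat = np.full(10, 1.0)                    # homogeneous variance: no trend
inflating = 1.0 * 1.02 ** np.arange(10)    # 2% yearly inflation
w = np.ones(10)                            # equal weights for illustration
```

    With equal weights the homogeneous series gives a zero slope, while the 2%-per-year series gives a clearly positive one; in the actual validation the slope is then compared against an empirical 95% interval.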

  8. Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting

    Directory of Open Access Journals (Sweden)

    Stanislav Anatolyev

    2015-08-01

Full Text Available Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while the distributional asymmetry has little or moderate impact, with these phenomena tending to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.

  9. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke less likely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.

  10. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  11. An Efficient SDN Load Balancing Scheme Based on Variance Analysis for Massive Mobile Users

    Directory of Open Access Journals (Sweden)

    Hong Zhong

    2015-01-01

Full Text Available In a traditional network, server load balancing is used to satisfy the demand for high data volumes. The technique requires large capital investment while offering poor scalability and flexibility, making it difficult to support the highly dynamic workload demands of massive numbers of mobile users. To solve these problems, this paper analyses the principle of software-defined networking (SDN) and presents a new probabilistic method of load balancing based on variance analysis. The method can be used to dynamically manage traffic flows for supporting massive mobile users in SDN networks. The paper proposes a solution using the OpenFlow virtual switching technology instead of the traditional hardware switching technology. An SDN controller monitors the data traffic of each port by means of variance analysis and provides a probability-based selection algorithm to redirect traffic dynamically with the OpenFlow technology. Compared with existing load balancing methods, which were designed to support traditional networks, this solution offers lower cost, higher reliability, and greater scalability, satisfying the needs of mobile users.
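The probability-based redirection idea can be sketched as follows (a hypothetical weighting scheme for illustration, not the paper's exact algorithm): the controller tracks the variance of per-port traffic as an imbalance signal, and each server is selected with probability inversely proportional to its monitored load.

```python
import random

def load_variance(loads):
    # variance of per-port traffic; a large value signals imbalance
    mean = sum(loads) / len(loads)
    return sum((l - mean) ** 2 for l in loads) / len(loads)

def selection_probabilities(loads):
    # weight each server inversely to its current load (hypothetical scheme)
    inv = [1.0 / (l + 1e-9) for l in loads]
    total = sum(inv)
    return [v / total for v in inv]

def pick_server(loads, rng=random):
    # sample a server index according to the inverse-load probabilities
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(selection_probabilities(loads)):
        acc += p
        if r < acc:
            return i
    return len(loads) - 1
```

Lightly loaded servers then receive most new flows, which drives the load variance back down over time.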

  12. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
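The minimum-variance criterion for choosing the dip angle can be sketched like this (illustrative only; the per-window least-squares depth estimation is assumed to have been done upstream):

```python
import numpy as np

def best_dip(depths_by_dip):
    """depths_by_dip maps a candidate dip angle to the depths estimated
    from the different window lengths. Per the criterion described above,
    the dip whose depth estimates scatter the least (minimum variance)
    is taken as the correct one."""
    variances = {dip: float(np.var(d)) for dip, d in depths_by_dip.items()}
    dip = min(variances, key=variances.get)
    return dip, variances[dip]
```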

  13. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

    directly. In the mean–variance bi-criterion objective function risk appears directly, it also considers an ensemble of reservoir models, and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio...... optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...
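For a fixed risk-aversion weight, the bi-criterion objective described above collapses to a single scalar evaluated over the ensemble of reservoir models (a minimal sketch; the weight λ is a user choice, and λ = 0 recovers plain expected-NPV optimization):

```python
import numpy as np

def mean_variance_objective(npv_ensemble, risk_aversion):
    # J = E[NPV] - lambda * Var[NPV] over the ensemble of reservoir models
    npv = np.asarray(npv_ensemble, dtype=float)
    return float(npv.mean() - risk_aversion * npv.var())
```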

  14. A structure-based model for the transport of passive scalars in homogeneous turbulent flows

    International Nuclear Information System (INIS)

    Panagiotou, C.F.; Kassinos, S.C.

    2016-01-01

Highlights: • The Interacting Particle Representation Model (IPRM) is extended for passive scalar transport. • We develop a structure-based set of scale equations for the scalar field and couple them to the IPRM. • The complete model is evaluated for several cases of homogeneous deformation with good results. • We outline steps for coupling the new scalar scales to the Algebraic Structure-Based Model (ASBM). - Abstract: A structure-based model has been constructed, for the first time, for the study of passive scalar transport in turbulent flows. The scalar variance and the large-scale scalar gradient variance are proposed as the two turbulence scales needed for closure of the scalar equations in the framework of the Interacting Particle Representation Model (IPRM). The scalar dissipation rate is modeled in terms of the scalar variance and the large-scale enstrophy of the velocity field. Model parameters are defined by matching the decay rates in freely isotropic turbulence. The model is validated for a large number of cases of deformation in both fixed and rotating frames, showing encouraging results. The model shows good agreement with DNS results for the case of pure shear flow in the presence of either transverse or streamwise mean scalar gradient, while it correctly predicts the presence of a direct cascade for the passive scalar variance in two-dimensional isotropic turbulence.

  15. Sustaining diversity in trait-based models of phytoplankton communities

    Directory of Open Access Journals (Sweden)

Agostino Merico

    2014-10-01

    Full Text Available It is well-established that when equilibrium is attained for two species competing for the same limiting resource in a stable, uniform environment, one species will eliminate the other due to competitive exclusion. While competitive exclusion is observed in laboratory experiments and ecological models, the phenomenon seems less common in nature, where static equilibrium is prevented by the fluctuating physical environment and by other factors that constantly change species abundances and the nature of competitive interactions. Trait-based models of phytoplankton communities appear to be useful tools for describing the evolution of large assemblages of species with aggregate group properties such as total biomass, mean trait, and trait variance, the latter representing the functional diversity of the community. Such an approach, however, is limited by the tendency of the trait variance to unrealistically decline to zero over time. This tendency to lose diversity, and therefore adaptive capacity, is typically solved by fixing the variance or by considering exogenous processes such as immigration. Exogenous processes, however, cannot explain the maintenance of adaptive capacity often observed in the closed environment of chemostat experiments. Here we present a new method to sustain diversity in adaptive trait-based models of phytoplankton communities based on a mechanism of trait diffusion through subsequent generations. Our modeling approach can therefore account for endogenous processes such as rapid evolution or transgenerational trait plasticity.

  16. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity for selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting

  17. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  18. A statistic to estimate the variance of the histogram-based mutual information estimator based on dependent pairs of observations

    NARCIS (Netherlands)

    Moddemeijer, R

    In the case of two signals with independent pairs of observations (x(n),y(n)) a statistic to estimate the variance of the histogram based mutual information estimator has been derived earlier. We present such a statistic for dependent pairs. To derive this statistic it is necessary to avail of a

  19. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long...... forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant....

  20. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel......-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility...... estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
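The realized variance, and a simple Bartlett-weighted kernel correction for serially correlated noise, can be sketched as follows (a generic kernel estimator for illustration, not necessarily the authors' exact specification):

```python
import numpy as np

def realized_variance(returns):
    # RV: sum of squared intraday returns
    r = np.asarray(returns, dtype=float)
    return float(np.sum(r ** 2))

def bartlett_kernel_rv(returns, bandwidth):
    # add Bartlett-weighted autocovariance terms to correct for serially
    # correlated microstructure noise; bandwidth = 0 recovers the plain RV
    r = np.asarray(returns, dtype=float)
    rv = np.sum(r ** 2)
    for h in range(1, bandwidth + 1):
        gamma_h = np.sum(r[h:] * r[:-h])                 # h-th autocovariance
        rv += 2.0 * (1.0 - h / (bandwidth + 1)) * gamma_h
    return float(rv)
```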

  1. Mixed model approaches for diallel analysis based on a bio-model.

    Science.gov (United States)

    Zhu, J; Weir, B S

    1996-12-01

A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.

  2. Variance Component Selection With Applications to Microbiome Taxonomic Data

    Directory of Open Access Journals (Sweden)

    Jing Zhai

    2018-03-01

Full Text Available High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Then taxonomic selection is achieved by the lasso (least absolute shrinkage and selection operator) penalty on variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods such as the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method using the high performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.

  3. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Directory of Open Access Journals (Sweden)

    Zdravko Kačič

    2009-01-01

    Full Text Available This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE. The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.

  4. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Science.gov (United States)

    Kos, Marko; Grašič, Matej; Kačič, Zdravko

    2009-12-01

    This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE). The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.
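Given the filter-bank energies that an MFCC front end already produces, the core of the VMFBE computation can be sketched as follows (a simplified reading of the feature, not the paper's exact recipe):

```python
import numpy as np

def vmfbe(band_energies):
    """band_energies: frames x bands array of filter-bank energies.
    Returns the mean over sub-bands of each band's energy variance
    across frames. Speech energy varies faster and more widely than
    music energy, so larger values point to speech."""
    E = np.asarray(band_energies, dtype=float)
    return float(E.var(axis=0).mean())
```

A speech/music decision then reduces to thresholding this value over a sliding window of frames.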

  5. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.

  6. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
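The endogeneity point can be made concrete with a textbook formula (not specific to this article): the global minimum-variance portfolio is determined entirely by the covariance matrix Σ, so any change in the input parameters moves the frontier itself.

```python
import numpy as np

def min_variance_weights(cov):
    # global minimum-variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
    cov = np.asarray(cov, dtype=float)
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()
```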

  7. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287

  8. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem

  9. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    Science.gov (United States)

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
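The residual-variance construction of the reserve proxy amounts to taking OLS residuals of memory scores after regressing out the covariates (a minimal sketch with a generic design matrix; the study's actual covariates are the demographic factors and brain volumes listed above):

```python
import numpy as np

def residual_reserve(memory, covariates):
    # regress memory on an intercept plus covariates (demographics and
    # brain measures); the residual is the cognitive-reserve proxy
    y = np.asarray(memory, dtype=float)
    Z = np.asarray(covariates, dtype=float)
    if Z.ndim == 1:
        Z = Z[:, None]
    X = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta
```

Repeating this at two time points and differencing the residuals gives the longitudinal change score the study analyzes.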

  10. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  11. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  12. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    We consider a tractable affine stochastic volatility model that generalizes the seminal Heston (1993) model by augmenting it with jumps in the instantaneous variance process. In this framework, we consider both realized variance options and VIX options, and we examine the impact of the distribution...... of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...

  13. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available, as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code.
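MCNP itself implements many schemes; as a language-agnostic illustration of the core idea, here is a minimal sketch of one standard variance reduction technique, importance sampling, applied to a tail probability. The function names and numbers are our own, not MCNP's.

```python
import math
import random

def estimate_tail(n, importance=False, seed=0):
    """Estimate p = P(X > 5) for X ~ Exp(1), either by analog sampling
    or by importance sampling from a stretched density Exp(0.2)."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        if importance:
            x = rng.expovariate(0.2)                 # biased source
            # score = indicator * (true pdf / sampling pdf)
            w = math.exp(-x) / (0.2 * math.exp(-0.2 * x)) if x > 5.0 else 0.0
        else:
            x = rng.expovariate(1.0)                 # analog source
            w = 1.0 if x > 5.0 else 0.0
        total += w
        total_sq += w * w
    mean = total / n
    var = total_sq / n - mean * mean                 # per-history variance
    return mean, var

analog_mean, analog_var = estimate_tail(20000)
biased_mean, biased_var = estimate_tail(20000, importance=True)
# Both target exp(-5) ~ 0.0067, but the importance-sampled estimate
# has a much smaller per-history variance.
```

The biased game preserves the expected score (each history is weighted by the ratio of true to sampling density) while concentrating samples where they matter, which is the common thread through most variance reduction methods.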

  14. Studying Variance in the Galactic Ultra-compact Binary Population

    Science.gov (United States)

    Larson, Shane; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  15. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
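As a concrete reference for the quantity under discussion, here is a minimal sketch of the non-overlapping Allan variance for fractional-frequency samples; the function name and the white-noise check are ours.

```python
import random

def allan_variance(y, m=1):
    """Non-overlapping Allan variance at averaging factor m:
    AVAR(m) = 0.5 * mean of (ybar_{k+1} - ybar_k)**2, where ybar_k are
    block averages of m consecutive fractional-frequency samples."""
    nblocks = len(y) // m
    ybar = [sum(y[i * m:(i + 1) * m]) / m for i in range(nblocks)]
    diffs = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(nblocks - 1)]
    return 0.5 * sum(diffs) / len(diffs)

# For white frequency noise, AVAR(m) falls off roughly as 1/m.
rng = random.Random(1)
y = [rng.gauss(0.0, 1.0) for _ in range(100000)]
a1 = allan_variance(y, m=1)    # close to the sample variance, ~1.0
a10 = allan_variance(y, m=10)  # ~0.1
```

Because AVAR is built from first differences of block averages, different spectra can produce the same AVAR curve, which is the ambiguity the paper characterizes.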

  16. A Visual Model for the Variance and Standard Deviation

    Science.gov (United States)

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
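The paper's graphical idea translates directly into arithmetic: each squared deviation is the area of a square, and the standard deviation is the side length of the average square. A short sketch with our own example numbers:

```python
def variance_as_average_square(data):
    """Treat each squared deviation as the area of a square; the
    (population) variance is then the area of the 'average square' and
    the standard deviation is that square's side length."""
    n = len(data)
    mean = sum(data) / n
    square_areas = [(x - mean) ** 2 for x in data]   # one square per point
    variance = sum(square_areas) / n                 # area of the average square
    std_dev = variance ** 0.5                        # its side length
    return variance, std_dev

var, sd = variance_as_average_square([2, 4, 4, 4, 5, 5, 7, 9])
# var -> 4.0, sd -> 2.0 for this data set
```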

  17. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  18. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity

  19. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....

  20. Isolating Trait and Method Variance in the Measurement of Callous and Unemotional Traits.

    Science.gov (United States)

    Paiva-Salisbury, Melissa L; Gill, Andrew D; Stickle, Timothy R

    2017-09-01

    To examine hypothesized influence of method variance from negatively keyed items in measurement of callous-unemotional (CU) traits, nine a priori confirmatory factor analysis model comparisons of the Inventory of Callous-Unemotional Traits were evaluated on multiple fit indices and theoretical coherence. Tested models included a unidimensional model, a three-factor model, a three-bifactor model, an item response theory-shortened model, two item-parceled models, and three correlated trait-correlated method minus one models (unidimensional, correlated three-factor, and bifactor). Data were self-reports of 234 adolescents (191 juvenile offenders, 43 high school students; 63% male; ages 11-17 years). Consistent with hypotheses, models accounting for method variance substantially improved fit to the data. Additionally, bifactor models with a general CU factor better fit the data compared with correlated factor models, suggesting a general CU factor is important to understanding the construct of CU traits. Future Inventory of Callous-Unemotional Traits analyses should account for method variance from item keying and response bias to isolate trait variance.

  1. On Mean-Variance Hedging of Bond Options with Stochastic Risk Premium Factor

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Kumar, Suresh K.

    2014-01-01

We consider the mean-variance hedging problem for pricing bond options using the yield curve as the observation. The model considered contains infinite-dimensional noise sources with the stochastically-varying risk premium. Hence our model is incomplete. We consider mean-variance hedging under the

  2. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  3. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    Science.gov (United States)

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.

  4. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at the previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by the minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  5. Low Variance Couplings for Stochastic Models of Intracellular Processes with Time-Dependent Rate Functions.

    Science.gov (United States)

    Anderson, David F; Yuan, Chaojie

    2018-04-18

    A number of coupling strategies are presented for stochastically modeled biochemical processes with time-dependent parameters. In particular, the stacked coupling is introduced and is shown via a number of examples to provide an exceptionally low variance between the generated paths. This coupling will be useful in the numerical computation of parametric sensitivities and the fast estimation of expectations via multilevel Monte Carlo methods. We provide the requisite estimators in both cases.
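The stacked coupling itself is paper-specific, but the underlying idea — driving two parameterizations with shared randomness so their difference has low variance — can be sketched with the simplest coupling, common random numbers, on a Poisson birth process. All names and numbers here are illustrative, not the authors' construction.

```python
import random

def birth_count(rate, t_end, rng):
    """Count events of a constant-rate Poisson process up to t_end."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            return n
        n += 1

def fd_sensitivity(rate, h, reps, coupled):
    """Finite-difference estimate of d E[N(1)] / d rate, using either
    independent or common-random-number (coupled) path pairs."""
    diffs = []
    for i in range(reps):
        if coupled:
            rng1, rng2 = random.Random(i), random.Random(i)        # shared stream
        else:
            rng1, rng2 = random.Random(2 * i), random.Random(2 * i + 1)
        diffs.append((birth_count(rate + h, 1.0, rng2)
                      - birth_count(rate, 1.0, rng1)) / h)
    m = sum(diffs) / reps
    v = sum((d - m) ** 2 for d in diffs) / (reps - 1)
    return m, v

m_coupled, v_coupled = fd_sensitivity(10.0, 0.5, 2000, coupled=True)
m_indep, v_indep = fd_sensitivity(10.0, 0.5, 2000, coupled=False)
# Both means estimate the true sensitivity 1.0; the coupled variance is
# dramatically smaller.
```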

  6. Mean – Variance parametric Model for the Classification based on Cries of Babies

    OpenAIRE

    Khalid Nazim S. A; Dr. M.B Sanjay Pande

    2010-01-01

Cry is a feature that prompts an individual to take care of the infant that initiated it. It is equally well understood that a cry prompts a person to take certain steps. In the present work, we have tried to implement a mathematical model that can classify a cry into its cluster or group based on certain parameters, according to which a cry is classified as normal or abnormal. To corroborate the methodology, we took 17 distinguished features of cry. The implemented mathematical m...

  7. Coupled bias-variance tradeoff for cross-pose face recognition.

    Science.gov (United States)

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. We then propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be more stable against a pose difference. With this basic idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance.

  8. A Random Parameter Model for Continuous-Time Mean-Variance Asset-Liability Management

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2015-01-01

We consider a continuous-time mean-variance asset-liability management problem in a market with random market parameters; that is, interest rate, appreciation rates, and volatility rates are considered to be stochastic processes. By using the theories of stochastic linear-quadratic (LQ) optimal control and backward stochastic differential equations (BSDEs), we tackle this problem and derive optimal investment strategies as well as the mean-variance efficient frontier analytically in terms of the solution of BSDEs. We find that the efficient frontier is still a parabola in a market with random parameters. Comparing with the existing results, we also find that the liability does not affect the feasibility of the mean-variance portfolio selection problem. However, in an incomplete market with random parameters, the liability cannot be fully hedged.

  9. Mean-Variance portfolio optimization by using non constant mean and volatility based on the negative exponential utility function

    Science.gov (United States)

    Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat

    2017-03-01

In stock investments, investors also face risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance optimization of an investment portfolio of stocks using a non-constant mean and volatility, based on the Negative Exponential Utility Function. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyze some stocks in Indonesia. The expected result is the proportion of investment in each stock analyzed.
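The Lagrangian step in the abstract can be made concrete: minimize portfolio variance w'Σw subject to a target return and full investment. Below is a small self-contained sketch with made-up inputs; in practice μ and Σ would come from the ARMA/GARCH estimates.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def mean_variance_weights(mu, Sigma, target):
    """Solve the KKT system of: minimize w'Sigma w subject to
    w'mu = target and sum(w) = 1 (two Lagrange multipliers)."""
    n = len(mu)
    A = [[0.0] * (n + 2) for _ in range(n + 2)]
    b = [0.0] * (n + 2)
    for i in range(n):
        for j in range(n):
            A[i][j] = 2.0 * Sigma[i][j]
        A[i][n] = -mu[i]          # multiplier on the return constraint
        A[i][n + 1] = -1.0        # multiplier on the budget constraint
        A[n][i] = mu[i]
        A[n + 1][i] = 1.0
    b[n], b[n + 1] = target, 1.0
    return solve(A, b)[:n]

# Hypothetical monthly means and covariances for three stocks
mu = [0.010, 0.015, 0.012]
Sigma = [[0.040, 0.006, 0.004],
         [0.006, 0.050, 0.005],
         [0.004, 0.005, 0.030]]
w = mean_variance_weights(mu, Sigma, target=0.012)
```

The resulting weights satisfy both constraints exactly; sweeping `target` traces out the efficient frontier.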

  10. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

We consider a continuous-time mean-variance portfolio selection model where the stock price follows a constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.
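As a quick way to experiment with the elasticity parameter, here is a minimal Euler discretization of the CEV dynamics dS = μS dt + σS^β dW. This is our own sketch with illustrative parameters; the paper works with the continuous-time control problem analytically, not by simulation.

```python
import math
import random

def simulate_cev(s0, mu, sigma, beta, t_end=1.0, steps=250, rng=None):
    """Euler scheme for the CEV process dS = mu*S dt + sigma*S**beta dW.
    beta = 1 recovers geometric Brownian motion."""
    rng = rng or random.Random(0)
    dt = t_end / steps
    s = s0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s += mu * s * dt + sigma * s ** beta * dw
        s = max(s, 1e-12)            # keep S**beta well-defined near zero
    return s

rng = random.Random(42)
terminal = [simulate_cev(100.0, 0.05, 0.2, 0.5, rng=rng) for _ in range(2000)]
mean_terminal = sum(terminal) / len(terminal)
# E[S_T] = s0 * exp(mu * T), roughly 105.1 for these parameters
```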

  11. On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.

    Science.gov (United States)

    Koyama, Shinsuke

    2015-07-01

    We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
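The core assumption — the variance of interspike intervals scaling as a power of the mean — is easy to reproduce numerically. A sketch using gamma-distributed ISIs and a log-log least-squares fit; the parameter values are illustrative, not from the paper.

```python
import math
import random

def power_relation_fit(means, scale, exponent, n=20000, seed=0):
    """Simulate interspike intervals whose variance follows
    var = scale * mean**exponent (gamma-distributed ISIs), then recover
    the exponent as the slope of a log-log least-squares fit."""
    rng = random.Random(seed)
    log_m, log_v = [], []
    for mu in means:
        var = scale * mu ** exponent
        shape, theta = mu * mu / var, var / mu   # gamma matching mean/variance
        xs = [rng.gammavariate(shape, theta) for _ in range(n)]
        m = sum(xs) / n
        v = sum((x - m) ** 2 for x in xs) / (n - 1)
        log_m.append(math.log(m))
        log_v.append(math.log(v))
    mbar = sum(log_m) / len(log_m)
    vbar = sum(log_v) / len(log_v)
    num = sum((a - mbar) * (b - vbar) for a, b in zip(log_m, log_v))
    den = sum((a - mbar) ** 2 for a in log_m)
    return num / den

slope = power_relation_fit([0.05, 0.1, 0.2, 0.4], scale=0.5, exponent=1.5)
# slope recovers the generating exponent, ~1.5
```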

  12. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have a lower mean and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  13. Markov switching mean-variance frontier dynamics: theory and international evidence

    OpenAIRE

    M. Guidolin; F. Ria

    2010-01-01

    It is well-known that regime switching models are able to capture the presence of rich non-linear patterns in the joint distribution of asset returns. After reviewing key concepts and technical issues related to specifying, estimating, and using multivariate Markov switching models in financial applications, in this paper we map the presence of regimes in means, variances, and covariances of asset returns into explicit dynamics of the Markowitz mean-variance frontier. In particular, we show b...

  14. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.) [de
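The variance allocation described above can be sketched with a balanced one-way random-effects ANOVA: between-laboratory and within-assay components estimated by the method of moments. The simulated QC data and all numbers are our own illustration.

```python
import random

def variance_components(groups):
    """Balanced one-way random-effects ANOVA: split total variability into
    a between-laboratory component and a within-assay (residual) component."""
    k, n = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (k * n)
    lab_means = [sum(g) / n for g in groups]
    ms_between = n * sum((m - grand) ** 2 for m in lab_means) / (k - 1)
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, lab_means) for x in g) / (k * (n - 1))
    sigma2_within = ms_within
    sigma2_between = max((ms_between - ms_within) / n, 0.0)  # method of moments
    return sigma2_between, sigma2_within

# Simulated QC data: 100 labs, 10 assays each, between-lab sd 2, residual sd 1
rng = random.Random(3)
groups = [[rng.gauss(lab, 1.0) for _ in range(10)]
          for lab in (rng.gauss(0.0, 2.0) for _ in range(100))]
s2_between, s2_within = variance_components(groups)
# estimates recover roughly (4, 1), the generating variances
```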

  15. The role of respondents’ comfort for variance in stated choice surveys

    DEFF Research Database (Denmark)

    Emang, Diana; Lundhede, Thomas; Thorsen, Bo Jellesmark

    2017-01-01

Preference elicitation among outdoor recreational users is subject to measurement errors that depend, in part, on survey planning. This study uses data from a choice experiment survey on recreational SCUBA diving to investigate whether self-reported information on respondents' comfort when they complete surveys correlates with the error variance in stated choice models of their responses. Comfort-related variables are included in the scale functions of the scaled multinomial logit models. The hypothesis was that higher comfort reduces error variance in answers, as revealed by a higher scale parameter and vice versa. Information on, e.g., sleep and time since eating (higher comfort) correlated with scale heterogeneity, and produced lower error variance when controlled for in the model. That respondents' comfort may influence choice behavior suggests that knowledge of the respondents' activity...

  16. The variance of the model representation of nuclear power generation and its implication to the climate change mitigation assessment

    International Nuclear Information System (INIS)

    Wada, Kenichi; Sano, Fuminori; Oshima, Kanji; Akimoto, Keigo

    2013-01-01

Nuclear power secures an affordable carbon-free energy supply, but entails various risks and constraints, such as safety concerns, waste disposal protest campaigns, and proliferation. Given these characteristics of nuclear power generation, there is a wide range of variation in the representation of nuclear power technologies across models. In this paper, we explore the variance of the model representation of nuclear power generation and its implication for the climate change mitigation assessment, based on the EMF27 study. The most common result is that under efforts to mitigate climate change more nuclear energy use is needed. We find, however, that perspectives on the contribution of nuclear energy to global energy needs vary tremendously among the modeling teams. This diversity mainly comes from differences in the level of detail that characterizes nuclear energy technologies and the broad range of nuclear contributions in the long-term scenarios of global energy use. (author)

  17. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    Science.gov (United States)

    Jurczyk, Jan; Eckrot, Alexander; Morgenstern, Ingo

    2016-01-01

The world is still recovering from the financial crisis peaking in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.

  18. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light...

  19. Modelling temporal variance of component temperatures and directional anisotropy over vegetated canopy

    Science.gov (United States)

    Bian, Zunjian; du, yongming; li, hua

    2016-04-01

Land surface temperature (LST), as a key variable, plays an important role in hydrological, meteorological and climatological studies. Thermal infrared directional anisotropy is one of the essential factors in LST retrieval and its application to longwave radiance estimation. Many approaches have been proposed to estimate directional brightness temperatures (DBT) over natural and urban surfaces. However, fewer efforts focus on 3-D scenes, and the surface component temperatures used in DBT models are quite difficult to acquire. Therefore, a combined 3-D model of TRGM (Thermal-region Radiosity-Graphics combined Model) and an energy balance method is proposed in this paper in an attempt to synchronously simulate component temperatures and DBT in a row-planted canopy. The surface thermodynamic equilibrium can be finally determined by the iteration strategy of TRGM and the energy balance method. The combined model was validated against top-of-canopy DBTs from airborne observations. The results indicated that the proposed model performs well in the simulation of directional anisotropy, especially the hotspot effect. Although we find that the model overestimates the DBT with a bias of 1.2 K, it can serve as a data reference for studying the temporal variance of component temperatures and DBTs when field measurement is inaccessible.

  20. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow-scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model, where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk.

  1. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept as measure of the gain that can be obtained from a specific cross accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion compared to selection based on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, like selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviation compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
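The usefulness criterion has a simple closed form, UC = μ + i·σ, with i the selection intensity for the selected fraction. A minimal sketch follows; the crosses and numbers are hypothetical, and the paper's actual contribution — improved prediction of the progeny variance σ² from marker effects — is not reproduced here.

```python
import math
from statistics import NormalDist

def selection_intensity(p):
    """Selection intensity i = phi(z) / p for selected fraction p,
    with z the (1 - p) quantile of the standard normal."""
    z = NormalDist().inv_cdf(1.0 - p)
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi) / p

def usefulness(progeny_mean, progeny_var, selected_fraction):
    """Usefulness criterion UC = mu + i * sigma: the expected mean of the
    selected top fraction of a cross's progeny."""
    return (progeny_mean
            + selection_intensity(selected_fraction) * math.sqrt(progeny_var))

# Cross B has the lower mean but the larger progeny variance, so under
# strong selection (top 1%) it is the more useful cross.
uc_a = usefulness(10.0, 1.0, 0.01)
uc_b = usefulness(9.5, 4.0, 0.01)
```

This captures the abstract's point: ranking crosses by UC rather than by mean breeding value alone rewards crosses with large progeny variance.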

  2. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  3. Eigenspace-Based Minimum Variance Adaptive Beamformer Combined with Delay Multiply and Sum: Experimental Study

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2017-01-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to a low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the incapabilities of DAS, providing a higher image quality. However, the resolution improvement is not well enough compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra...

  4. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  5. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  6. Variance of the number of tumors in a model for the induction of osteosarcoma by alpha radiation

    International Nuclear Information System (INIS)

    Groer, P.G.; Marshall, J.H.

    1976-01-01

    An earlier report on a model for the induction of osteosarcoma by alpha radiation gave differential equations for the mean numbers of normal, transformed, and malignant cells. In this report we show that for a constant dose rate the variance of the number of cells at each stage and time is equal to the corresponding mean, so the numbers of tumors predicted by the model have a Poisson distribution about their mean values
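
    The variance-equals-mean property noted above is the defining signature of a Poisson distribution and is easy to check numerically. The sketch below is illustrative only (a constant-rate simulation, not the authors' cell-stage model):

```python
import numpy as np

rng = np.random.default_rng(0)

# If tumor-initiating events occur independently at a fixed rate (constant
# dose rate), counts are Poisson distributed and the variance equals the
# mean -- the property stated in the abstract.
rate = 4.2                                 # hypothetical mean number of tumors
counts = rng.poisson(rate, size=200_000)

print(counts.mean())   # ≈ 4.2
print(counts.var())    # ≈ 4.2 as well: variance equals the mean
```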

  7. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Blakeman, Edward D [ORNL; Peplow, Douglas E. [ORNL; Wagner, John C [ORNL; Murphy, Brian D [ORNL; Mueller, Don [ORNL

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  8. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    International Nuclear Information System (INIS)

    Blakeman, Edward D.; Peplow, Douglas E.; Wagner, John C.; Murphy, Brian D.; Mueller, Don

    2007-01-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts

  9. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
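
    As a generic illustration of a variance reduction technique (not taken from this paper), the sketch below uses importance sampling to estimate the tail probability P(X > 4) for a standard normal X, a setting where naive Monte Carlo wastes almost every sample:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100_000, 4.0   # sample size and tail threshold

# Naive Monte Carlo: indicator of {X > 4}; almost no samples land in the tail.
x = rng.normal(size=n)
naive = (x > m).astype(float)

# Importance sampling: draw from N(m, 1), which centers samples on the tail,
# and reweight by the density ratio phi(y) / phi(y - m) = exp(-m*y + m^2/2).
y = rng.normal(loc=m, size=n)
w = np.exp(-m * y + m**2 / 2.0)
isamp = (y > m) * w

# Both estimators target P(X > 4) ≈ 3.17e-5, but the importance-sampling
# estimator concentrates its samples where they matter, so its variance is
# orders of magnitude smaller.
print(naive.mean(), isamp.mean())
print(naive.var(), isamp.var())
```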

  10. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  11. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  12. Visual Basic, Excel-based fish population modeling tool - The pallid sturgeon example

    Science.gov (United States)

    Moran, Edward H.; Wildhaber, Mark L.; Green, Nicholas S.; Albers, Janice L.

    2016-02-10

    The model presented in this report is a spreadsheet-based model using Visual Basic for Applications within Microsoft Excel (http://dx.doi.org/10.5066/F7057D0Z) prepared in cooperation with the U.S. Army Corps of Engineers and U.S. Fish and Wildlife Service. It uses the same model structure and, initially, parameters as used by Wildhaber and others (2015) for pallid sturgeon. The difference between the model structure used for this report and that used by Wildhaber and others (2015) is that variance is not partitioned. For the model of this report, all variance is applied at the iteration and time-step levels of the model. Wildhaber and others (2015) partition variance into parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level and temporal variance (uncertainty caused by random environmental fluctuations with time) applied at the time-step level. They included implicit individual variance (uncertainty caused by differences between individuals) within the time-step level. The interface developed for the model of this report is designed to allow the user the flexibility to change population model structure and parameter values and uncertainty separately for every component of the model. This flexibility makes the modeling tool potentially applicable to any fish species; however, the flexibility inherent in this modeling tool makes it possible for the user to obtain spurious outputs. The value and reliability of the model outputs are only as good as the model inputs. Using this modeling tool with improper or inaccurate parameter values, or for species for which the structure of the model is inappropriate, could lead to untenable management decisions. By facilitating fish population modeling, this modeling tool allows the user to evaluate a range of management options and implications. The goal of this modeling tool is to be a user-friendly modeling tool for developing fish population models useful to natural resource

  13. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation''. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, ''Studentizing'' or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean
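
    The between-laboratory and within-assay components described above can be estimated with the classical one-way random-effects ANOVA method of moments. The sketch below uses simulated QC data with made-up variance components (it does not reproduce the paper's RIA analysis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical multi-laboratory QC data: k labs, n replicate assays each,
# simulated with between-lab SD 2.0 (variance 4) and within-lab SD 1.0.
# A large k keeps the between-lab estimate stable for this demonstration.
k, n = 1000, 10
lab_effects = rng.normal(0.0, 2.0, size=k)
data = 50.0 + lab_effects[:, None] + rng.normal(0.0, 1.0, size=(k, n))

# One-way random-effects ANOVA, method-of-moments estimates:
# E[MS_between] = sigma2_within + n * sigma2_between, E[MS_within] = sigma2_within.
lab_means = data.mean(axis=1)
ms_between = n * ((lab_means - data.mean()) ** 2).sum() / (k - 1)
ms_within = ((data - lab_means[:, None]) ** 2).sum() / (k * (n - 1))

sigma2_within = ms_within                       # residual (within-assay) component
sigma2_between = (ms_between - ms_within) / n   # between-laboratory component

print(sigma2_between, sigma2_within)  # close to the simulated 4.0 and 1.0
```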

  14. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  15. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    Directory of Open Access Journals (Sweden)

    Jan Jurczyk

    Full Text Available The world is still recovering from the financial crisis peaking in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.
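
    The ground states of the mean-variance model referred to above are efficient portfolios; the simplest member of that family is the global minimum-variance portfolio, whose weights have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch with a toy covariance matrix (not the paper's 37 US indices or its real-world constraints):

```python
import numpy as np

# Toy covariance matrix for three hypothetical assets.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

ones = np.ones(cov.shape[0])
inv = np.linalg.inv(cov)

# Global minimum-variance portfolio: w = inv(cov) @ 1 / (1' inv(cov) 1).
w = inv @ ones / (ones @ inv @ ones)

print(w, w.sum())    # fully invested: weights sum to 1
print(w @ cov @ w)   # portfolio variance, no larger than any single asset's
```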

  16. Note on an Identity Between Two Unbiased Variance Estimators for the Grand Mean in a Simple Random Effects Model.

    Science.gov (United States)

    Levin, Bruce; Leu, Cheng-Shiun

    2013-01-01

    We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.

  17. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA

    OpenAIRE

    Huang, G; Nix, AR; Armour, SMD

    2010-01-01

    Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel estimate and noise variance estimate are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...

  18. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed in an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multi period mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  19. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

  20. Parametric study and global sensitivity analysis for co-pyrolysis of rape straw and waste tire via variance-based decomposition.

    Science.gov (United States)

    Xu, Li; Jiang, Yong; Qiu, Rong

    2018-01-01

    In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R²-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at a 95% confidence interval; F-tests, lack-of-fit tests and normal probability plots of the residuals implied the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
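
    Sobol' first-order indices like those reported above can be estimated with a pick-freeze (Saltelli-type) scheme: two independent sample matrices A and B, plus hybrid matrices that take one column from B. The sketch below uses a simple additive stand-in function, not the fitted pyrolysis regression models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical additive test function standing in for a response surface:
# f(x1, x2) = x1 + 2*x2 with independent inputs uniform on [0, 1].
# Analytically Var = 1/12 + 4/12, so S1 = 0.2 and S2 = 0.8.
def f(x):
    return x[:, 0] + 2.0 * x[:, 1]

n, d = 200_000, 2
A = rng.random((n, d))
B = rng.random((n, d))
fA, fB = f(A), f(B)
var = np.concatenate([fA, fB]).var()

# Pick-freeze estimator of the first-order index for each input i:
# S_i = E[ f(B) * (f(AB_i) - f(A)) ] / Var(f), where AB_i is A with
# column i replaced by the corresponding column of B.
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (f(ABi) - fA)) / var)

print(S)  # analytically S1 = 0.2, S2 = 0.8 for this additive function
```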

  1. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    Science.gov (United States)

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.

  2. The link between response time and preference, variance and processing heterogeneity in stated choice experiments

    DEFF Research Database (Denmark)

    Campbell, Danny; Mørkbak, Morten Raun; Olsen, Søren Bøye

    2018-01-01

    In this article we utilize the time respondents require to answer a self-administered online stated preference survey. While the effects of response time have been previously explored, this article proposes a different approach that explicitly recognizes the highly equivocal relationship between response time and respondents' choices. In particular, we attempt to disentangle preference, variance and processing heterogeneity and explore whether response time helps to explain these three types of heterogeneity. For this, we divide the data (ordered by response time) into approximately equal-sized subsets, and then derive different class membership probabilities for each subset. We estimate a large number of candidate models and subsequently conduct a frequentist-based model averaging approach using information criteria to derive weights of evidence for each model. Our findings show a clear link between response time and utility coefficients, error variance and processing strategies. Our results thus emphasize the importance of considering response time when modeling stated choice data.

  3. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

    Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches including incorporation of the system response function in the reconstruction have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach. An estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within uniform background. The optimal sphere activity to variance ratio was obtained when the deconvolution function was replaced by a step function few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for

  4. Origin and consequences of the relationship between protein mean and variance.

    Science.gov (United States)

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.
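
    A power-law exponent relating variance to mean, like the one above, can be recovered as the slope of an ordinary least-squares fit on log-transformed data. A minimal sketch with simulated (not measured) expression levels:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate proteins whose variance follows a power law of the mean,
# sigma^2 = c * mu^1.6, with lognormal scatter around the trend
# (illustrative only, not the authors' stochastic model).
mu = 10 ** rng.uniform(0, 4, size=2000)                        # mean levels
var = 0.5 * mu ** 1.6 * np.exp(rng.normal(0, 0.2, size=mu.size))

# The exponent is the slope of the log-log regression.
slope, intercept = np.polyfit(np.log10(mu), np.log10(var), 1)
print(slope)  # ≈ 1.6
```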

  5. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  6. Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations

    International Nuclear Information System (INIS)

    Dumonteil, E.; Malvagi, F.

    2012-01-01

    The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWRs or large nuclear cores, the equilibrium is never found, or at least may take time to reach, and the variance estimation as allowed by the CLT is under-evaluated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimation. These two methods are based on Fourier spectral decomposition and on the lag-k autocorrelation calculation. A theoretical modeling of the autocorrelation function, based on Gauss-Markov stochastic processes, will also be presented. Tests will be performed with Tripoli4 on a PWR pin cell. (authors)

  7. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
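
    The recursive and direct strategies contrasted above can be made concrete with an AR(1) example: recursive forecasting iterates a fitted one-step coefficient h times, while direct forecasting regresses y_{t+h} on y_t in one shot. The sketch below is a toy illustration, not the paper's experimental design:

```python
import numpy as np

rng = np.random.default_rng(5)

# AR(1) series y_t = 0.9 * y_{t-1} + e_t as a stand-in data-generating
# process (the paper treats general strategies and horizons).
n, phi = 5000, 0.9
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]

h = 2
# Recursive strategy: fit a one-step coefficient by OLS, then iterate it.
phi1 = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
recursive_coef = phi1 ** h

# Direct strategy: regress y_{t+h} directly on y_t.
direct_coef = np.sum(y[h:] * y[:-h]) / np.sum(y[:-h] ** 2)

print(recursive_coef, direct_coef)  # both ≈ phi**2 = 0.81 for a true AR(1)
```

For a correctly specified AR(1) both strategies target the same coefficient; their bias and variance diverge under model misspecification, which is the regime the paper analyzes.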

  8. Impact of time-inhomogeneous jumps and leverage type effects on returns and realised variances

    DEFF Research Database (Denmark)

    Veraart, Almut

    This paper studies the effect of time-inhomogeneous jumps and leverage type effects on realised variance calculations when the logarithmic asset price is given by a Lévy-driven stochastic volatility model. In such a model, the realised variance is an inconsistent estimator of the integrated...

  9. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    Science.gov (United States)

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  10. Minimal variance hedging of natural gas derivatives in exponential Lévy models: Theory and empirical performance

    International Nuclear Information System (INIS)

    Ewald, Christian-Oliver; Nawar, Roy; Siu, Tak Kuen

    2013-01-01

We consider the problem of hedging European options written on natural gas futures, in a market where prices of traded assets exhibit jumps, by trading in the underlying asset. We provide a general expression for the hedging strategy which minimizes the variance of the terminal hedging error, in terms of stochastic integral representations of the payoffs of the options involved. This formula is then applied to compute hedge ratios for common options in various models with jumps, leading to easily computable expressions. As a benchmark we take the standard Black–Scholes and Merton delta hedges. We show that in natural gas option markets minimal variance hedging with the underlying consistently outperforms the benchmarks by quite a margin. - Highlights: ► We derive hedging strategies for European-type options written on natural gas futures. ► These are tested empirically using Henry Hub natural gas futures and options data. ► We find that our hedges systematically outperform classical benchmarks.
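In a simple one-period discretization, the variance-minimizing hedge ratio reduces to the regression coefficient of option value changes on futures price changes, h* = Cov(dV, dS)/Var(dS). The sketch below illustrates this on simulated data; the factor loading 0.6 and the noise levels are illustrative assumptions, not values from the paper's Lévy models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated one-period changes in the option value (dV) and the futures
# price (dS); the 0.6 loading and noise level are made-up illustration values.
n = 10_000
dS = rng.normal(0.0, 1.0, n)
dV = 0.6 * dS + rng.normal(0.0, 0.5, n)

# Minimal-variance hedge ratio: h* = Cov(dV, dS) / Var(dS).
h = np.cov(dV, dS)[0, 1] / np.var(dS, ddof=1)

# The variance of the hedging error dV - h*dS is minimized at h = h*,
# and is necessarily below the unhedged variance of dV.
err_var = np.var(dV - h * dS, ddof=1)
```

The same regression-coefficient formula is what a delta hedge approximates when price changes are small and continuous; with jumps the two generally differ.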

  11. The Risk Return Relationship: Evidence from Index Return and Realised Variance Series

    OpenAIRE

    Minxian Yang

    2014-01-01

    The risk return relationship is analysed in bivariate models for return and realised variance(RV) series. Based on daily time series from 21 international market indices for more than 13 years (January 2000 to February 2013), the empirical findings support the arguments of risk return tradeoff, volatility feedback and statistical balance. It is reasoned that the empirical risk return relationship is primarily shaped by two important data features: the negative contemporaneous correlation betw...

  12. Comparison of Global Distributions of Zonal-Mean Gravity Wave Variance Inferred from Different Satellite Instruments

    Science.gov (United States)

    Preusse, Peter; Eckermann, Stephen D.; Offermann, Dirk; Jackman, Charles H. (Technical Monitor)

    2000-01-01

    Gravity wave temperature fluctuations acquired by the CRISTA instrument are compared to previous estimates of zonal-mean gravity wave temperature variance inferred from the LIMS, MLS and GPS/MET satellite instruments during northern winter. Careful attention is paid to the range of vertical wavelengths resolved by each instrument. Good agreement between CRISTA data and previously published results from LIMS, MLS and GPS/MET are found. Key latitudinal features in these variances are consistent with previous findings from ground-based measurements and some simple models. We conclude that all four satellite instruments provide reliable global data on zonal-mean gravity wave temperature fluctuations throughout the middle atmosphere.

  13. Cumulative Prospect Theory, Option Returns, and the Variance Premium

    NARCIS (Netherlands)

    Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver

    The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the

  14. Genetic and environmental variances of bone microarchitecture and bone remodeling markers: a twin study.

    Science.gov (United States)

    Bjørnerem, Åshild; Bui, Minh; Wang, Xiaofang; Ghasem-Zadeh, Ali; Hopper, John L; Zebaze, Roger; Seeman, Ego

    2015-03-01

    All genetic and environmental factors contributing to differences in bone structure between individuals mediate their effects through the final common cellular pathway of bone modeling and remodeling. We hypothesized that genetic factors account for most of the population variance of cortical and trabecular microstructure, in particular intracortical porosity and medullary size - void volumes (porosity), which establish the internal bone surface areas or interfaces upon which modeling and remodeling deposit or remove bone to configure bone microarchitecture. Microarchitecture of the distal tibia and distal radius and remodeling markers were measured for 95 monozygotic (MZ) and 66 dizygotic (DZ) white female twin pairs aged 40 to 61 years. Images obtained using high-resolution peripheral quantitative computed tomography were analyzed using StrAx1.0, a nonthreshold-based software that quantifies cortical matrix and porosity. Genetic and environmental components of variance were estimated under the assumptions of the classic twin model. The data were consistent with the proportion of variance accounted for by genetic factors being: 72% to 81% (standard errors ∼18%) for the distal tibial total, cortical, and medullary cross-sectional area (CSA); 67% and 61% for total cortical porosity, before and after adjusting for total CSA, respectively; 51% for trabecular volumetric bone mineral density (vBMD; all p accounted for 47% to 68% of the variance (all p ≤ 0.001). Cross-twin cross-trait correlations between tibial cortical porosity and medullary CSA were higher for MZ (rMZ  = 0.49) than DZ (rDZ  = 0.27) pairs before (p = 0.024), but not after (p = 0.258), adjusting for total CSA. For the remodeling markers, the data were consistent with genetic factors accounting for 55% to 62% of the variance. We infer that middle-aged women differ in their bone microarchitecture and remodeling markers more because of differences in their genetic factors than

  15. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  16. Regime shifts in mean-variance efficient frontiers: some international evidence

    OpenAIRE

    Massimo Guidolin; Federica Ria

    2010-01-01

    Regime switching models have been assuming a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity po...

  17. Some asymptotic theory for variance function smoothing | Kibua ...

    African Journals Online (AJOL)

    Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. > East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22 ...

  18. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existing mean heterogeneity tests and variance heterogeneity tests. Based on the independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did the existing mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene expression profiles of peripheral circulating B cells raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
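The proved null independence of mean and variance heterogeneity tests is exactly what licenses a Fisher-type combination of their p-values. The sketch below combines a Welch t test with Levene's test in that way; it illustrates the combination principle only, and is not necessarily the authors' IMVT statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two expression samples differing in both mean and variance.
x = rng.normal(0.0, 1.0, 200)
y = rng.normal(0.8, 2.0, 200)

# Mean heterogeneity: Welch t test (does not assume equal variances).
p_mean = stats.ttest_ind(x, y, equal_var=False).pvalue

# Variance heterogeneity: Levene's test.
p_var = stats.levene(x, y).pvalue

# Because the two tests are independent under the null, Fisher's method
# applies: -2 * (ln p_mean + ln p_var) ~ chi-square with 4 degrees of freedom.
fisher_stat = -2.0 * (np.log(p_mean) + np.log(p_var))
p_combined = stats.chi2.sf(fisher_stat, df=4)
```

A gene shifted in mean only, or in variance only, still yields a small combined p-value, which is what makes the integrative test sensitive to both kinds of heterogeneity.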

  19. Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2015-05-01

Segmentation, which is usually the first step in object-based image analysis (OBIA), greatly influences the quality of final OBIA results. In many existing multi-scale segmentation algorithms, a common problem is that under-segmentation and over-segmentation always coexist at any scale. To address this issue, we propose a new method that integrates the newly developed constrained spectral variance difference (CSVD) and the edge penalty (EP). First, initial segments are produced by a fast scan. Second, the generated segments are merged via a global mutual best-fitting strategy using the CSVD and EP as merging criteria. Finally, very small objects are merged with their nearest neighbors to eliminate the remaining noise. A series of experiments based on three sets of remote sensing images, each with different spatial resolutions, were conducted to evaluate the effectiveness of the proposed method. Both visual and quantitative assessments were performed, and the results show that large objects were better preserved as integral entities while small objects were also still effectively delineated. The results were also found to be superior to those from eCognition's multi-scale segmentation.

  20. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

Monika Fleischhauer

    2013-09-01

Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  1. EDOVE: Energy and Depth Variance-Based Opportunistic Void Avoidance Scheme for Underwater Acoustic Sensor Networks.

    Science.gov (United States)

    Bouk, Safdar Hussain; Ahmed, Syed Hassan; Park, Kyung-Joon; Eun, Yongsoon

    2017-09-26

Underwater Acoustic Sensor Network (UASN) comes with intrinsic constraints because it is deployed in the aquatic environment and uses acoustic signals to communicate. Examples of those constraints are long propagation delay, very limited bandwidth, high energy cost for transmission, very high signal attenuation, costly deployment and battery replacement, and so forth. Therefore, routing schemes for UASN must take those characteristics into account to achieve energy fairness, avoid energy holes, and improve the network lifetime. The depth-based forwarding schemes in the literature use a node's depth information to forward data towards the sink. They minimize data packet duplication by employing a holding time strategy. However, to avoid void holes in the network, they use two-hop node proximity information. In this paper, we propose the Energy and Depth variance-based Opportunistic Void avoidance (EDOVE) scheme to gain energy balancing and void avoidance in the network. EDOVE considers not only the depth parameter, but also the normalized residual energy of the one-hop nodes and the normalized depth variance of the second-hop neighbors. Hence, it avoids the void regions, balances the network energy, and increases the network lifetime. The simulation results show that EDOVE achieves a more than 15% higher packet delivery ratio, propagates 50% fewer copies of each data packet, consumes less energy, and has a longer lifetime than state-of-the-art forwarding schemes.

  2. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean have been proposed in two-phase stratified sampling for different situations of non-response at the first phase and second phase. The expressions for the variances of these estimators have been derived. Furthermore, replication-based jackknife variance estimators of these variances have also been derived. A simulation study has been conducted to investigate the performance of the suggested estimators.
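The building block behind such replication variance estimators can be illustrated with the plain delete-one jackknife; the two-phase stratified versions in the paper are more elaborate, but follow the same recompute-and-rescale pattern.

```python
import numpy as np

def jackknife_variance(sample, estimator):
    """Delete-one jackknife variance of `estimator` applied to `sample`."""
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    # Recompute the estimator with each observation left out in turn.
    leave_one_out = np.array(
        [estimator(np.delete(sample, i)) for i in range(n)]
    )
    # Rescale the spread of the replicates by (n - 1) / n.
    return (n - 1) / n * np.sum((leave_one_out - leave_one_out.mean()) ** 2)

rng = np.random.default_rng(42)
data = rng.normal(10.0, 2.0, 200)

# For the sample mean the jackknife reproduces the classical s^2 / n exactly.
jk = jackknife_variance(data, np.mean)
classical = data.var(ddof=1) / len(data)
```

For nonlinear estimators (ratios, quantile-based statistics) the two no longer coincide, which is where replication methods earn their keep.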

  3. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure

    OpenAIRE

    Bright, Molly G.; Murphy, Kevin

    2015-01-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed ...

  4. A Realized Variance for the Whole Day Based on Intermittent High-Frequency Data

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2005-01-01

We consider the problem of deriving an empirical measure of daily integrated variance (IV) in the situation where high-frequency price data are unavailable for part of the day. We study three estimators in this context and characterize the assumptions that justify their use. We show that the opti...
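In its simplest form, the realized variance is the sum of squared high-frequency returns over the trading day. The sketch below uses made-up one-minute data and patches the missing overnight interval with the squared close-to-open return; that patch is only one of several possible estimators, not necessarily the one the paper recommends.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated intraday log-returns: 390 one-minute returns with per-minute
# standard deviation sigma, so the daily integrated variance is 390 * sigma**2.
sigma = 0.001
intraday_returns = rng.normal(0.0, sigma, 390)

# Realized variance over the open hours: sum of squared returns.
rv_open_hours = np.sum(intraday_returns ** 2)

# Naive whole-day measure: add the squared overnight return for the
# interval where no high-frequency data exist (illustrative choice).
overnight_return = rng.normal(0.0, 0.004)
rv_whole_day = rv_open_hours + overnight_return ** 2
```

The single squared overnight return is an unbiased but very noisy addition, which is why weighted combinations of the two pieces are studied.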

  5. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
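The proposed plot is easy to produce from raw trait data. In the sketch below (with made-up lognormal trait values), the coefficient of variation is constant across environments, so the points fall along the reference line of slope 2 that marks pure scaling of the variance with the mean.

```python
import numpy as np

rng = np.random.default_rng(11)

# Trait values for one genotype in five environments; the trait mean grows
# along the gradient and, with a constant coefficient of variation, the
# variance grows with the square of the mean.
env_means = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
samples = [m * rng.lognormal(0.0, 0.2, 200) for m in env_means]

log_mean = np.log([s.mean() for s in samples])
log_var = np.log([s.var(ddof=1) for s in samples])

# Slope of log(variance) against log(mean); slope 2 is the reference line
# for pure scaling, and departures from it signal a phenotypic variance
# gradient beyond the scaling effect.
slope = np.polyfit(log_mean, log_var, 1)[0]
```

Plotting (log_mean, log_var) with a slope-2 reference line gives exactly the display the authors suggest.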

  6. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  7. Search-free license plate localization based on saliency and local variance estimation

    Science.gov (United States)

    Safaei, Amin; Tang, H. L.; Sanei, S.

    2015-02-01

In recent years, the performance and accuracy of automatic license plate number recognition (ALPR) systems have greatly improved; however, the increasing number of applications for such systems has made ALPR research more challenging than ever. The inherent computational complexity of search-dependent algorithms remains a major problem for current ALPR systems. This paper proposes a novel search-free method of localization based on the estimation of saliency and local variance. Gabor functions are then used to validate the choice of candidate license plates. The algorithm was applied to three image datasets with different levels of complexity and the results compared with a number of benchmark methods, particularly in terms of speed. The proposed method outperforms state-of-the-art methods and can be used for real-time applications.
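Local variance itself is cheap to compute without any search: two box filters give E[I²] − (E[I])² per window. The sketch below applies this to a synthetic image with one high-contrast textured patch standing in for a plate region; the saliency and Gabor validation stages of the paper are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)

# Synthetic grey-scale image: a smooth, low-contrast background with a
# single high-contrast textured patch standing in for a plate region.
image = rng.normal(0.5, 0.01, (64, 64))
image[20:30, 10:50] = rng.uniform(0.0, 1.0, (10, 40))

# Local variance in a k-by-k window via box filters: Var = E[I^2] - E[I]^2.
k = 7
local_mean = uniform_filter(image, size=k)
local_var = uniform_filter(image ** 2, size=k) - local_mean ** 2
```

Thresholding `local_var` immediately highlights the textured patch, which is why local variance works as a search-free candidate cue.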

  8. Variance of a potential of mean force obtained using the weighted histogram analysis method.

    Science.gov (United States)

    Cukier, Robert I

    2013-11-27

    A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.

  9. Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Dumonteil, E.; Malvagi, F. [Commissariat a l' Energie Atomique et Aux Energies Alternatives, CEA SACLAY DEN, Laboratoire de Transport Stochastique et Deterministe, 91191 Gif-sur-Yvette (France)

    2012-07-01

The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed, the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWRs or large nuclear cores, the equilibrium is never found, or at least may take time to reach, and the variance estimate allowed by the CLT is under-evaluated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimation. Those two methods are based on Fourier spectral decomposition and on the lag-k autocorrelation calculation. A theoretical modeling of the autocorrelation function, based on Gauss-Markov stochastic processes, will also be presented. Tests will be performed with Tripoli4 on a PWR pin cell. (authors)
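The lag-k autocorrelation route can be sketched generically: estimate the integrated autocorrelation time from the lag-k autocorrelations and inflate the naive CLT variance by it. The sketch below uses an AR(1) sequence as a stand-in for correlated cycle estimates; it illustrates the principle, not Tripoli4's implementation.

```python
import numpy as np

def lag_autocorrelation(x, k):
    """Sample lag-k autocorrelation of the sequence x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

rng = np.random.default_rng(3)

# AR(1) sequence standing in for correlated cycle estimates; its integrated
# autocorrelation time is (1 + phi) / (1 - phi) = 9 for phi = 0.8.
phi, n = 0.8, 20_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Naive CLT variance of the mean, valid only for independent cycles.
naive_var = x.var(ddof=1) / n

# Inflate it by the integrated autocorrelation time estimated from the
# lag-k autocorrelations (truncated once they become negligible).
rho = [lag_autocorrelation(x, k) for k in range(1, 50)]
tau = 1.0 + 2.0 * sum(r for r in rho if r > 0.01)
corrected_var = naive_var * tau
```

An estimated `tau` well above 1 is exactly the diagnostic for correlation problems the abstract describes: the naive cycle-based error bar is too small by roughly a factor of sqrt(tau).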

  10. Application of a Multivariate Analysis of Variance Model to Measuring Tourist Destination Perception

    Directory of Open Access Journals (Sweden)

    Robert Tang Herman

    2012-05-01

The purpose of this research is to provide conceptual and infrastructural tools for Dinas Pariwisata DKI Jakarta to improve its capability to evaluate business performance based on market responsiveness. Capturing market responsiveness is the initial step towards an industry mapping. The research began with secondary research to build a data classification system, followed by primary research collecting market data. Secondary data were obtained from Dinas Pariwisata DKI, while primary data were collected through a survey using questionnaires addressed to the whole market. The collected data were then analysed with multivariate analysis of variance to develop the mapping. The cluster analysis distinguishes potential markets by their responses to the industry classification, establishes the classification system, and identifies the gaps and their importance, along with other issues related to the role of the mapping system. This mapping system will thus help Dinas Pariwisata DKI improve its capabilities and business performance based on market responsiveness: it shows which market is the potential market for each classification and what that market needs, wants, and demands. The findings can be used to recommend that Dinas Pariwisata DKI deliver what the market needs and wants at every tourism site based on the resulting classification, to develop market growth estimates, and, in the long term, to improve economic and market growth.

  11. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  12. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

    Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016

  13. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    2017-07-15

The study considers the problem of simulation-based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.

  14. Variance-based sensitivity indices for stochastic models with correlated inputs

    Energy Technology Data Exchange (ETDEWEB)

    Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)

    2015-03-10

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.

  15. Variance-based sensitivity indices for stochastic models with correlated inputs

    International Nuclear Information System (INIS)

    Kala, Zdeněk

    2015-01-01

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics
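For independent inputs, the first-order Sobol index S1 = Var(E[Y|X1])/Var(Y) can be estimated by binning the conditional mean; the correlated case treated in the article additionally requires the decorrelation procedures it describes. A minimal sketch with a toy model (Y = X1 + 2·X2 on independent uniforms has S1 = 1/5 analytically):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model with independent uniform inputs (the correlated case would
# first require a decorrelation step, omitted in this sketch).
n = 200_000
x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)
y = x1 + 2.0 * x2  # Var(Y) = 1/12 + 4/12, so S1 = (1/12) / (5/12) = 1/5

# First-order Sobol index S1 = Var(E[Y | X1]) / Var(Y), with the
# conditional mean estimated by binning X1 into 50 equal-width bins.
edges = np.linspace(0.0, 1.0, 51)[1:-1]
bins = np.digitize(x1, edges)
cond_means = np.array([y[bins == b].mean() for b in range(50)])
counts = np.array([(bins == b).sum() for b in range(50)])
s1 = np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var()
```

The binning estimator is the crudest member of the family; pick-freeze and FAST-type estimators trade bias for fewer model evaluations.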

  16. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to a correct or incorrect answer to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parent's educational level, field of study, etc.).
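
    The decomposition into orthogonal between/within components (the law of total variance for a single qualitative character) can be sketched as follows; the simulated exam-score data are a stand-in for the real datasets described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical data: test scores depending on one dyadic character
    # (a question answered correctly or not), as in the entrance-exam example.
    correct = rng.integers(0, 2, size=1000)            # qualitative character
    score = 18 + 4 * correct + rng.normal(0, 3, 1000)

    total_var = score.var()
    grand_mean = score.mean()

    # Between component: variance of the conditional means E[score | character]
    groups = [score[correct == g] for g in (0, 1)]
    weights = np.array([len(g) for g in groups]) / len(score)
    cond_means = np.array([g.mean() for g in groups])
    between = np.sum(weights * (cond_means - grand_mean) ** 2)

    # Within component: weighted mean of the conditional variances
    within = np.sum(weights * np.array([g.var() for g in groups]))

    # The two orthogonal components sum exactly to the total variance
    print(between, within, total_var)
    ```

    Iterating this split over several characters, in some chosen order, yields the sum of orthogonal components the abstract refers to; the ordering matters because each character is conditioned on the ones already processed.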

  17. A flexible model for the mean and variance functions, with application to medical cost data.

    Science.gov (United States)

    Chen, Jinsong; Liu, Lei; Zhang, Daowen; Shih, Ya-Chen T

    2013-10-30

    Medical cost data are often skewed to the right and heteroscedastic, having a nonlinear relation with covariates. To tackle these issues, we consider an extension to generalized linear models by assuming nonlinear associations of covariates in the mean function and allowing the variance to be an unknown but smooth function of the mean. We make no further assumption on the distributional form. The unknown functions are described by penalized splines, and the estimation is carried out using nonparametric quasi-likelihood. Simulation studies show the flexibility and advantages of our approach. We apply the model to the annual medical costs of heart failure patients in the clinical data repository at the University of Virginia Hospital System. Copyright © 2013 John Wiley & Sons, Ltd.

  18. A new variance stabilizing transformation for gene expression data analysis.

    Science.gov (United States)

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
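
    The generalized logarithm named as a member of the new family can be sketched as follows. The mean-variance relation, the sample sizes, and the offset c are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def glog(x, c=1.0):
        """Generalized logarithm: approaches log(x) for x >> c, but stays
        defined and smooth near zero (and for negative x)."""
        return np.log((x + np.sqrt(x ** 2 + c ** 2)) / 2.0)

    # Illustrative mean-variance relation with additive plus multiplicative
    # error, as is typical for expression intensities (numbers are made up).
    mu = np.array([1.0, 10.0, 100.0, 1000.0])
    samples = rng.normal(mu, 0.1 * mu + 0.5, size=(20_000, 4))

    print(samples.std(axis=0))               # raw spread grows with the mean
    print(glog(samples, c=5.0).std(axis=0))  # roughly constant after transform
    ```

    Choosing c near the ratio of the additive to the multiplicative error scale (here 0.5/0.1 = 5) is what makes the transformed spread approximately constant; selecting the adequate family member for a data set is the strategy the abstract proposes.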

  19. Allowing variance may enlarge the safe operating space for exploited ecosystems.

    Science.gov (United States)

    Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten

    2015-11-17

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.

  20. AN ADAPTIVE OPTIMAL KALMAN FILTER FOR STOCHASTIC VIBRATION CONTROL SYSTEM WITH UNKNOWN NOISE VARIANCES

    Institute of Scientific and Technical Information of China (English)

    Li Shu; Zhuo Jiashou; Ren Qingwen

    2000-01-01

    In this paper, an optimal criterion is presented for an adaptive Kalman filter in a control system with unknown variances of stochastic vibration, obtained by constructing a function of the noise variances and minimizing that function. We solve for the model and measurement noise variances using the DFP optimization method, which guarantees that the Kalman filter results are optimal. Finally, vibration control can be implemented by the LQG method.
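
    A minimal scalar sketch of the idea: run a Kalman filter, build a cost from the mismatch between the empirical innovation variance and the variance the filter itself predicts, and minimize it over the unknown noise variance. All system parameters are invented, and a grid search stands in for the DFP quasi-Newton method used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical scalar system (stand-in for the vibration model):
    # x_k = a x_{k-1} + w_k,  z_k = x_k + v_k, with R unknown to the filter.
    a, true_Q, true_R = 0.95, 0.04, 0.25
    n = 2000
    x = np.zeros(n); z = np.zeros(n)
    for k in range(1, n):
        x[k] = a * x[k - 1] + rng.normal(0, np.sqrt(true_Q))
        z[k] = x[k] + rng.normal(0, np.sqrt(true_R))

    def innovation_cost(Q, R):
        """Run a Kalman filter with assumed (Q, R) and compare the empirical
        innovation variance with the value the filter itself predicts."""
        xh, P, innovs, S_pred = 0.0, 1.0, [], []
        for k in range(1, n):
            xp, Pp = a * xh, a * a * P + Q
            S = Pp + R                      # predicted innovation variance
            K = Pp / S
            innov = z[k] - xp
            xh, P = xp + K * innov, (1 - K) * Pp
            innovs.append(innov); S_pred.append(S)
        return (np.var(innovs) - np.mean(S_pred)) ** 2

    # Grid search stands in for the DFP quasi-Newton minimization
    Rs = np.linspace(0.05, 1.0, 20)
    best_R = min(Rs, key=lambda R: innovation_cost(true_Q, R))
    print(best_R)
    ```

    When the assumed variances match the truth, the innovations are consistent with their predicted variance and the cost is near zero, which is the sense in which the filter results are "optimized".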

  1. Algebraic aspects of evolution partial differential equation arising in the study of constant elasticity of variance model from financial mathematics

    Science.gov (United States)

    Motsepa, Tanki; Aziz, Taha; Fatima, Aeeman; Khalique, Chaudry Masood

    2018-03-01

    The optimal investment-consumption problem under the constant elasticity of variance (CEV) model is investigated from the perspective of Lie group analysis. The Lie symmetry group of the evolution partial differential equation describing the CEV model is derived. The Lie point symmetries are then used to obtain an exact solution of the governing model satisfying a standard terminal condition. Finally, we construct conservation laws of the underlying equation using the general theorem on conservation laws.

  2. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental...... variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both the additive genetic effects on the mean and those...... on environmental variance have an important influence upon the future economic performance of selected individuals...

  3. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of the time series involved follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the variance over short (white-noise) periods as a result of current production objectives.

  4. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors and (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance…

  5. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
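
    The propagation steps (1)-(2) can be sketched outside the spreadsheet. A minimal Python analogue with invented concentrations, bulk masses, and error standard deviations, assuming uncorrelated measurement errors (MAVARIC additionally supports correlated transfer terms):

    ```python
    import numpy as np

    # Illustrative materials balance: MB = beginning inventory + receipts
    # - ending inventory - shipments, each term = concentration * bulk mass.
    # All values and relative error standard deviations are hypothetical.
    terms = {   # (conc [g SNM/kg], mass [kg], rel. sd of conc, rel. sd of mass)
        "begin_inventory": (2.0, 400.0, 0.005, 0.002),
        "receipts":        (2.1, 900.0, 0.005, 0.002),
        "end_inventory":   (2.0, 410.0, 0.005, 0.002),
        "shipments":       (2.1, 880.0, 0.005, 0.002),
    }
    signs = {"begin_inventory": +1, "receipts": +1,
             "end_inventory": -1, "shipments": -1}

    mb, var_mb = 0.0, 0.0
    for name, (c, m, rc, rm) in terms.items():
        amount = c * m
        mb += signs[name] * amount
        # First-order propagation for a product: Var(c*m) ~ (c*m)^2 (rc^2 + rm^2)
        var_mb += amount ** 2 * (rc ** 2 + rm ** 2)

    sigma = np.sqrt(var_mb)
    print(mb, sigma)   # detection sensitivity is typically quoted as a multiple of sigma
    ```

    A materials balance within a few sigma of zero is consistent with measurement error; a loss must exceed roughly that scale to be detectable, which is the sensitivity the abstract refers to.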

  6. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait.
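
    A stripped-down version of a joint mean-variance likelihood ratio test can be written for two genotype groups with normal errors and closed-form MLEs. This illustrates the general idea only; it is not the covariate-adjusted LRT(MV) of the paper, and the data are simulated with invented effect sizes.

    ```python
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(4)

    # Hypothetical genotype groups with both a mean and a variance effect
    g0 = rng.normal(0.0, 1.0, 300)
    g1 = rng.normal(0.4, 1.5, 200)

    def loglik(x, mu, var):
        """Normal log-likelihood at the given mean and variance."""
        return -0.5 * len(x) * np.log(2 * np.pi * var) \
               - 0.5 * np.sum((x - mu) ** 2) / var

    pooled = np.concatenate([g0, g1])
    # Null: common mean and variance; alternative: group-specific mean and variance
    ll_null = loglik(pooled, pooled.mean(), pooled.var())
    ll_alt = sum(loglik(g, g.mean(), g.var()) for g in (g0, g1))

    lrt_mv = 2 * (ll_alt - ll_null)          # joint test, 2 degrees of freedom
    p = chi2.sf(lrt_mv, df=2)
    print(lrt_mv, p)
    ```

    Dropping only the group-specific variance (or only the group-specific mean) from the alternative gives the 1-df variance-only and mean-only variants; the abstract's point is that the 2-df joint test picks up loci with mean effects at little cost in power for variance-only effects.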

  7. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.

  8. On the multiplicity of option prices under CEV with positive elasticity of variance

    NARCIS (Netherlands)

    Veestraeten, D.

    2017-01-01

    The discounted stock price under the Constant Elasticity of Variance model is not a martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the price for which put-call parity holds and the price that represents the lowest cost of

  9. On the multiplicity of option prices under CEV with positive elasticity of variance

    NARCIS (Netherlands)

    Veestraeten, D.

    2014-01-01

    The discounted stock price under the Constant Elasticity of Variance (CEV) model is a strict local martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the risk-neutral call price and an alternative price that is linked to the unique

  10. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    DEFF Research Database (Denmark)

    Højgaard, Bjarne; Vigna, Elena

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean...... as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate...... that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting....
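
    As a minimal numeric companion to the mean-variance setting discussed above (not the paper's stochastic-control solution for the accumulation phase), a two-asset efficient frontier can be traced by sweeping the portfolio weight. All return and covariance figures are invented.

    ```python
    import numpy as np

    # Two-asset frontier (illustrative parameters): a low-risk bond and a stock
    mu = np.array([0.03, 0.08])          # expected returns (assumed)
    cov = np.array([[0.001, 0.0],        # return covariance (assumed, zero
                    [0.0,   0.04]])      # cross-covariance for simplicity)

    ws = np.linspace(0.0, 1.0, 101)      # weight invested in the stock
    rets = (1 - ws) * mu[0] + ws * mu[1]
    vars_ = (1 - ws) ** 2 * cov[0, 0] + ws ** 2 * cov[1, 1]

    # Each weight gives one (variance, mean) point; the frontier is the
    # lower envelope: minimum variance for each attainable target mean.
    i = np.argmin(vars_)
    print(rets[i], np.sqrt(vars_[i]))    # global minimum-variance portfolio
    ```

    Reading the sweep the other way round, fixing a target mean and taking the weight with minimal variance, is the target-based formulation that the abstract shows to be equivalent to mean-variance optimization.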

  11. Computation of mean and variance of the radiotherapy dose for PCA-modeled random shape and position variations of the target.

    Science.gov (United States)

    Budiarto, E; Keijzer, M; Storchi, P R M; Heemink, A W; Breedveld, S; Heijmen, B J M

    2014-01-20

    Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements.

  12. Computation of mean and variance of the radiotherapy dose for PCA-modeled random shape and position variations of the target

    International Nuclear Information System (INIS)

    Budiarto, E; Keijzer, M; Heemink, A W; Storchi, P R M; Breedveld, S; Heijmen, B J M

    2014-01-01

    Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements. (paper)

  13. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites, a number of which were also associated with mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.

  14. Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.

    Science.gov (United States)

    Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong

    2018-03-01

    Clinical pathways (CPs) are popular healthcare management tools to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. Flexible semantics of the business process model and notation (BPMN) language has been shown to be useful for the modeling and analysis of complex protocols. However, in practical cases one may want to exploit that CPs often have the form of task-time matrices. This paper presents a new method parsing complex BPMN models and aligning traces to the models heuristically. A case study on variance analysis is undertaken, where a CP from the practice and two large sets of patients data from an electronic medical record (EMR) database are used. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data are feasible, whereas that was not the case for the existing analysis techniques. We also provide meaningful insights for further improvement.

  15. Genetic Variance Partitioning and Genome-Wide Prediction with Allele Dosage Information in Autotetraploid Potato.

    Science.gov (United States)

    Endelman, Jeffrey B; Carley, Cari A Schmitz; Bethke, Paul C; Coombs, Joseph J; Clough, Mark E; da Silva, Washington L; De Jong, Walter S; Douches, David S; Frederick, Curtis M; Haynes, Kathleen G; Holm, David G; Miller, J Creighton; Muñoz, Patricio R; Navarro, Felix M; Novy, Richard G; Palta, Jiwan P; Porter, Gregory A; Rak, Kyle T; Sathuvalli, Vidyasagar R; Thompson, Asunta L; Yencho, G Craig

    2018-05-01

    As one of the world's most important food crops, the potato (Solanum tuberosum L.) has spurred innovation in autotetraploid genetics, including in the use of SNP arrays to determine allele dosage at thousands of markers. By combining genotype and pedigree information with phenotype data for economically important traits, the objectives of this study were to (1) partition the genetic variance into additive vs. nonadditive components, and (2) determine the accuracy of genome-wide prediction. Between 2012 and 2017, a training population of 571 clones was evaluated for total yield, specific gravity, and chip fry color. Genomic covariance matrices for additive (G), digenic dominant (D), and additive × additive epistatic (G#G) effects were calculated using 3895 markers, and the numerator relationship matrix (A) was calculated from a 13-generation pedigree. Based on model fit and prediction accuracy, mixed model analysis with G was superior to A for yield and fry color but not specific gravity. The amount of additive genetic variance captured by markers was 20% of the total genetic variance for specific gravity, compared to 45% for yield and fry color. Within the training population, including nonadditive effects improved accuracy and/or bias for all three traits when predicting total genotypic value. When six F1 populations were used for validation, prediction accuracy ranged from 0.06 to 0.63 and was consistently lower (0.13 on average) without allele dosage information. We conclude that genome-wide prediction is feasible in potato and that it will improve selection for breeding value given the substantial amount of nonadditive genetic variance in elite germplasm. Copyright © 2018 by the Genetics Society of America.

  16. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per

    2012-01-01

    BACKGROUND: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome) for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related...... with a common prior distribution for the marker allele substitution effects and estimation of the hyperparameters in this prior distribution from the progeny means data. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level...... was used. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. CONCLUSIONS: The results show that it is possible to estimate, genome-wide and region-wise, genomic (co)variances......

  17. VARIANCE COMPONENTS AND SELECTION FOR FEATHER PECKING BEHAVIOR IN LAYING HENS

    OpenAIRE

    Su, Guosheng; Kjaer, Jørgen B.; Sørensen, Poul

    2005-01-01

    Variance components and selection response for feather pecking behaviour were studied by analysing data from a divergent selection experiment. An investigation showed that a Box-Cox transformation with power = -0.2 made the data approximately normally distributed and best fitted by the given model. Variance components and selection response were estimated using Bayesian analysis with the Gibbs sampling technique. The total variation was rather large for the two traits in both the low feather pecking...

  18. Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation

    Science.gov (United States)

    Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong

    2017-05-01

    Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that often hampers nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.

  19. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
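
    The Poisson upper bound discussed above can be illustrated with simulated data. This is a sketch under assumed parameters (gamma-distributed individual fertility rates, so completed fertility is a Poisson mixture), not the survey data analysed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Completed fertility simulated as a Poisson mixture: each woman has her
    # own rate lambda_i; the Poisson part sets a floor on the variance that
    # cannot be attributed to individual differences.
    lam = rng.gamma(shape=20.0, scale=0.25, size=50_000)  # rates, mean 5
    kids = rng.poisson(lam)

    total_var = kids.var()
    poisson_floor = kids.mean()          # pure Poisson process: var = mean
    individual_share_bound = 1 - poisson_floor / total_var

    print(total_var, poisson_floor, individual_share_bound)
    ```

    Here the total variance is mean + Var(lambda), so only the Var(lambda) part (about 20% under these assumed parameters) is attributable to individual differences; a sample whose total variance falls at or below the mean is consistent with a pure Poisson process, leaving essentially no room for individual determinants, which is the bound the abstract describes.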

  20. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure.

    Science.gov (United States)

    Bright, Molly G; Murphy, Kevin

    2015-07-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. Copyright © 2015. Published by Elsevier Inc.

  1. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    Science.gov (United States)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not previously been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
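    The core identity behind such a decomposition can be shown in miniature. The sketch below is an assumption-laden toy (an ensemble of perturbed "models" standing in for the paper's regression trees): for squared error, the per-pixel mean squared error splits exactly into squared bias plus variance across the ensemble.

```python
# Per-pixel bias-variance decomposition over a toy model ensemble; the
# "models" are the truth plus a systematic offset and noise, an assumption
# for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_models = 1000, 30
truth = rng.uniform(0, 100, n_pix)                     # true imperviousness (%)

# Each "model" prediction = truth + systematic offset (bias) + random noise.
preds = truth + 5.0 + rng.standard_normal((n_models, n_pix)) * 3.0

mean_pred = preds.mean(axis=0)
bias2 = (mean_pred - truth) ** 2                       # squared bias per pixel
var = preds.var(axis=0)                                # variance per pixel
mse = ((preds - truth) ** 2).mean(axis=0)              # MSE per pixel

# For squared error the decomposition is exact: MSE = bias^2 + variance.
print(np.allclose(mse, bias2 + var))                   # True
```

    Mapping `bias2` and `var` per pixel is what lets the analyst see where error comes from systematic model misfit versus instability of the fitted model.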

  2. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    Science.gov (United States)

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.
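    The ANOVA half of ANOVA-PCA, partitioning a total sum of squares into cultivar, treatment, and residual (repeatability plus interaction) contributions, can be sketched as below. The two-cultivar by seven-treatment layout mirrors the study, but all numbers are synthetic assumptions.

```python
# Hedged sketch: two-way sum-of-squares partition for a single response,
# with synthetic cultivar/treatment effects (not the broccoli data).
import numpy as np

rng = np.random.default_rng(2)
cultivars, treatments, reps = 2, 7, 3
c_eff = rng.normal(0, 2, cultivars)[:, None, None]       # cultivar effects
t_eff = rng.normal(0, 4, treatments)[None, :, None]      # treatment effects
y = 10 + c_eff + t_eff + rng.normal(0, 0.5, (cultivars, treatments, reps))

grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
ss_cult = (treatments * reps) * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_treat = (cultivars * reps) * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()
ss_resid = ss_total - ss_cult - ss_treat                 # interaction + repeatability

for name, ss in [("cultivar", ss_cult), ("treatment", ss_treat), ("residual", ss_resid)]:
    print(f"{name:9s} {100 * ss / ss_total:5.1f}% of total SS")
```

    The study applies this partition to each PCA subset matrix rather than to a single response, but the percentages reported (30.5%, 68.3%, 1.2%) are exactly this kind of sum-of-squares share.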

  3. The Pricing of European Options Under the Constant Elasticity of Variance with Stochastic Volatility

    Science.gov (United States)

    Bock, Bounghun; Choi, Sun-Yong; Kim, Jeong-Hoon

    This paper considers a hybrid risky asset price model given by a constant elasticity of variance multiplied by a stochastic volatility factor. A multiscale analysis leads to an asymptotic pricing formula for both European vanilla options and barrier options near the zero elasticity of variance. The accuracy of the approximation is established in a rigorous manner. A numerical experiment on implied volatilities shows that the hybrid model improves on several well-known models in fitting the data for different maturities.

  4. Mean-Variance portfolio optimization when each asset has individual uncertain exit-time

    Directory of Open Access Journals (Sweden)

    Reza Keykhaei

    2016-12-01

    Full Text Available The standard Markowitz Mean-Variance optimization model is a single-period portfolio selection approach where the exit-time (or the time-horizon) is deterministic. In this paper we study the Mean-Variance portfolio selection problem with uncertain exit-time when each asset has an individual uncertain exit-time, which generalizes the Markowitz model. We provide some conditions under which the optimal portfolio of the generalized problem is independent of the exit-time distributions. Also, it is shown that under some general circumstances, the sets of optimal portfolios in the generalized model and the standard model are the same.
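    The standard single-period Markowitz problem that this record generalizes has a well-known closed form: minimize portfolio variance w'Σw subject to full investment and a target mean, solved with two Lagrange multipliers. The toy covariance matrix and returns below are invented for illustration.

```python
# Minimal sketch of the classical Markowitz mean-variance solution;
# the asset data are made-up assumptions, not from the paper.
import numpy as np

mu = np.array([0.05, 0.08, 0.12])            # expected asset returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])       # covariance matrix
ones = np.ones(3)
target = 0.09                                # required portfolio mean

inv = np.linalg.inv(Sigma)
A = ones @ inv @ ones
B = ones @ inv @ mu
C = mu @ inv @ mu
lam = (C - B * target) / (A * C - B * B)     # multiplier on the budget constraint
gam = (A * target - B) / (A * C - B * B)     # multiplier on the mean constraint
w = inv @ (lam * ones + gam * mu)            # efficient portfolio weights

print("weights:", np.round(w, 4), " mean:", w @ mu, " var:", w @ Sigma @ w)
```

    The record's result can then be read as a statement about when such weights do not depend on the distribution of each asset's random exit-time.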

  5. Partitioning of the variance in the growth parameters of Erwinia carotovora on vegetable products.

    Science.gov (United States)

    Shorten, P R; Membré, J-M; Pleasants, A B; Kubaczka, M; Soboleva, T K

    2004-06-01

    The objective of this paper was to estimate and partition the variability in the microbial growth model parameters describing the growth of Erwinia carotovora on pasteurised and non-pasteurised vegetable juice from laboratory experiments performed under different temperature-varying conditions. We partitioned the model parameter variance and covariance components into effects due to temperature profile and replicate using a maximum likelihood technique. Temperature profile and replicate were treated as random effects and the food substrate was treated as a fixed effect. The replicate variance component was small indicating a high level of control in this experiment. Our analysis of the combined E. carotovora growth data sets used the Baranyi primary microbial growth model along with the Ratkowsky secondary growth model. The variability in the microbial growth parameters estimated from these microbial growth experiments is essential for predicting the mean and variance through time of the E. carotovora population size in a product supply chain and is the basis for microbiological risk assessment and food product shelf-life estimation. The variance partitioning made here also assists in the management of optimal product distribution networks by identifying elements of the supply chain contributing most to product variability. Copyright 2003 Elsevier B.V.

  6. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  7. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    Campanelli, Mark; Kacker, Raghu; Kessel, Rüdiger

    2013-01-01

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)
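    The baseline the VG measure extends is the familiar GUM first-order uncertainty budget, u²(y) ≈ Σ cᵢ² u²(xᵢ) with sensitivity coefficients cᵢ = ∂f/∂xᵢ. The sketch below contrasts that linearized budget with a Monte Carlo estimate of Var(Y) for a nonlinear function with independent inputs; it is not the paper's VG formula, and the measurement function and uncertainties are assumptions.

```python
# Hedged sketch: GUM-style linearized variance budget vs. Monte Carlo
# variance for a toy nonlinear measurement function Y = X1 * exp(0.5*X2).
import numpy as np

f = lambda x1, x2: x1 * np.exp(0.5 * x2)          # toy measurement function
mu = np.array([2.0, 1.0])                         # input means
u = np.array([0.1, 0.2])                          # input standard uncertainties

# Analytic sensitivity coefficients c_i = df/dx_i evaluated at the means.
c1 = np.exp(0.5 * mu[1])
c2 = 0.5 * mu[0] * np.exp(0.5 * mu[1])
u2_budget = (c1 * u[0]) ** 2 + (c2 * u[1]) ** 2   # linearized budget

rng = np.random.default_rng(3)
x1 = rng.normal(mu[0], u[0], 200_000)
x2 = rng.normal(mu[1], u[1], 200_000)
u2_mc = f(x1, x2).var()                           # Monte Carlo variance of Y

print(f"linearized variance {u2_budget:.5f}  vs  Monte Carlo {u2_mc:.5f}")
```

    When the two estimates diverge strongly, linearization is inadequate and Supplement 1 (Monte Carlo) methods, where VGs are argued to be most useful, are indicated.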

  8. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra, and it is proposed to use the MV adaptive beamformer in place of the existing DAS terms. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm MVB-DMAS yields about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. At a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
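    The DAS baseline that MVB-DMAS improves upon is simple enough to sketch: delay each array channel by the travel time to a pixel, then sum. The geometry, pulse shape, and sampling numbers below are illustrative assumptions for a single ideal point source, not the paper's setup.

```python
# Minimal delay-and-sum (DAS) sketch for a linear array with one point
# source; all physical parameters are invented for illustration.
import numpy as np

c, fs = 1540.0, 40e6                    # speed of sound (m/s), sampling rate (Hz)
elems = np.arange(8) * 0.3e-3           # 8 elements, 0.3 mm pitch (x positions)
src = np.array([1.0e-3, 20e-3])         # point source at (x, z)

t = np.arange(2048) / fs
dists = np.hypot(elems - src[0], src[1])
pulse = lambda tt: np.exp(-((tt * fs / 2) ** 2))    # short Gaussian pulse
rf = np.stack([pulse(t - d / c) for d in dists])    # per-element RF traces

def das(px, pz):
    """Delay each channel to the pixel's travel time, then sum."""
    delays = np.hypot(elems - px, pz) / c
    idx = np.clip(np.round(delays * fs).astype(int), 0, t.size - 1)
    return rf[np.arange(elems.size), idx].sum()

on_target = das(src[0], src[1])         # coherent sum at the source
off_target = das(src[0] + 2e-3, src[1]) # incoherent sum away from it
print(on_target, ">", off_target)
```

    DMAS replaces the plain sum with pairwise multiplications before summation, and MVB-DMAS further swaps the inner DAS terms for adaptively weighted (MV) sums, which is where the resolution gain reported above comes from.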

  9. Q-Matrix Optimization Based on the Linear Logistic Test Model.

    Science.gov (United States)

    Ma, Lin; Green, Kelly E

    This study explored optimization of item-attribute matrices with the linear logistic test model (Fischer, 1973), where optimal models are those that explain more variance in item difficulty through the identified item attributes. Data were 8th-grade mathematics test item responses of two TIMSS 2007 booklets. The study investigated three categories of attributes (content, cognitive process, and comprehensive cognitive process) at two grain levels (larger, smaller) and also compared results with random attribute matrices. The proposed attributes accounted for most of the variance in item difficulty in the two assessment booklets (81% and 65%). The content attributes alone explained comparatively little variance (13% to 31%), while the comprehensive cognitive process attributes explained much more variance than either the content or the cognitive process attributes. The two grain levels explained similar amounts of variance. However, the attributes did not predict the item difficulties of the two assessment booklets equally.
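    The "variance in item difficulty explained by attributes" figure central to this record amounts to an R² from regressing item difficulties on a binary item-attribute Q-matrix. The 12-item, 4-attribute Q-matrix and weights below are invented for illustration; the LLTM itself estimates the attribute weights within an IRT likelihood rather than by least squares.

```python
# Hedged sketch: variance in item difficulty explained by a Q-matrix,
# via ordinary least squares on synthetic data (not TIMSS responses).
import numpy as np

rng = np.random.default_rng(4)
Q = rng.integers(0, 2, size=(12, 4)).astype(float)   # items x attributes
eta = np.array([0.8, -0.3, 0.5, 1.1])                # attribute difficulty weights
b = Q @ eta + rng.normal(0, 0.2, 12)                 # observed item difficulties

X = np.column_stack([np.ones(12), Q])                # intercept + attributes
coef, *_ = np.linalg.lstsq(X, b, rcond=None)
resid = b - X @ coef
r2 = 1 - resid.var() / b.var()
print(f"variance in item difficulty explained by attributes: {100 * r2:.1f}%")
```

    Comparing such an R² against the value obtained with random attribute matrices, as the study does, guards against the inflation that comes from fitting many attributes to few items.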

  10. Adding a Parameter Increases the Variance of an Estimated Regression Function

    Science.gov (United States)

    Withers, Christopher S.; Nadarajah, Saralees

    2011-01-01

    The linear regression model is one of the most popular models in statistics. It is also one of the simplest models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…

  11. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple

  12. Multi-population Genomic Relationships for Estimating Current Genetic Variances Within and Genetic Correlations Between Populations.

    Science.gov (United States)

    Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L

    2017-10-01

    Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimate of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known, based on literature, that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.
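    A common construction behind such genomic relationship matrices is the VanRaden form, centring genotypes by current allele frequencies and scaling by 2Σp(1-p); the record's analysis concerns how this scaling behaves across populations. The sketch below simulates one population under that within-population definition; the sizes and frequencies are assumptions.

```python
# Sketch of a VanRaden-style genomic relationship matrix using current
# allele frequencies; genotypes are simulated, not real data.
import numpy as np

rng = np.random.default_rng(5)
n_ind, n_snp = 20, 500
p = rng.uniform(0.1, 0.9, n_snp)                    # current allele frequencies
geno = rng.binomial(2, p, size=(n_ind, n_snp)).astype(float)

Z = geno - 2 * p                                    # centre by current frequencies
G = Z @ Z.T / (2 * (p * (1 - p)).sum())             # genomic relationships

print("mean diagonal (self-relationship):", G.diagonal().mean())
```

    In the two-population case the record studies, the off-diagonal block between populations gets its own scaling factor, and the genetic correlation is estimated without bias only when that factor equals the product of the square roots of the two within-population scaling factors.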

  13. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
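    One of the tools the record mentions, bootstrapping the local (within-site) variance from multiple samples taken at a single site, can be sketched directly. The lognormal "concentrations" below are synthetic stand-ins for the multi-elemental NAA measurements.

```python
# Illustrative bootstrap of the within-site variance from repeated
# samples at one sampling site; the data are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(6)
site_samples = rng.lognormal(mean=2.0, sigma=0.4, size=12)  # e.g. 12 bark samples

boot = rng.choice(site_samples, size=(5000, site_samples.size), replace=True)
boot_var = boot.var(axis=1, ddof=1)                         # bootstrap replicates

print(f"plug-in variance {site_samples.var(ddof=1):.3f}, "
      f"bootstrap 95% CI [{np.quantile(boot_var, 0.025):.3f}, "
      f"{np.quantile(boot_var, 0.975):.3f}]")
```

    The width of such an interval is itself informative: if the local variance is poorly determined from a handful of samples per site, that uncertainty propagates into the quality of the whole survey.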

  14. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    Science.gov (United States)

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.

  15. Development of a treatability variance guidance document for US DOE mixed-waste streams

    International Nuclear Information System (INIS)

    Scheuer, N.; Spikula, R.; Harms, T.

    1990-03-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs

  16. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
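    The obstacle the record addresses is easy to demonstrate: because a Poisson process has variance equal to its mean, subtracting a nonstationary trend zeroes the mean but leaves the trend imprinted on the variance. The rising intensity profile below is synthetic; this shows the problem, not the authors' pre-processing method.

```python
# Demonstration of Poisson mean-variance coupling: detrending removes
# the trend from the mean but not from the variance. Synthetic signal.
import numpy as np

rng = np.random.default_rng(7)
lam = np.linspace(5, 50, 10_000)          # slowly rising intensity (trend)
signal = rng.poisson(lam).astype(float)

detrended = signal - lam                  # subtract the (known) trend
first, second = detrended[:5000], detrended[5000:]

# Both halves now have mean ~0, yet their variances still differ,
# since Var = mean for a Poisson process.
print(f"half-means {first.mean():.2f} / {second.mean():.2f}, "
      f"half-variances {first.var():.1f} / {second.var():.1f}")
```

    Standard detrending stops here, which is why further variance-sensitive analyses (Fano factor, Hurst exponent) can report artifacts; the record's method additionally stabilizes the signal while keeping its Poisson character.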

  17. Identification of melanoma cells: a method based in mean variance of signatures via spectral densities.

    Science.gov (United States)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Angulo-Molina, Aracely

    2017-04-01

    In this paper a new methodology to detect and differentiate melanoma cells from normal cells through 1D-signatures averaged variances calculated with a binary mask is presented. The sample images were obtained from histological sections of mice melanoma tumor of 4 [Formula: see text] in thickness and contrasted with normal cells. The results show that melanoma cells present a well-defined range of averaged variances values obtained from the signatures in the four conditions used.

  18. Modeling Attitude Variance in Small UAS’s for Acoustic Signature Simplification Using Experimental Design in a Hardware-in-the-Loop Simulation

    Science.gov (United States)

    2015-03-26

    Additionally, choosing correlated levels for multiple factors results in multicollinearity, which can cause problems such as model misspecification or large variances and covariances for the regression coefficients. A good way to avoid multicollinearity is to use orthogonal, factorial…

  19. Recognition Memory zROC Slopes for Items with Correct versus Incorrect Source Decisions Discriminate the Dual Process and Unequal Variance Signal Detection Models

    Science.gov (United States)

    Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.

    2014-01-01

    We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…

  20. Use of hypotheses for analysis of variance Models: Challenging the current practice

    NARCIS (Netherlands)

    van Wesel, F.; Boeije, H.R.; Hoijtink, H

    2013-01-01

    In social science research, hypotheses about group means are commonly tested using analysis of variance. While deemed to be formulated as specifically as possible to test social science theory, they are often defined in general terms. In this article we use two studies to explore the current

  1. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  3. Research on regularized mean-variance portfolio selection strategy with modified Roy safety-first principle.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan

    2016-01-01

    We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first principle to financial portfolio selection strategy is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal stable and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, S&P 500 index for the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in literature and control the downside risk of the portfolio returns.

  4. Mean-Variance Analysis in a Multiperiod Setting

    OpenAIRE

    Frauendorfer, Karl; Siede, Heiko

    1997-01-01

    Similar to the classical Markowitz approach it is possible to apply a mean-variance criterion to a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show equivalent structural properties as...

  5. Some novel inequalities for fuzzy variables on the variance and its rational upper bound

    Directory of Open Access Journals (Sweden)

    Xiajie Yi

    2016-02-01

    Full Text Available Abstract Variance is of great significance in measuring the degree of deviation, which has gained extensive usage in many fields in practical scenarios. The definition of the variance on the basis of the credibility measure was first put forward in 2002. Following this idea, the calculation of the accurate value of the variance for some special fuzzy variables, like the symmetric and asymmetric triangular fuzzy numbers and the Gaussian fuzzy numbers, is presented in this paper, which turns out to be far more complicated. Thus, in order to better implement variance in real-life projects like risk control and quality management, we suggest a rational upper bound of the variance based on an inequality, together with its calculation formula, which can largely simplify the calculation process within a reasonable range. Meanwhile, some discussions between the variance and its rational upper bound are presented to show the rationality of the latter. Furthermore, two inequalities regarding the rational upper bound of variance and standard deviation of the sum of two fuzzy variables and their individual variances and standard deviations are proved. Subsequently, some numerical examples are illustrated to show the effectiveness and the feasibility of the proposed inequalities.

  6. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min-max criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties completely. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modelling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted to assess clinical potential. (paper)

  7. Is fMRI “noise” really noise? Resting state nuisance regressors remove variance with network structure

    Science.gov (United States)

    Bright, Molly G.; Murphy, Kevin

    2015-01-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured “signal” as well as “noise.” Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. PMID:25862264

  8. The efficiency of the crude oil markets: Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie, E-mail: acharles@audencia.co [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier, E-mail: olivier.darne@univ-nantes.f [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable.
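    The rank-, sign-, and bootstrap-based tests cited above all build on the classical variance ratio statistic of Lo and MacKinlay: under a random walk, the variance of q-period returns is q times the one-period variance, so VR(q) ≈ 1. A minimal sketch of the point estimate only (not the studentised test statistics used in the paper):

```python
import numpy as np

def variance_ratio(returns, q):
    """Simple variance ratio: Var of q-period (overlapping) returns over
    q times the one-period variance. VR ~ 1 under a random walk."""
    r = np.asarray(returns, dtype=float)
    mu = r.mean()
    var1 = np.mean((r - mu) ** 2)
    rq = np.convolve(r, np.ones(q), mode="valid")  # sums of q consecutive returns
    varq = np.mean((rq - q * mu) ** 2)
    return varq / (q * var1)

rng = np.random.default_rng(1)
iid = rng.standard_normal(20000)   # i.i.d. increments, i.e. a random walk
print(variance_ratio(iid, 5))      # close to 1
```

    Persistent departures of VR(q) from 1 (after accounting for sampling error) are what the formal tests interpret as return predictability.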

  11. What's in a Day? A Guide to Decomposing the Variance in Intensive Longitudinal Data.

    Science.gov (United States)

    de Haan-Rietdijk, Silvia; Kuppens, Peter; Hamaker, Ellen L

    2016-01-01

    In recent years there has been a growing interest in the use of intensive longitudinal research designs to study within-person processes. Examples are studies that use experience sampling data and autoregressive modeling to investigate emotion dynamics and between-person differences therein. Such designs often involve multiple measurements per day and multiple days per person, and it is not clear how this nesting of the data should be accounted for: that is, should such data be considered as two-level data (which is common practice at this point), with occasions nested in persons, or as three-level data, with beeps nested in days, which are in turn nested in persons? We show that a significance test of the day-level variance in an empty three-level model is not reliable when there is autocorrelation. Furthermore, we show that misspecifying the number of levels can lead to spurious or misleading findings, such as inflated variance or autoregression estimates. Throughout the paper we present instructions and R code for the implementation of the proposed models, which includes a novel three-level AR(1) model that estimates moment-to-moment inertia and day-to-day inertia. Based on our simulations we recommend model selection using autoregressive multilevel models in combination with the AIC. We illustrate this method using empirical emotion data from two independent samples, and discuss the implications and the relevance of the existence of a day level for the field.
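    The paper's warning about misspecified levels can be illustrated directly: when beep-level data follow a pure AR(1) process with no true day effect, a naive method-of-moments estimate of the day-level variance still comes out positive, because autocorrelation inflates the variance of day means. A hedged sketch (the simulation parameters are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, beeps = 400, 10
phi, sd_e = 0.7, 1.0

# one person's beep-level series as a pure AR(1): NO true day effect exists
x = np.zeros(n_days * beeps)
for t in range(1, x.size):
    x[t] = phi * x[t - 1] + sd_e * rng.standard_normal()

days = x.reshape(n_days, beeps)
day_means = days.mean(axis=1)
within_var = days.var(axis=1, ddof=1).mean()

# naive day-level variance estimate that ignores autocorrelation:
# Var(day means) minus the expected sampling noise of an i.i.d. day mean
naive_day_var = day_means.var(ddof=1) - within_var / beeps
print(naive_day_var)  # spuriously positive
```

    This is the kind of spurious day-level variance the authors' autoregressive multilevel models are designed to avoid.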

  12. A mean-variance frontier in discrete and continuous time

    NARCIS (Netherlands)

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation

  13. The Rise and Fall of S&P500 Variance Futures

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); J.A. Jiménez-Martín (Juan-Ángel); M.J. McAleer (Michael); T. Pérez-Amaral (Teodosio)

    2011-01-01

    Modelling, monitoring and forecasting volatility are indispensable to sensible portfolio risk management. The volatility of an asset or composite index can be traded by using volatility derivatives, such as volatility and variance swaps, options and futures. The most popular volatility

  14. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    Science.gov (United States)

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.

  15. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm that is simple to implement; however, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum about 96%, 94%, and 45% and signal-to-noise ratio about 89%, 15%, and 35% compared to DAS, DMAS, MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
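    The relationship noted above (expanding DMAS yields DAS-like terms) is visible in the combination step itself. A toy sketch of the two summations over already delay-aligned channel samples, using the sign-preserving square root common in the DMAS literature; the MV weighting that turns this into MVB-DMAS is omitted here:

```python
import numpy as np
from itertools import combinations

def das(aligned):
    """Delay-and-sum: plain sum over channels of delay-aligned samples."""
    return np.sum(aligned, axis=0)

def dmas(aligned):
    """Delay-multiply-and-sum: sum of pairwise products of delay-aligned
    samples, each passed through a sign-preserving square root."""
    out = np.zeros(aligned.shape[1])
    for i, j in combinations(range(aligned.shape[0]), 2):
        p = aligned[i] * aligned[j]
        out += np.sign(p) * np.sqrt(np.abs(p))
    return out

# toy check: 8 identical channels, one coherent sample of amplitude 2
aligned = np.full((8, 1), 2.0)
print(das(aligned)[0])   # 8 channels * 2  -> 16
print(dmas(aligned)[0])  # C(8,2) = 28 pairs, each contributing 2 -> 56
```

    The pairwise products reward coherent (correlated) channels more strongly than incoherent noise, which is the intuition behind the lower DMAS sidelobes.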

  16. Unraveling the genetic architecture of environmental variance of somatic cell score using high-density single nucleotide polymorphism and cow data from experimental farms.

    Science.gov (United States)

    Mulder, H A; Crump, R E; Calus, M P L; Veerkamp, R F

    2013-01-01

    In recent years, it has been shown that not only is the phenotype under genetic control, but also the environmental variance. Very little, however, is known about the genetic architecture of environmental variance. The main objective of this study was to unravel the genetic architecture of the mean and environmental variance of somatic cell score (SCS) by identifying genome-wide associations for mean and environmental variance of SCS in dairy cows and by quantifying the accuracy of genome-wide breeding values. Somatic cell score was used because previous research has shown that the environmental variance of SCS is partly under genetic control and reduction of the variance of SCS by selection is desirable. In this study, we used 37,590 single nucleotide polymorphism (SNP) genotypes and 46,353 test-day records of 1,642 cows at experimental research farms in 4 countries in Europe. We used a genomic relationship matrix in a double hierarchical generalized linear model to estimate genome-wide breeding values and genetic parameters. The estimated mean and environmental variance per cow was used in a Bayesian multi-locus model to identify SNP associated with either the mean or the environmental variance of SCS. Based on the obtained accuracy of genome-wide breeding values, 985 and 541 independent chromosome segments affecting the mean and environmental variance of SCS, respectively, were identified. Using a genomic relationship matrix increased the accuracy of breeding values relative to using a pedigree relationship matrix. In total, 43 SNP were significantly associated with either the mean (22) or the environmental variance of SCS (21). The SNP with the highest Bayes factor was on chromosome 9 (Hapmap31053-BTA-111664) explaining approximately 3% of the genetic variance of the environmental variance of SCS. Other significant SNP explained less than 1% of the genetic variance. 
It can be concluded that fewer genomic regions affect the environmental variance of SCS than the

  17. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    Science.gov (United States)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter, determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  18. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximately Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
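    A minimal sketch of the two transformations compared in this study, applied to a sliding-window correlation time series; the window length, step, and signals below are illustrative, and the shift applied before Box-Cox (which requires positive input) is one simple choice, not the paper's exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# sliding-window correlation time series between two toy signals
x, y = rng.standard_normal((2, 2000))
w = 100
r = np.array([np.corrcoef(x[t:t + w], y[t:t + w])[0, 1]
              for t in range(0, 2000 - w, w // 2)])

z = np.arctanh(r)                          # Fisher transformation (z = atanh r)
# combined approach: Box-Cox after Fisher; shift to strictly positive values
z_bc, lam = stats.boxcox(z - z.min() + 1e-3)
print(z.shape, z_bc.shape)
```

    `np.arctanh` is the Fisher z-transform, and `scipy.stats.boxcox` estimates the BC exponent by maximum likelihood when no lambda is supplied.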

  19. Robust estimation of the noise variance from background MR data

    NARCIS (Netherlands)

    Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.

    2006-01-01

    In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum

  20. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...
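    The object being estimated here is the population minimum-variance portfolio, given by the textbook formula w = Σ⁻¹1 / (1ᵀΣ⁻¹1); plugging a sample covariance matrix into it yields the biased sample frontier the authors analyze. A sketch:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w = cov^{-1} 1 / (1' cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve instead of explicit inverse
    return w / w.sum()

# with a diagonal covariance the weights are proportional to 1/variance
cov = np.diag([0.04, 0.01, 0.02])
w = min_variance_weights(cov)
print(w)  # -> [0.1428..., 0.5714..., 0.2857...]
```

    Replacing `cov` with a sample covariance estimated from finitely many returns gives the sample portfolio whose out-of-sample mean and variance the paper characterises exactly.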

  1. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  2. Genomic selection in mink yield higher accuracies with a Bayesian approach allowing for heterogeneous variance than a GBLUP model

    DEFF Research Database (Denmark)

    Villumsen, Trine Michelle; Su, Guosheng; Cai, Zexi

    2018-01-01

    The accuracy of genomic prediction for mink was compared for single-trait and multiple-trait GBLUP models and Bayesian models that allowed for heterogeneous (co)variance structure over the genome. The mink population consisted of 2,103 brown minks genotyped with the method of genotyping by sequencing. Four live grading traits and four traits on dried pelts for size and quality were analysed. GWAS analysis detected significant SNPs for all the traits. The single-trait Bayesian model resulted in higher accuracies for the genomic predictions than the single-trait GBLUP model, especially for the traits measured on dried pelts. We expected the multiple-trait models to be superior to the single-trait models since the multiple-trait model can make use of information when traits are correlated. However, we did not find a general improvement in accuracies with the multiple-trait models compared...

  3. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Full Text Available Estimates of poverty and inequality are often based on application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows variance estimates to be derived for the results of the sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.

  4. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical application, one is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performances of the proposed approach are analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
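    The reduced single-estimation problem described above can be caricatured as minimizing a logarithmic data-fit term plus a prior with a nonlinear conjugate gradient routine: marginalizing an unknown Gaussian noise variance typically turns the quadratic residual term into a log of the residual norm. The sketch below illustrates only that cost structure, with an arbitrary random linear forward operator and SciPy's generic CG minimizer standing in for the thesis's monotone CG with suboptimal descent steps:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
M, N = 60, 10
A = rng.standard_normal((M, N))          # stand-in linear forward operator
f_true = rng.standard_normal(N)          # stand-in fraction image (flattened)
y = A @ f_true + 0.05 * rng.standard_normal(M)

lam = 1e-3                               # illustrative prior weight

def cost(f):
    # data term with the noise variance marginalized out: (M/2) log ||y - A f||^2,
    # plus a simple quadratic prior on f
    r = y - A @ f
    return 0.5 * M * np.log(r @ r) + lam * (f @ f)

f0 = np.zeros(N)
res = minimize(cost, f0, method="CG")    # nonlinear conjugate gradient
print(np.linalg.norm(res.x - f_true))
```

    The log data term shares its minimizer with the usual least-squares term, so on this toy problem the CG iterate moves from the zero initialisation toward the true coefficients.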

  5. Mean and variance evolutions of the hot and cold temperatures in Europe

    Energy Technology Data Exchange (ETDEWEB)

    Parey, Sylvie [EDF/R and D, Chatou Cedex (France); Dacunha-Castelle, D. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); Hoang, T.T.H. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); EDF/R and D, Chatou Cedex (France)

    2010-02-15

    In this paper, we examine the trends of temperature series in Europe, for the mean as well as for the variance in hot and cold seasons. To do so, we use as long and homogeneous series as possible, provided by the European Climate Assessment and Dataset project for different locations in Europe, as well as the European ENSEMBLES project gridded dataset and the ERA40 reanalysis. We provide a definition of trends that we keep as intrinsic as possible and apply non-parametric statistical methods to analyse them. The results obtained show a clear link between trends in mean and variance of the whole series of hot or cold temperatures: in general, variance increases when the absolute value of temperature increases, i.e. with increasing summer temperature and decreasing winter temperature. This link is reinforced in locations where winter and summer climate has more variability. In very cold or very warm climates, the variability is lower and the link between the trends is weaker. We performed the same analysis on outputs of six climate models proposed by European teams for the 1961-2000 period (1950-2000 for one model), available through the PCMDI portal for the IPCC fourth assessment climate model simulations. The models generally perform poorly and have difficulties in capturing the relation between the two trends, especially in summer. (orig.)

  6. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  7. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  8. Fluctuations in atomic collision cascades - variance and correlations in sputtering and defect distributions

    International Nuclear Information System (INIS)

    Chakarova, R.; Pazsit, I.

    1997-01-01

    Fluctuation phenomena are investigated in various collision processes, i.e. ion bombardment induced sputtering and defect creation. The mean and variance of the sputter yield and the vacancies and interstitials are calculated as functions of the ion energy and the ion-target mass ratio. It is found that the relative variance of the defects in half-spaces and the relative variance of the sputter yield are not monotonous functions of the mass ratio. Two-point correlation functions in the depth variable, as well as sputtered energy, are also calculated. These functions help to interpret the behaviour of the relative variances of the integrated quantities, as well as to understand the cascade dynamics. All calculations are based on Lindhard power-law cross sections and use a binary collision Monte Carlo algorithm. 30 refs, 25 figs

  10. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling.

    Science.gov (United States)

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-12-07

    The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation speed. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD often keeps working underground for several days, so the gyro data collected aboveground are not only very long, but also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance in the fast algorithm (improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulated data, respectively. The simulation results show that when the length of the time series is short, the improved fast DAVAR saves 78.93% of calculation time. When the length of the time series is long (6 × 10⁵ samples), the improved fast DAVAR reduces calculation time by 97.09%. Another set of simulated data with missing data is characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The experimental results confirm that the improved fast DAVAR not only shortens computation time, but can also analyze discontinuous time series.
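    The Allan variance underlying DAVAR is simple to compute: average the signal in clusters of length m, then take half the mean squared difference of successive cluster means; DAVAR repeats this inside a sliding window. A sketch of the static, non-overlapped version, checked against the known white-noise behavior AVAR(m) ≈ σ²/m (the windowed and fast variants in the paper are refinements of this core):

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapped Allan variance at cluster size m: half the mean
    squared difference of successive cluster averages."""
    n = (len(x) // m) * m                       # trim to a whole number of clusters
    means = x[:n].reshape(-1, m).mean(axis=1)   # cluster averages
    d = np.diff(means)
    return 0.5 * np.mean(d ** 2)

rng = np.random.default_rng(5)
white = rng.standard_normal(200000)
# for white noise, AVAR(m) falls off as sigma^2 / m
print(allan_variance(white, 1), allan_variance(white, 100))
```

    Plotting AVAR against m on log-log axes gives the familiar slope signatures (white noise −1, bias instability 0, rate random walk +1) used to characterise gyroscopes.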

  12. A Mean-Variance Explanation of FDI Flows to Developing Countries

    DEFF Research Database (Denmark)

    Sunesen, Eva Rytter

    country to another. This will have implications for the way investors evaluate the return and risk of investing abroad. This paper utilises a simple mean-variance optimisation framework where global and regional factors capture the interdependence between countries. The model implies that FDI is driven

  13. Bridging design and behavioral research with variance-based structural equation modeling

    NARCIS (Netherlands)

    Henseler, Jörg

    2017-01-01

    Advertising research is a scientific discipline that studies artifacts (e.g., various forms of marketing communication) as well as natural phenomena (e.g., consumer behavior). Empirical advertising research therefore requires methods that can model design constructs as well as behavioral constructs,

  14. Mean-Variance stochastic goal programming for sustainable mutual funds' portfolio selection.

    Directory of Open Access Journals (Sweden)

    García-Bernabeu, Ana

    2015-11-01

    Full Text Available Mean-Variance Stochastic Goal Programming models (MV-SGP) provide satisficing investment solutions in uncertain contexts. In this work, an MV-SGP model is proposed for portfolio selection which includes goals with regard to traditional and sustainable assets. The proposed approach is based on a two-step procedure. In the first step, sustainability and/or financial screens are applied to a set of assets (mutual funds) previously evaluated with TOPSIS to determine the opportunity set. In a second step, satisficing portfolios of assets are obtained using a Goal Programming approach. Two different goals are considered. The first goal reflects only the purely financial side of the target while the second goal refers to the sustainable side. Absolute Risk Aversion (ARA) coefficients are estimated and incorporated in our investment decision making approach using two different approaches.

  15. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    International Nuclear Information System (INIS)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-01-01

    The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method to solve RPCs is Least Squares Estimation. But when the number of coefficients is large or the control points are unevenly distributed, the classical least-squares method loses its superiority due to ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect multicollinearity, but also locate the affected parameters and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-condition problem of the RFM and to find the multicollinearity in the normal matrix
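    The CIVDP diagnostics follow Belsley's classical collinearity analysis: scale the columns of the design matrix to unit length, take an SVD, and read off condition indices and variance-decomposition proportions. A hedged sketch (the threshold of 30 used below is the conventional rule of thumb from that literature, not a value from this paper):

```python
import numpy as np

def civdp(X):
    """Belsley-style condition indices and variance-decomposition proportions.

    Columns of X are scaled to unit length; the SVD X = U S V' gives condition
    indices s_max / s_k and, for each coefficient j, the share of var(beta_j)
    associated with each singular value k.
    """
    Xs = X / np.linalg.norm(X, axis=0)
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_idx = s.max() / s
    phi = (Vt.T ** 2) / s ** 2                 # phi[j, k] = v_jk^2 / s_k^2
    vdp = phi / phi.sum(axis=1, keepdims=True) # rows sum to 1 per coefficient
    return cond_idx, vdp

# two nearly collinear columns produce one very large condition index whose
# variance-decomposition proportions load heavily on both coefficients
x1 = np.linspace(0.0, 1.0, 50)
X = np.column_stack([x1, x1 + 1e-4 * np.sin(np.arange(50)), np.ones(50)])
ci, vdp = civdp(X)
print(ci.max())
```

    A condition index above roughly 30 together with two or more coefficients having large proportions on the same singular value is the usual signature of harmful multicollinearity.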

  16. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    Science.gov (United States)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-03-01

    The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of the rigorous sensor models. At present, the main method of solving for the RPCs is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect the multicollinearity, but can also locate the parameters involved and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.

  17. Compounding approach for univariate time series with nonstationary variances

    Science.gov (United States)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the local variances obtained in this way.
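    The window decomposition at the heart of the compounding approach can be sketched numerically. In the toy example below (all parameter choices are ours, not the paper's), a locally Gaussian series whose variance is redrawn for each window yields a heavy-tailed compound distribution, while the per-window variance estimates recover the mixing distribution:

```python
import random
import statistics


def kurtosis(xs):
    """Sample kurtosis (about 3 for a Gaussian)."""
    m = statistics.fmean(xs)
    var = statistics.fmean([(x - m) ** 2 for x in xs])
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / var ** 2


random.seed(1)

window = 100        # points per locally stationary window
n_windows = 300

series = []
local_vars = []
for _ in range(n_windows):
    sigma2 = random.expovariate(1.0)   # local variance redrawn per window
    chunk = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(window)]
    series.extend(chunk)
    # local variance estimated from the window, as in the decomposition step
    local_vars.append(statistics.pvariance(chunk))

# Locally the data are Gaussian, but the compound distribution is heavy-tailed.
print(kurtosis(series))
```

    With an exponential distribution of local variances, the compound distribution is a Laplace law (kurtosis 6), so the printed kurtosis of the full series lies well above the Gaussian value of 3.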

  18. School Audits and School Improvement: Exploring the Variance Point Concept in Kentucky's... Schools

    Directory of Open Access Journals (Sweden)

    Robert Lyons

    2011-01-01

    Full Text Available As a diagnostic intervention (Bowles, Churchill, Effrat, & McDermott, 2002) for schools failing to meet school improvement goals, Kentucky used a scholastic audit process based on nine standards and 88 associated indicators called the Standards and Indicators for School Improvement (SISI). Schools are rated on a scale of 1–4 on each indicator, with a score of 3 considered fully functional (Kentucky Department of Education [KDE], 2002). As part of enacting the legislation, KDE was required to also audit a random sample of schools that did meet school improvement goals, thereby identifying practices present in improving schools that are not present in those failing to improve. These practices were referred to as variance points and were reported to school leaders annually. Variance points have differed from year to year, and the methodology used by KDE was unclear. Moreover, variance points were reported for all schools without differentiating based upon the level of school (elementary, middle, or high). In this study, we established a transparent methodology for variance point determination that differentiates between elementary, middle, and high schools.

  19. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decision-maker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decision-maker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of the limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann-Morgenstern utility theory.
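    The dominance violation described in the note can be reproduced with a two-line calculation. The following sketch (our own illustration, not an example from the paper) uses the common mean-variance utility U = mu - lambda*sigma^2; with lambda = 3, a 10% chance of a fixed gain of 1 is ranked above a 50% chance of the same gain, even though the latter offers a strictly higher probability of the gain and still no possibility of loss:

```python
def mean_variance_utility(p, gain, lam):
    """U = mean - lam * variance for a prospect paying `gain` w.p. p, else 0."""
    mean = p * gain
    var = p * (1.0 - p) * gain ** 2
    return mean - lam * var


lam = 3.0
u_low = mean_variance_utility(0.1, 1.0, lam)   # 0.1 - 3*0.09 = -0.17
u_high = mean_variance_utility(0.5, 1.0, lam)  # 0.5 - 3*0.25 = -0.25

# The higher probability of the same fixed gain gets *lower* mean-variance
# utility: exactly the dominance violation the note proves in general.
print(u_low, u_high)
```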

  20. Estimating additive and non-additive genetic variances and predicting genetic merits using genome-wide dense single nucleotide polymorphism markers.

    Directory of Open Access Journals (Sweden)

    Guosheng Su

    Full Text Available Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms or farm animals. However, non-additive genetic effects may make an important contribution to the total genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relationship matrices were constructed from information on genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study for the first time proposed a method to construct the dominance relationship matrix using SNP markers and demonstrated it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance and epistatic variation, and to assess the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: (1) a simple additive genetic model (MA), (2) a model including both additive and additive-by-additive epistatic genetic effects (MAE), (3) a model including both additive and dominance genetic effects (MAD), and (4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. The estimated dominance variance and additive-by-additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved the unbiasedness of genomic predictions.
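    The construction of marker-based relationship matrices can be sketched as follows. This is a generic VanRaden-style additive matrix plus a commonly used dominance analogue, coded from scratch for a toy 0/1/2 genotype matrix; the `relationship_matrices` helper and its centring choices are our own, and the paper's exact dominance parameterization may differ:

```python
def relationship_matrices(genotypes):
    """Additive (G) and dominance (D) relationship matrices from 0/1/2 codes."""
    n = len(genotypes)          # individuals
    m = len(genotypes[0])       # markers
    # allele frequency p_j of the counted allele at each marker
    p = [sum(g[j] for g in genotypes) / (2.0 * n) for j in range(m)]

    # additive design: z_ij = x_ij - 2 p_j  (VanRaden-style centring)
    Z = [[g[j] - 2.0 * p[j] for j in range(m)] for g in genotypes]
    # dominance design: heterozygote indicator centred by 2 p_j (1 - p_j)
    W = [[(1.0 if g[j] == 1 else 0.0) - 2.0 * p[j] * (1.0 - p[j])
          for j in range(m)] for g in genotypes]

    sa = sum(2.0 * pj * (1.0 - pj) for pj in p)          # additive scaling
    sd = sum((2.0 * pj * (1.0 - pj)) ** 2 for pj in p)   # dominance scaling

    def cross(M, scale):
        return [[sum(M[i][k] * M[j][k] for k in range(m)) / scale
                 for j in range(n)] for i in range(n)]

    return cross(Z, sa), cross(W, sd)


# Tiny invented genotype matrix: 4 individuals x 4 SNPs
geno = [[0, 1, 2, 1], [1, 1, 0, 2], [2, 0, 1, 1], [0, 2, 1, 0]]
G, D = relationship_matrices(geno)
```

    Both matrices are symmetric by construction; in a BLUP model they scale the additive and dominance variance components, respectively.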

  1. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Full Text Available Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio's performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of both return and risk-adjusted return, especially over medium- and long-term investment horizons.
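    For reference, the unconstrained global minimum-variance weights used by such strategies solve w = S^{-1} 1 / (1' S^{-1} 1), where S is the covariance matrix of returns. A minimal sketch with an illustrative 3-asset covariance matrix (the numbers are ours, not the study's data; short selling is allowed and no other constraints are imposed):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x


# Illustrative annualized covariance matrix for three assets
cov = [[0.040, 0.006, 0.002],
       [0.006, 0.090, 0.010],
       [0.002, 0.010, 0.160]]

y = solve(cov, [1.0, 1.0, 1.0])      # S^{-1} 1
w = [v / sum(y) for v in y]          # normalize so weights sum to one

port_var = sum(w[i] * cov[i][j] * w[j] for i in range(3) for j in range(3))
print(w, port_var)
```

    Since holding any single asset is itself a feasible portfolio, the resulting variance is never larger than the smallest diagonal entry of the covariance matrix.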

  2. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and the bias are derived analytically for the case in which Gelbard's batch method is applied. The real variance estimated from this bias is then compared with the real variance calculated from replicas. When the batch method is applied to calculate the sample variance, covariance terms between tallies within a batch are eliminated from the bias. With the 2-by-2 fission matrix problem, we could calculate the real variance regardless of whether or not the batch method was applied. However, as the batch size grew larger, the standard deviation of the real variance increased. When we perform a Monte Carlo estimation, we obtain a sample variance as its statistical uncertainty. However, this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is called Gelbard's batch method. It has been demonstrated that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the MC field. However, so far, no one has given an analytical interpretation of it.
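    The effect of batching on a variance estimate can be illustrated with a toy surrogate. Here an AR(1) series stands in for correlated cycle-wise tallies (this is our own illustration, not the paper's 2-by-2 fission matrix problem): the naive estimator of the variance of the sample mean drops the covariance terms and is biased low, while the batch-means estimator captures the covariances within each batch:

```python
import random

random.seed(7)

# AR(1) surrogate for positively correlated tallies
phi, n = 0.8, 20000
x, xs = 0.0, []
for _ in range(n):
    x = phi * x + random.gauss(0.0, 1.0)
    xs.append(x)

mean = sum(xs) / n

# Naive estimator of Var[mean]: treats tallies as independent, so the
# covariance terms are missing and the estimate is biased low.
naive = sum((v - mean) ** 2 for v in xs) / (n - 1) / n

# Batching: average within batches, then take the sample variance of the
# batch means. Covariances *within* a batch are now accounted for.
b = 100
means = [sum(xs[i:i + b]) / b for i in range(0, n, b)]
k = len(means)
grand = sum(means) / k
batch = sum((m - grand) ** 2 for m in means) / (k - 1) / k

print(naive, batch)   # the batch estimate is several times larger here
```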

  3. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  4. On the mean and variance of the writhe of random polygons

    International Nuclear Information System (INIS)

    Portillo, J; Scharein, R; Arsuaga, J; Vazquez, M; Diao, Y

    2011-01-01

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an 'ideal' conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.

  5. On the mean and variance of the writhe of random polygons.

    Science.gov (United States)

    Portillo, J; Diao, Y; Scharein, R; Arsuaga, J; Vazquez, M

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an "ideal" conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.

  6. A proxy for variance in dense matching over homogeneous terrain

    Science.gov (United States)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and simplicity of operation. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added to the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

  7. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

    Full Text Available Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of the ionosphere and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m^2) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others.
The residual results show that the interpolation precision of the
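    Stripped of the variance-component estimation that is this record's contribution, the underlying ordinary-Kriging step can be sketched as follows. The semivariogram model, its parameters, and the 1-D "TEC" data points are all invented for illustration; the weights come from the standard ordinary-Kriging system, with a Lagrange multiplier enforcing that they sum to one:

```python
from math import exp


def gamma(h, sill=1.0, rng=2.0):
    """Exponential semivariogram model (illustrative parameters)."""
    return sill * (1.0 - exp(-abs(h) / rng))


def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x


def ok_predict(xs, zs, x0):
    """Ordinary Kriging prediction at x0 from observations (xs, zs)."""
    n = len(xs)
    # System: [Gamma 1; 1' 0] [w; mu] = [gamma(x_i, x0); 1]
    A = [[gamma(xs[i] - xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [gamma(xi - x0) for xi in xs] + [1.0]
    sol = solve(A, b)
    w = sol[:n]
    return sum(wi * zi for wi, zi in zip(w, zs)), w


# Toy 1-D profile (invented values, loosely "TECU")
xs = [0.0, 1.0, 3.0, 4.0]
zs = [10.0, 12.0, 11.0, 9.0]
pred, w = ok_predict(xs, zs, 2.0)
print(pred, w)
```

    Ordinary Kriging is an exact interpolator: predicting at one of the data locations returns the observed value there.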

  8. Scaling law for noise variance and spatial resolution in differential phase contrast computed tomography

    International Nuclear Information System (INIS)

    Chen Guanghong; Zambelli, Joseph; Li Ke; Bevins, Nicholas; Qi Zhihua

    2011-01-01

    Purpose: The noise variance versus spatial resolution relationship in differential phase contrast (DPC) projection imaging and computed tomography (CT) are derived and compared to conventional absorption-based x-ray projection imaging and CT. Methods: The scaling law for DPC-CT is theoretically derived and subsequently validated with phantom results from an experimental Talbot-Lau interferometer system. Results: For the DPC imaging method, the noise variance in the differential projection images follows the same inverse-square law with spatial resolution as in conventional absorption-based x-ray imaging projections. However, both in theory and experimental results, in DPC-CT the noise variance scales with spatial resolution following an inverse linear relationship with fixed slice thickness. Conclusions: The scaling law in DPC-CT implies a lesser noise, and therefore dose, penalty for moving to higher spatial resolutions when compared to conventional absorption-based CT in order to maintain the same contrast-to-noise ratio.

  9. Fuzzy Stabilization for Nonlinear Discrete Ship Steering Stochastic Systems Subject to State Variance and Passivity Constraints

    Directory of Open Access Journals (Sweden)

    Wen-Jer Chang

    2014-01-01

    Full Text Available For nonlinear discrete-time stochastic systems, a fuzzy controller design methodology is developed in this paper subject to a state variance constraint and a passivity constraint. Following the fuzzy-model-based control technique, the nonlinear discrete-time stochastic systems considered in this paper are represented by discrete-time Takagi-Sugeno fuzzy models with multiplicative noise. Employing Lyapunov stability theory, upper bound covariance control theory, and passivity theory, some sufficient conditions are derived to find parallel distributed compensation based fuzzy controllers. In order to solve these sufficient conditions, an iterative linear matrix inequality algorithm is applied based on the linear matrix inequality technique. Finally, the fuzzy stabilization problem for nonlinear discrete ship steering stochastic systems is investigated in a numerical example to illustrate the feasibility and validity of the proposed fuzzy controller design method.

  10. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  11. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...

  12. Genetic factors explain half of all variance in serum eosinophil cationic protein

    DEFF Research Database (Denmark)

    Elmose, Camilla; Sverrild, Asger; van der Sluis, Sophie

    2014-01-01

    with variation in serum ECP and to determine the relative proportion of the variation in ECP due to genetic and non-genetic factors, in an adult twin sample. METHODS: A sample of 575 twins, selected through a proband with self-reported asthma, had serum ECP, lung function, airway responsiveness to methacholine......, exhaled nitric oxide, and skin test reactivity, measured. Linear regression analysis and variance component models were used to study factors associated with variation in ECP and the relative genetic influence on ECP levels. RESULTS: Sex (regression coefficient = -0.107, P ... was statistically non-significant (r = -0.11, P = 0.50). CONCLUSION: Around half of all variance in serum ECP is explained by genetic factors. Serum ECP is influenced by sex, BMI, and airway responsiveness. Serum ECP and airway responsiveness seem not to share genetic variance....

  13. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    OpenAIRE

    Hoejgaard, B.; Vigna, E.

    2007-01-01

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean-variance approach is then compared to other investment strategies adopted in DC pension schemes, namely the target-based approach and the lifestyle strategy. The comparison is done both in a theoretical...

  14. Variance components and selection response for feather-pecking behavior in laying hens.

    Science.gov (United States)

    Su, G; Kjaer, J B; Sørensen, P

    2005-01-01

    Variance components and selection response for feather pecking behavior were studied by analyzing the data from a divergent selection experiment. An investigation indicated that a Box-Cox transformation with power lambda = -0.2 made the data approximately normally distributed and gave the best fit for the model. Variance components and selection response were estimated using Bayesian analysis with Gibbs sampling technique. The total variation was rather large for the investigated traits in both the low feather-pecking line (LP) and the high feather-pecking line (HP). Based on the mean of marginal posterior distribution, in the Box-Cox transformed scale, heritability for number of feather pecking bouts (FP bouts) was 0.174 in line LP and 0.139 in line HP. For number of feather-pecking pecks (FP pecks), heritability was 0.139 in line LP and 0.105 in line HP. No full-sib group effect and observation pen effect were found in the 2 traits. After 4 generations of selection, the total response for number of FP bouts in the transformed scale was 58 and 74% of the mean of the first generation in line LP and line HP, respectively. The total response for number of FP pecks was 47 and 46% of the mean of the first generation in line LP and line HP, respectively. The variance components and the realized selection response together suggest that genetic selection can be effective in minimizing FP behavior. This would be expected to reduce one of the major welfare problems in laying hens.
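    The Box-Cox transformation with power lambda = -0.2 mentioned above is simple to apply. In the sketch below the right-skewed "pecking counts" are simulated from a lognormal distribution (our own assumption, purely for illustration); the transform markedly reduces the skewness, which is why it improved the model fit:

```python
import random
import statistics


def box_cox(y, lam):
    """Box-Cox transform for y > 0; lam = 0 would correspond to log(y)."""
    return (y ** lam - 1.0) / lam


def skewness(xs):
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])


random.seed(3)
# Strictly positive, right-skewed values stand in for pecking counts.
raw = [random.lognormvariate(1.0, 0.8) for _ in range(5000)]
transformed = [box_cox(y, -0.2) for y in raw]

print(skewness(raw), skewness(transformed))
```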

  15. Global Distributions of Temperature Variances At Different Stratospheric Altitudes From Gps/met Data

    Science.gov (United States)

    Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.

    The GPS/MET measurements at altitudes of 5 - 35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second-order polynomial approximations in 5 - 7 km thick layers centered at 10, 20 and 30 km. Temperature deviations from the averaged values and their variances obtained for each profile are averaged for each month of the year during the GPS/MET experiment. Global distributions of temperature variances have an inhomogeneous structure. Locations and latitude distributions of the maxima and minima of the variances depend on altitude and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of the GPS/MET data analysis.

  16. A forecast comparison of volatility models

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2005-01-01

    We compare 330 ARCH-type models in terms of their ability to describe the conditional variance. The models are compared out-of-sample using DM-$ exchange rate data and IBM return data, where the latter is based on a new data set of realized variance. We find no evidence that a GARCH(1,1) is outperformed by more sophisticated models in our analysis of exchange rates. The models are compared with the test for superior predictive ability (SPA) and the reality check for data snooping (RC). Our empirical results show that the RC lacks power to an extent that makes it unable to distinguish "good" and "bad" models in our analysis.

  17. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Science.gov (United States)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
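    A static analogue conveys the variance-reduction idea: as with the FORM-based control, the sampling density is shifted toward the failure region and each sample is reweighted by the likelihood ratio. The elementary sketch below estimates a Gaussian tail probability; it only illustrates the importance-sampling mechanism, not Girsanov's transformation for randomly excited dynamical systems:

```python
import math
import random

random.seed(11)

# Failure event: X > 4 with X ~ N(0,1); exact p = 1 - Phi(4), about 3.17e-5.
a = 4.0
p_exact = 0.5 * math.erfc(a / math.sqrt(2.0))

n = 20000

# Crude Monte Carlo: almost no samples hit the failure region.
crude_hits = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) > a)
p_crude = crude_hits / n

# Importance sampling: draw from N(a, 1) -- the mean shifted to the failure
# boundary, analogous to the FORM design point -- and reweight each sample by
# the likelihood ratio phi(x) / phi(x - a) = exp(-a*x + a*a/2).
vals = []
for _ in range(n):
    x = random.gauss(a, 1.0)
    w = math.exp(-a * x + 0.5 * a * a)
    vals.append(w if x > a else 0.0)
p_is = sum(vals) / n

print(p_exact, p_crude, p_is)
```

    With the same sample budget, the reweighted estimator lands within a few percent of the exact tail probability, while the crude estimator typically sees only a handful of failures, if any.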

  18. An R package "VariABEL" for genome-wide searching of potentially interacting loci by testing genotypic variance heterogeneity

    Directory of Open Access Journals (Sweden)

    Struchalin Maksim V

    2012-01-01

    Full Text Available Abstract Background Hundreds of new loci have been discovered by genome-wide association studies of human traits. These studies have mostly focused on associations between a single locus and a trait. Interactions between genes, and between genes and environmental factors, are of interest as they can improve our understanding of the genetic background underlying complex traits. Genome-wide testing of complex genetic models is a computationally demanding task. Moreover, testing of such models leads to multiple-comparison problems that reduce the probability of new findings. Assuming that the genetic model underlying a complex trait can include hundreds of genes and environmental factors, testing these models in genome-wide association studies presents substantial difficulties. We and Pare and colleagues (2010) developed methods to overcome such difficulties. The methods are based on the fact that loci involved in interactions can show genotypic variance heterogeneity of a trait. Genome-wide testing of such heterogeneity can be a fast scanning approach that points to the interacting genetic variants. Results In this work we present a new method, SVLM, allowing variance heterogeneity analysis of imputed genetic variation. The type I error and power of this test are investigated and contrasted with those of Levene's test. We also present an R package, VariABEL, implementing existing and newly developed tests. Conclusions Variance heterogeneity analysis is a promising method for the detection of potentially interacting loci. The new method and software package developed in this work will facilitate such analyses in a genome-wide context.
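    The baseline against which SVLM is contrasted, Levene's test, is easy to compute by hand: a one-way ANOVA F statistic on the absolute deviations from group means. The simulation below is entirely our own (it does not reproduce the package's SVLM procedure) and shows the statistic separating variance-heterogeneous genotype groups from homogeneous ones:

```python
import random
import statistics


def levene_W(groups):
    """Levene's test statistic: one-way ANOVA F on |x - group mean|."""
    z = [[abs(x - statistics.fmean(g)) for x in g] for g in groups]
    k = len(z)
    n = sum(len(g) for g in z)
    zbar = statistics.fmean([v for g in z for v in g])
    zmeans = [statistics.fmean(g) for g in z]
    between = sum(len(g) * (m - zbar) ** 2 for g, m in zip(z, zmeans)) / (k - 1)
    within = sum((v - m) ** 2 for g, m in zip(z, zmeans) for v in g) / (n - k)
    return between / within


random.seed(5)
# Trait values for genotype groups AA / Aa / aa. Under an interaction, the
# *variance* can differ by genotype even when the mean does not.
hetero = [[random.gauss(0.0, s) for _ in range(300)] for s in (1.0, 1.5, 2.0)]
homo = [[random.gauss(0.0, 1.0) for _ in range(300)] for _ in range(3)]

print(levene_W(hetero), levene_W(homo))
```

    Under the null of equal variances, W follows an F distribution with (k - 1, n - k) degrees of freedom, so the heterogeneous groups produce a far larger statistic than the homogeneous ones.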

  19. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
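    A cheap numerical stand-in for such a modality check is Sarle's bimodality coefficient, (skew^2 + 1) / kurtosis, which is about 1/3 for a Gaussian and rises toward 1 for well-separated bimodal samples (5/9 is a commonly quoted threshold). This is only a moment-based heuristic, not the classification-plus-confidence scheme the paper proposes:

```python
import random
import statistics


def bimodality_coefficient(xs):
    """Sarle's bimodality coefficient b = (skew^2 + 1) / kurtosis."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    skew = statistics.fmean([((x - m) / s) ** 3 for x in xs])
    kurt = statistics.fmean([((x - m) / s) ** 4 for x in xs])
    return (skew ** 2 + 1.0) / kurt


random.seed(9)
# Two synthetic "ensemble distributions" at one grid location:
unimodal = [random.gauss(0.0, 1.0) for _ in range(4000)]
bimodal = ([random.gauss(-2.0, 0.5) for _ in range(2000)]
           + [random.gauss(2.0, 0.5) for _ in range(2000)])

print(bimodality_coefficient(unimodal), bimodality_coefficient(bimodal))
```

    The bimodal sample's low kurtosis pushes its coefficient well above the 5/9 threshold, while the Gaussian sample stays near 1/3: the summary statistics mean and variance alone would not distinguish these two cases.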

  20. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed, a fact that creates difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, although less decisively. In this investigation the genotypic variance and the genotype-environment interaction variance, which together contribute to the total (phenotypic) variance, were analysed, with the purpose of giving the breeder an idea of how selection should be made. It was found that the genotype-environment interaction variance contributes more than the genotypic variance to the total variance of seed-protein content and of yield. In the analysis of the time required to reach maturity, the genotypic variance was found to be larger than the genotype-environment interaction variance. It is therefore clear why selection for time to maturity is much easier than selection for protein content or yield, and why protein selected for in one location may differ from that selected in other locations. (author)
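The partition described in the abstract can be sketched with the textbook expected-mean-squares method for a balanced, fully random genotype x environment trial; the mean-square values below are hypothetical, chosen only to mirror the reported pattern (interaction variance exceeding genotypic variance):

```python
def variance_components(ms_g, ms_ge, ms_error, reps, envs):
    """Variance components from the expected mean squares of a random-effects
    genotype x environment trial with `reps` replicates and `envs` environments:
      E[MS_G]     = s2_e + reps*s2_ge + reps*envs*s2_g
      E[MS_GE]    = s2_e + reps*s2_ge
      E[MS_error] = s2_e
    """
    s2_e = ms_error
    s2_ge = (ms_ge - ms_error) / reps
    s2_g = (ms_g - ms_ge) / (reps * envs)
    return s2_g, s2_ge, s2_e

# hypothetical mean squares illustrating the abstract's finding for protein
# and yield: the G x E component exceeds the genotypic component
s2_g, s2_ge, s2_e = variance_components(50.0, 20.0, 5.0, reps=3, envs=4)
print(s2_g, s2_ge, s2_e)   # 2.5 5.0 5.0
```

With such a partition in hand, the breeder can judge how much of the phenotypic variance selection can actually act on.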

  1. Time Consistent Strategies for Mean-Variance Asset-Liability Management Problems

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2013-01-01

    Full Text Available This paper studies optimal time-consistent investment strategies in multiperiod asset-liability management problems under the mean-variance criterion. By applying the time-consistent model of Chen et al. (2013) and employing a dynamic programming technique, we derive time-consistent policies for asset-liability management problems in markets with and without a riskless asset, respectively. We show that the presence of a liability does affect the optimal strategy; more specifically, the liability leads to a parallel shift of the optimal time-consistent investment policy. Moreover, for an arbitrarily risk-averse investor (under the variance criterion) with liability, the time-diversification effects can be ignored in a market with a riskless asset; however, they should be considered in a market without any riskless asset.

  2. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  3. Variances as order parameter and complexity measure for random Boolean networks

    International Nuclear Information System (INIS)

    Luque, Bartolo; Ballesteros, Fernando J; Fernandez, Manuel

    2005-01-01

    Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems
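A minimal simulation sketch (assuming the standard NK random Boolean network setup with synchronous updates; the parameter choices are illustrative, not taken from the paper) shows how the temporal variance separates the frozen and fluctuating regimes:

```python
import random
from statistics import mean, pvariance

def temporal_variance(n_nodes, k, rng, steps=60, discard=30):
    """Mean over nodes of the temporal variance of node states in a random
    Boolean network with connectivity k: near 0 when the network freezes,
    larger when node states keep fluctuating."""
    inputs = [[rng.randrange(n_nodes) for _ in range(k)] for _ in range(n_nodes)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n_nodes)]
    state = [rng.randrange(2) for _ in range(n_nodes)]
    history = []
    for t in range(steps):
        # synchronous update: each node reads its k inputs as a table index
        state = [tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                 for i in range(n_nodes)]
        if t >= discard:
            history.append(state)
    return mean(pvariance([row[i] for row in history]) for i in range(n_nodes))

rng = random.Random(1)
ordered = mean(temporal_variance(30, 1, rng) for _ in range(20))  # K=1: ordered phase
chaotic = mean(temporal_variance(30, 4, rng) for _ in range(20))  # K=4: chaotic phase
print(ordered, chaotic)
```

For binary states the per-node temporal variance is bounded by 0.25, reached when a node spends half its time in each state.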

  4. Variances as order parameter and complexity measure for random Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Luque, Bartolo [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Ballesteros, Fernando J [Observatori Astronomic, Universitat de Valencia, Ed. Instituts d' Investigacio, Pol. La Coma s/n, E-46980 Paterna, Valencia (Spain); Fernandez, Manuel [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain)

    2005-02-04

    Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems.

  5. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k-eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k-eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k-eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Also attention is paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
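The unreliability of a cycle-based standard deviation can be illustrated with the classical normal-sampling result Var(s^2) = 2*sigma^4/(n-1); this sketch only demonstrates how large the variance of the variance is for a small number of cycles, and is not the fission-weight estimator proposed in the paper:

```python
import random
from statistics import variance, mean

# With only a handful of cycles (n = 10 here), the sample variance s^2 itself
# scatters widely from replication to replication, which is why a standard
# deviation estimated from few cycles is unreliable.
rng = random.Random(42)
n, reps = 10, 20000
s2_values = [variance([rng.gauss(0.0, 1.0) for _ in range(n)]) for _ in range(reps)]
empirical_vov = variance(s2_values)
theoretical_vov = 2.0 / (n - 1)          # 2*sigma^4/(n-1) with sigma = 1
print(empirical_vov, theoretical_vov)
```

The spread of s^2 is of the same order as its mean here, so quoting a standard deviation from ten cycles says little.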

  6. Improved estimation of the variance in Monte Carlo criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)

    2008-07-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k{sub eff} results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k{sub eff} will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k{sub eff} are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Also attention is paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)

  7. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/Deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)

  8. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  9. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
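A quick check for overdispersion, here a generic Pearson dispersion statistic for an intercept-only Poisson model rather than the paper's regression-based score test, can be sketched as:

```python
from statistics import mean

def pearson_dispersion(counts):
    """Pearson dispersion statistic for an intercept-only Poisson model:
    sum((y - mu)^2 / mu) / (n - p).  Values well above 1 indicate
    overdispersion (the variance exceeds the mean)."""
    mu = mean(counts)
    n, p = len(counts), 1
    return sum((y - mu) ** 2 / mu for y in counts) / (n - p)

d_over = pearson_dispersion([0, 0, 0, 10])   # strongly overdispersed
d_eq = pearson_dispersion([2, 3, 2, 3])      # close to equidispersed
print(d_over, d_eq)   # 10.0 0.133...
```

When the statistic is well above 1, quasi-likelihood or robust standard errors are the usual remedies, as the abstract concludes.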

  10. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  11. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
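The reported weighting behaviour, small weights for large-error samples and large weights for small-error samples, can be mimicked with a generic iteratively reweighted least-squares sketch; this illustrates the principle only and is not the proposed LS-SVM solver:

```python
def wls_line(xs, ys, ws):
    """Closed-form weighted least-squares fit of y = a + b*x."""
    sw = sum(ws)
    xb = sum(w * x for w, x in zip(ws, xs)) / sw
    yb = sum(w * y for w, y in zip(ws, ys)) / sw
    b = sum(w * (x - xb) * (y - yb) for w, x, y in zip(ws, xs, ys)) \
        / sum(w * (x - xb) ** 2 for w, x in zip(ws, xs))
    return yb - b * xb, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 2.0, 4.0, 6.0, 100.0]          # y = 2x with one gross outlier

# ordinary least squares: unit weights, badly pulled by the outlier
a_ols, b_ols = wls_line(xs, ys, [1.0] * 5)

# iterative reweighting: samples with large residuals get small weights
a, b = a_ols, b_ols
for _ in range(20):
    ws = [1.0 / (1.0 + (y - (a + b * x)) ** 2) for x, y in zip(xs, ys)]
    a, b = wls_line(xs, ys, ws)
print(b_ols, b)
```

The Cauchy-style weight 1/(1 + r^2) is an illustrative choice; the point is simply that downweighting large-error samples moves the fit back toward the bulk of the data.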

  12. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  13. Accounting for correlated observations in an age-based state-space stock assessment model

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders

    2016-01-01

    Fish stock assessment models often rely on size- or age-specific observations that are assumed to be statistically independent of each other. In reality, these observations are not raw observations, but rather they are estimates from a catch-standardization model or similar summary statistics base...... the independence assumption is rejected. Less fluctuating estimates of the fishing mortality are obtained due to a reduced process error. The improved model does not suffer from correlated residuals, unlike the independent model, and the variance of forecasts is decreased....

  14. Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs.

    Science.gov (United States)

    Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Cheng, A.

    2017-12-01

    A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity, and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, comparatively few new prognostic variables need to be introduced, making the technique computationally efficient. In the base version of SHOC the only new prognostic variable is the SGS turbulent kinetic energy (TKE); the developmental version adds the variances of total water and moist static energy (MSE). SHOC is now incorporated into a version of GFS that will become part of the NOAA Next Generation Global Prediction System, based around NOAA GFDL's FV3 dynamical core, the NOAA Environmental Modeling System (NEMS) coupled modeling infrastructure software, and a set of novel physical parameterizations. Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these quantities, and the radiative transfer parameterization uses the cloudiness computed by SHOC. An outstanding problem with the implementation of SHOC in the NCEP global models is excessively large high-level tropical cloudiness. Comparison of the moments of the SGS PDF diagnosed by SHOC to the moments calculated in a GigaLES simulation of a tropical deep convection case (GATE) shows that SHOC diagnoses too-narrow PDF distributions of total cloud water and MSE in the areas of deep convective detrainment. A subsequent sensitivity study of SHOC's diagnosed cloud fraction (CF) to higher-order input moments of the SGS PDF

  15. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
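The variance reduction that the bounds quantify can be demonstrated with a minimal importance-sampling sketch for a Gaussian tail probability (illustrative parameters; this is not the authors' bounding technique):

```python
import random
from math import exp
from statistics import variance, mean

# Estimate p = P(X > 3) for X ~ N(0,1) (true value ~1.35e-3) two ways and
# compare the per-sample estimator variances.
rng = random.Random(7)
N = 20000

# direct Monte Carlo: indicator of the rare event
naive = [1.0 if rng.gauss(0.0, 1.0) > 3.0 else 0.0 for _ in range(N)]

# importance sampling: draw from the biased density N(3,1) and weight each
# hit by the likelihood ratio phi(y)/phi(y-3) = exp(-3y + 4.5)
weighted = []
for _ in range(N):
    y = rng.gauss(3.0, 1.0)
    weighted.append(exp(-3.0 * y + 4.5) if y > 3.0 else 0.0)

print(mean(naive), variance(naive))
print(mean(weighted), variance(weighted))
```

Both estimators are unbiased, but the weighted terms are bounded by exp(-4.5), so the IS estimator's variance is orders of magnitude smaller; choosing the biasing parameter (here the mean shift of 3) is exactly the problem the bounds in the paper address.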

  16. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality, but its resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is evaluated experimentally, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365 %, 221 % and 40 % compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158 %, 63 % and 20 %, respectively.

  17. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of the allowable-variance setting on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT. Three different allowable variances (20%, 60% and 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS), and the EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference between the three groups. Conclusion: When arrhythmia patients undergo gated myocardial perfusion imaging, the allowable-variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)

  18. Spatially tuned normalization explains attention modulation variance within neurons.

    Science.gov (United States)

    Ni, Amy M; Maunsell, John H R

    2017-09-01

    Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803-813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375-1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. 
NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical

  19. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    Full Text Available In this paper, we present how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and so becomes an attractive choice.
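The claim that the maximum likelihood estimator attains the minimum variance bound can be checked numerically; for the Maxwell distribution with scale a, the Fisher information per observation is 6/a^2, so the bound for a is a^2/(6n). This is an independent sketch, not the paper's derivation:

```python
import random
from math import sqrt
from statistics import variance

# For the Maxwell distribution with scale a (pdf ~ x^2 exp(-x^2/(2a^2))),
# the MLE is a_hat = sqrt(sum(x_i^2)/(3n)), and the minimum variance
# (Cramer-Rao) bound for a is a^2/(6n).
rng = random.Random(3)
a, n, reps = 1.0, 100, 3000

def maxwell_sample(rng, a):
    # speed of a 3-D isotropic Gaussian with per-axis standard deviation a
    return sqrt(sum(rng.gauss(0.0, a) ** 2 for _ in range(3)))

mles = []
for _ in range(reps):
    xs = [maxwell_sample(rng, a) for _ in range(n)]
    mles.append(sqrt(sum(x * x for x in xs) / (3 * n)))

crlb = a * a / (6 * n)
print(variance(mles), crlb)
```

The empirical sampling variance of the MLE lands close to the bound, consistent with the abstract's conclusion.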

  20. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory which accounts for arm and eye movements with noise signal inputs was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and analytical solutions of the theory are obtained. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  1. The Achilles Heel of Normal Determinations via Minimum Variance Techniques: Worldline Dependencies

    Science.gov (United States)

    Ma, Z.; Scudder, J. D.; Omidi, N.

    2002-12-01

    Time series of data collected across current layers are usually organized by divining coordinate transformations (as from minimum variance) that permit a geometrical interpretation of the data collected. Almost without exception the current-layer geometry is inferred by supposing that the current-carrying layer is locally planar. Only after this geometry is ``determined'' can the various quantities predicted by theory be calculated, the precision of ``measured'' reconnection rates be assessed, and the quantitative support for or against component reconnection be evaluated. This paper defines worldline traversals across fully resolved Hall two-fluid models of reconnecting current sheets (with varying sizes of guide fields) and across a 2-D hybrid solution of a supercritical shock layer. Along each worldline, various variance techniques are used to infer current-sheet normals based on the data observed along that worldline alone. We then contrast these inferred normals with those known from the overview of the fully resolved spatial pictures of the layer. Absolute errors of 20 degrees in the normal are quite commonplace, but errors of 40-90 degrees are also implied, especially for worldlines that make more and more oblique angles to the true current-sheet normal. These mistaken ``inferences'' are traceable to the degree to which the data collected sample 2-D variations within these layers. While it is not surprising that these variance techniques give incorrect normals in the presence of layers that possess 2-D variations, it is illuminating that such large errors need not be signalled by the traditional error formulae for the error cones on normals that have been previously used to estimate the errors of normal choices. Frequently the absolute errors, which depend on the worldline path, can be 10 times the random error that formulae would predict based on eigenvalues of the covariance matrix. A given time series cannot be associated in any a priori way with a specific worldline
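The minimum variance technique under discussion can be sketched for an idealized planar layer, where it works as intended; the synthetic field below is an assumption chosen for illustration (for oblique worldlines through 2-D structures, as the abstract argues, the recovered normal can be badly wrong):

```python
import numpy as np

# Synthetic time series crossing a planar layer whose true normal is z:
# the field rotates in the x-y plane while the component along the normal
# stays nearly constant, so the minimum-variance eigenvector recovers z.
t = np.linspace(0.0, 2.0 * np.pi, 400)
B = np.column_stack([3.0 * np.sin(t),          # Bx: largest variance
                     2.0 * np.cos(2.0 * t),    # By: intermediate variance
                     0.05 * np.sin(7.0 * t)])  # Bz: small variance (normal)

cov = np.cov(B, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
normal = eigvecs[:, 0]                   # minimum-variance direction
print(eigvals, normal)
```

The traditional error cone for this normal is built from the eigenvalue ratios of `cov`; the paper's point is that this formula can understate the true error by an order of magnitude when the layer is not 1-D.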

  2. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  3. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  4. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    International Nuclear Information System (INIS)

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2012-01-01

    This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can use multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory. - Highlights: ► The variability of VOC measurements was partitioned using nested analysis. ► Indoor VOCs were primarily controlled by seasonal and residence effects. ► Outdoor VOC levels were homogeneous within neighborhoods. ► Measurement uncertainty was high for many outdoor VOCs. ► Variance component analysis is useful for designing effective sampling programs. - Indoor VOC concentrations were primarily controlled by seasonal and residence effects; and outdoor concentrations were homogeneous within neighborhoods. Variance component analysis is a useful tool for designing effective sampling programs.
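The apportionment used in the study can be illustrated with the simplest case, a balanced one-way random-effects design; this is a sketch of the general nested approach, not the four-level city/residence/season/measurement model of the paper:

```python
from statistics import mean

def one_way_components(groups):
    """ANOVA-based variance components for a balanced one-way random-effects
    design (e.g. residences within a region): returns (between, within)."""
    k, n = len(groups), len(groups[0])
    grand = mean(x for g in groups for x in g)
    means = [mean(g) for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return (msb - msw) / n, msw

# three hypothetical groups of duplicate measurements
between, within = one_way_components([[0, 2], [4, 6], [8, 10]])
print(between, within)   # 15.0 2.0
```

Dividing each component by their sum gives the percentage contributions reported in the abstract (seasonal, residence, city and measurement shares).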

  5. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. 

  6. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling, to assess the average, the variance and the nature of the distribution of elemental concentrations in local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which implicitly and conceptually underlies any survey performed. (author)

  7. The interpersonal problems of the socially avoidant: self and peer shared variance.

    Science.gov (United States)

    Rodebaugh, Thomas L; Gianoli, Mayumi Okada; Turkheimer, Eric; Oltmanns, Thomas F

    2010-05-01

    We demonstrate a means of conservatively combining self and peer data regarding personality pathology and interpersonal behavior through structural equation modeling, focusing on avoidant personality disorder traits as well as those of two comparison personality disorders (dependent and narcissistic). Assessment of the relationship between personality disorder traits and interpersonal problems based on either self or peer data alone would result in counterintuitive findings regarding avoidant personality disorder. In contrast, analysis of the variance shared between self and peer leads to results that are more in keeping with hypothetical relationships between avoidant traits and interpersonal problems. Similar results were found for both dependent personality disorder traits and narcissistic personality disorder traits, exceeding our expectations for this method.

  8. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Han, Beom Seok; Kim, Chang Hyo

    2003-01-01

    The Monte Carlo (MC) power method based on the fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and the fission reaction rate estimates. Because of the biases, the apparent variances of keff and the fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of keff and the fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods instead of the apparent variances provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of individual pins of the system. (authors)
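
The effect of the batch method can be illustrated on a generic autocorrelated sequence; here an AR(1) chain stands in for inter-generationally correlated fission-source samples, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 100_000, 0.9                 # chain length, lag-1 correlation
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps[t]    # correlated "cycle estimates"

# "Apparent" variance of the mean: pretends the samples are independent.
apparent = x.var(ddof=1) / n

# Batch-means estimate: average over batches much longer than the
# correlation length, so the batch means are nearly independent.
n_batch = 100
batches = x.reshape(n_batch, -1).mean(axis=1)
batch_var = batches.var(ddof=1) / n_batch

inflation = batch_var / apparent      # how badly the apparent variance underestimates
```

For an AR(1) chain the true inflation factor is (1+rho)/(1-rho), i.e. 19 here, which is the kind of gap between apparent and real variance the abstract describes.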

  9. Genetic selection for increased mean and reduced variance of twinning rate in Belclare ewes.

    Science.gov (United States)

    Cottle, D J; Gilmour, A R; Pabiou, T; Amer, P R; Fahey, A G

    2016-04-01

    It is sometimes possible to breed for more uniform individuals by selecting animals with a greater tendency to be less variable, that is, those with a smaller environmental variance. This approach has been applied to reproduction traits in various animal species. We have evaluated fecundity in the Irish Belclare sheep breed by analyses of flocks with differing average litter size (number of lambs per ewe per year, NLB) and have estimated the genetic variance in environmental variance of lambing traits using double hierarchical generalized linear models (DHGLM). The data set comprised of 9470 litter size records from 4407 ewes collected in 56 flocks. The percentage of pedigreed lambing ewes with singles, twins and triplets was 30, 54 and 14%, respectively, in 2013 and has been relatively constant for the last 15 years. The variance of NLB increases with the mean in this data; the correlation of mean and standard deviation across sires is 0.50. The breeding goal is to increase the mean NLB without unduly increasing the incidence of triplets and higher litter sizes. The heritability estimates for lambing traits were NLB, 0.09; triplet occurrence (TRI) 0.07; and twin occurrence (TWN), 0.02. The highest and lowest twinning flocks differed by 23% (75% versus 52%) in the proportion of ewes lambing twins. Fitting bivariate sire models to NLB and the residual from the NLB model using a double hierarchical generalized linear model (DHGLM) model found a strong genetic correlation (0.88 ± 0.07) between the sire effect for the magnitude of the residual (VE ) and sire effects for NLB, confirming the general observation that increased average litter size is associated with increased variability in litter size. We propose a threshold model that may help breeders with low litter size increase the percentage of twin bearers without unduly increasing the percentage of ewes bearing triplets in Belclare sheep. © 2015 Blackwell Verlag GmbH.

  10. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
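
As a minimal illustration of the variance-reduction idea, here is the classical antithetic-variates technique on a one-dimensional toy integrand (a stand-in for a corrector-problem output, not the homogenization setting itself):

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.exp                # integrand; E[f(U)] = e - 1 for U ~ Uniform(0, 1)
n = 10_000

u = rng.uniform(size=n)
plain = f(u)                          # crude Monte Carlo draws
anti = 0.5 * (f(u) + f(1.0 - u))      # antithetic pairs reuse each uniform

est_plain, est_anti = plain.mean(), anti.mean()
var_plain = plain.var(ddof=1) / n     # variance of the crude estimator
var_anti = anti.var(ddof=1) / n       # variance of the antithetic estimator
```

Because f(U) and f(1-U) are strongly negatively correlated for a monotone integrand, the antithetic estimator's variance drops by well over an order of magnitude at the same sample count.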

  11. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Science.gov (United States)

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
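
The output variance decomposition the authors employ can be sketched with a pick-freeze estimator of first-order Sobol' indices on a toy model (not the farmland ABM; the model and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

def model(x1, x2):
    # toy model whose output variance splits 1:4 between the two inputs
    return x1 + 2.0 * x2

a1, a2 = rng.normal(size=(2, n))   # base sample
b1, b2 = rng.normal(size=(2, n))   # independent resample

y = model(a1, a2)
y_fix1 = model(a1, b2)   # freeze x1, redraw x2
y_fix2 = model(b1, a2)   # freeze x2, redraw x1

var_y = y.var()
s1 = np.cov(y, y_fix1)[0, 1] / var_y   # first-order index of x1 (true 0.2)
s2 = np.cov(y, y_fix2)[0, 1] / var_y   # first-order index of x2 (true 0.8)
```

Inputs with negligible indices are the ones a sensitivity-driven simplification, as in the paper's last two experiments, would fix at nominal values.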

  12. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Directory of Open Access Journals (Sweden)

    Arika Ligmann-Zielinska

    Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  13. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  14. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing under discrete monitoring of realized variance. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that could potentially serve as a hedge for the variance swaps traded in BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in only a few exercise prices, as is the case of FX options traded in BM&F. To this end, portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
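
The continuous-monitoring replication of (DEMETERFI et al., 1999) weights out-of-the-money options by dK/K² (the log-contract decomposition). A sketch under illustrative assumptions (zero rates, flat Black-Scholes volatility, a hypothetical strike grid) recovers the fair variance strike σ²:

```python
import math

def bs_price(S, K, T, sigma, call=True):
    # Black-Scholes with zero rates, used only to generate a test option surface
    d1 = (math.log(S / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    c = S * N(d1) - K * N(d2)
    return c if call else c - S + K   # put via put-call parity

def fair_variance_strike(S, T, sigma, k_lo=20.0, k_hi=500.0, n=5000):
    """Static replication: sum OTM option prices with weight dK / K^2."""
    dk = (k_hi - k_lo) / n
    total = 0.0
    for i in range(n):
        K = k_lo + (i + 0.5) * dk
        otm = bs_price(S, K, T, sigma, call=(K >= S))   # puts below spot, calls above
        total += otm / K**2 * dk
    return 2.0 / T * total

k_var = fair_variance_strike(100.0, 1.0, 0.20)   # should be close to 0.20**2 = 0.04
```

With only a few traded strikes, as in the BM&F FX options the abstract discusses, this integral is approximated coarsely, which is exactly where the hedge effectiveness deteriorates.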

  15. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    Science.gov (United States)

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.

  16. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
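
The importance-weighted zero-variance idea can be demonstrated on a one-dimensional toy tally: sampling proportionally to probability times expected score makes every weighted sample equal the true mean (an illustrative single-tally example, not the paper's correlated multi-tally scheme):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000

# Target: I = E[f(X)] with X ~ Uniform(0, 1), f(x) = 2x, so I = 1.
f = lambda x: 2.0 * x

# Crude Monte Carlo: unbiased but with nonzero variance.
crude = f(rng.uniform(size=n))

# Zero-variance density q(x) = p(x) f(x) / I = 2x,
# sampled by inverse transform: X = sqrt(U).
x = np.sqrt(rng.uniform(size=n))
weighted = f(x) * 1.0 / (2.0 * x)   # f(x) * p(x) / q(x) equals I for every draw
```

Since the zero-variance density depends on the tally's own importance function, a second tally with a different f would need a different q, which is the conflict the paper resolves for positively correlated tallies.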

  17. Robust variance estimation with dependent effect sizes: practical considerations including a software tutorial in Stata and spss.

    Science.gov (United States)

    Tanner-Smith, Emily E; Tipton, Elizabeth

    2014-03-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and spss (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and spss macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic with the actual sample design to the variance of that statistic with a simple random sample of same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which had been set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions. The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tille, 2000 is a well-established method to obtain variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor Series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999, which relies on the concept of influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986, has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
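
Taylor linearization can be sketched for the simplest nonlinear statistic, a ratio of totals under simple random sampling (synthetic data; the Laeken indicators themselves need the influence-function generalization described in the abstract):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.uniform(1.0, 3.0, size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)

# Ratio estimator and its linearized variance:
# z_i = (y_i - R x_i) / x_bar is the first-order Taylor expansion variable.
r = y.sum() / x.sum()
z = (y - r * x) / x.mean()
var_lin = z.var(ddof=1) / n        # with-replacement SRS approximation

# Delete-one jackknife of the ratio, as an independent check.
jk = np.array([np.delete(y, i).sum() / np.delete(x, i).sum() for i in range(n)])
var_jk = (n - 1) / n * ((jk - jk.mean()) ** 2).sum()
```

For a smooth statistic like this, the linearized and jackknife variances agree to first order, which is the property that makes linearization attractive for production variance estimation.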

  19. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  20. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA... The Occupational Safety and Health Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the...

  1. Planar and SPECT Monte Carlo acceleration using a variance reduction technique in I-131 imaging

    International Nuclear Information System (INIS)

    Khosravi, H. R.; Sarkar, S.; Takavar, A.; Saghari, M.; Shahriari, M.

    2007-01-01

    Various variance reduction techniques such as forced detection (FD) have been implemented in Monte Carlo (MC) simulation of nuclear medicine in an effort to decrease the simulation time while maintaining accuracy. However, most of these techniques still result in MC simulation times too long for routine use. Materials and Methods: The convolution-based forced detection (CFD) method was implemented as a variance reduction technique in the well-known SIMIND MC photon simulation software. A variety of simulations, including point and extended sources in uniform and non-uniform attenuation media, were performed to compare the FD and CFD versions of SIMIND modeling for the I-131 radionuclide and camera configurations. Experimental measurement of the system response function was compared to FD and CFD simulation data. Results: Different simulations using the CFD method agree very well with experimental measurements as well as with the FD version. CFD simulations of the system response function and larger sources in uniform and non-uniform attenuated phantoms also agree well with the FD version of SIMIND. Conclusion: CFD has been modeled into the SIMIND MC program and validated. With the current implementation of CFD, simulation times were approximately 10-15 times shorter, with similar accuracy and image quality compared with FD MC.

  2. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
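
The difference-based idea for residual variance can be sketched with the classical first-order (Rice-type) estimator, a special case of the difference sequences discussed in the abstract; data and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
n, sigma = 2000, 0.3
t = np.linspace(0.0, 1.0, n)
y = np.sin(2.0 * np.pi * t) + rng.normal(scale=sigma, size=n)  # smooth signal + noise

# Successive differences cancel the smooth trend; E[(y_{i+1}-y_i)^2] ~ 2 sigma^2,
# so the residual variance is estimated without fitting the nonparametric part.
d = np.diff(y)
sigma2_hat = (d ** 2).sum() / (2.0 * (n - 1))   # estimates sigma**2 = 0.09
```

Longer, optimally weighted difference sequences (and the paper's regression over several difference-based estimators) trade off this estimator's bias and variance, especially for rough functions and small samples.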

  3. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  4. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference (Genetics Selection Evolution 2010, 42:29)

    DEFF Research Database (Denmark)

    Ødegård, Jørgen; Meuwissen, Theo HE; Heringstad, Bjørg

    2010-01-01

    Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative... records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to the sire-dam model). Conclusions The new algorithm to estimate genetic parameters via Gibbs sampling solves the bias problems typically occurring in animal...

  5. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  6. Variance and covariance components for liability of piglet survival during different periods

    DEFF Research Database (Denmark)

    Su, G; Sorensen, D; Lund, M S

    2008-01-01

    Variance and covariance components for piglet survival in different periods were estimated from individual records of 133 004 Danish Landrace piglets and 89 928 Danish Yorkshire piglets, using a liability threshold model including both direct and maternal additive genetic effects. At the individu...

  7. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating from the upper chamber of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, which is shortly called the RR interval. The irregularity could be represented using the variance or spread of RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation compared to the normal rhythm. Utilizing a simple detection technique and variances of RR intervals, we find good atrial fibrillation detection performance.
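
A minimal sketch of such a detector thresholds the variance of RR intervals in fixed windows; all rhythm parameters and the threshold below are illustrative, not the paper's clinical values:

```python
import numpy as np

def rr_variance_flags(r_peaks, window=10, threshold=0.01):
    """Flag fixed windows of RR intervals whose variance (s^2) exceeds threshold."""
    rr = np.diff(np.asarray(r_peaks, dtype=float))   # RR intervals in seconds
    return [bool(rr[i:i + window].var() > threshold)
            for i in range(0, len(rr) - window + 1, window)]

rng = np.random.default_rng(7)
# 20 sinus-rhythm beats (RR ~ 0.8 s, tiny jitter) followed by 20 irregular beats
sinus_rr = 0.8 + rng.normal(scale=0.01, size=20)
af_rr = 0.6 + rng.uniform(0.0, 0.6, size=20)
r_peaks = np.concatenate(([0.0], np.cumsum(np.concatenate((sinus_rr, af_rr)))))
flags = rr_variance_flags(r_peaks)   # low-variance windows False, irregular ones True
```

On this synthetic recording the first two windows (regular rhythm) stay below the variance threshold while the irregular segment is flagged, which mirrors the behavior the abstract reports on clinical data.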

  8. The mean-variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI.

    Science.gov (United States)

    Thompson, William H; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity, and the problem of deciding which brain connections are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed.
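
The two quantities at issue, the mean and the variance of a sliding-window connectivity time-series, can be sketched on synthetic signals (window length and signal model are illustrative, not the resting-state fMRI pipeline):

```python
import numpy as np

rng = np.random.default_rng(8)
n, w = 2000, 100   # time points, sliding-window length

shared = rng.normal(size=n)
a = shared + 0.8 * rng.normal(size=n)   # two signals sharing a common component
b = shared + 0.8 * rng.normal(size=n)

# Sliding-window correlation time-series between the two signals
corr = np.array([np.corrcoef(a[i:i + w], b[i:i + w])[0, 1]
                 for i in range(n - w + 1)])

conn_mean = corr.mean()   # basis for magnitude-based thresholding
conn_var = corr.var()     # basis for variance-based thresholding
```

A magnitude-based strategy ranks edges by `conn_mean`, a variance-based strategy by `conn_var`; the paper's point is that these two rankings diverge because the two quantities scale inversely in real data.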

  9. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented......, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient....

  10. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    International Nuclear Information System (INIS)

    Song Ningfang; Yuan Rui; Jin Jing

    2011-01-01

    Satellite motion included in gyro output disturbs the estimation of Allan variance coefficients of a fiber optic gyro on board. Moreover, although the Allan variance is a standard method for noise analysis of fiber optic gyros, its offline computational effort and data storage requirements are too large for online estimation. In addition, with the development of deep space exploration, satellites increasingly require autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet satellite autonomy, we present a new autonomous method for estimation of Allan variance coefficients including rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between angle increments of the star sensor and gyro to remove satellite motion from gyro output, and propose a state-space model using a nonlinear adaptive filter technique for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show the method correctly estimates Allan variance coefficients, R = 2.7965×10⁻⁴ °/h², K = 1.1714×10⁻³ °/h^1.5, B = 1.3185×10⁻³ °/h, N = 5.982×10⁻⁴ °/h^0.5 and Q = 5.197×10⁻⁷ ° in real time, and tracks degradation of gyro performance from initial values, R = 0.651 °/h², K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10⁻⁵ °, to final estimations, R = 9.548 °/h², K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10⁻⁴ °, due to gamma radiation in space. The technique proposed here effectively isolates satellite motion, and requires no data storage and no support from the ground.
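
As a minimal sketch of the offline baseline the paper replaces, here is the non-overlapping Allan variance, checked on white rate noise where AVAR scales as 1/τ (illustrative data, not the onboard gyro algorithm):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of rate samples y at cluster size m."""
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(9)
rate = rng.normal(scale=0.1, size=100_000)   # white rate noise (angle random walk)

a1 = allan_variance(rate, 1)     # ~ sigma^2 = 0.01
a10 = allan_variance(rate, 10)   # ~ sigma^2 / 10
ratio = a1 / a10                 # white noise: AVAR ~ 1/tau, so ratio ~ 10
```

Each noise term in the abstract (rate ramp, rate random walk, bias instability, angular random walk, quantization) produces a characteristic slope on the log-log AVAR curve, which is what the coefficients R, K, B, N and Q parameterize.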

  11. Diversification in the driveway: mean-variance optimization for greenhouse gas emissions reduction from the next generation of vehicles

    International Nuclear Information System (INIS)

    Oliver Gao, H.; Stasko, Timon H.

    2009-01-01

    Modern portfolio theory is applied to the problem of selecting which vehicle technologies and fuels to use in the next generation of vehicles. Selecting vehicles with the lowest lifetime cost is complicated by the fact that future prices are uncertain, just as selecting securities for an investment portfolio is complicated by the fact that future returns are uncertain. A quadratic program is developed based on modern portfolio theory, with the objective of minimizing the expected lifetime cost of the 'vehicle portfolio'. Constraints limit greenhouse gas emissions, as well as the variance of the cost. A case study is performed for light-duty passenger vehicles in the United States, drawing emissions and usage data from the US Environmental Protection Agency's MOVES and Department of Energy's GREET models, among other sources. Four vehicle technologies are considered: conventional gasoline, conventional diesel, grid-independent (non-plug-in) gasoline-electric hybrid, and flex fuel using E85. Results indicate that much of the uncertainty surrounding cost stems from fuel price fluctuations, and that fuel efficient vehicles can lower cost variance. Hybrids exhibit the lowest cost variances of the technologies considered, making them an arguably financially conservative choice.
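    The constrained mean-variance selection described above can be sketched in miniature. All numbers below (costs, cost covariance, emissions, caps) are invented for illustration; the study's actual data come from the MOVES and GREET models, and a coarse grid search stands in for a proper quadratic-program solver:

```python
import itertools
import numpy as np

# Hypothetical per-vehicle expected lifetime costs (k$), cost covariance,
# and lifetime CO2-equivalent emissions (t) for four technologies:
# gasoline, diesel, hybrid, E85 flex fuel.
cost = np.array([30.0, 32.0, 33.0, 31.0])
cov = np.diag([25.0, 20.0, 9.0, 30.0])     # hybrids assumed least volatile
emis = np.array([60.0, 55.0, 40.0, 50.0])
emis_cap, var_cap = 50.0, 18.0             # fleet-average caps

# Grid search over portfolio shares: minimise expected cost subject to
# the emissions cap and the cost-variance cap.
best, best_cost = None, np.inf
steps = np.arange(0.0, 1.0001, 0.05)
for a, b, c in itertools.product(steps, repeat=3):
    d = 1.0 - a - b - c
    if d < -1e-9:
        continue
    w = np.array([a, b, c, max(d, 0.0)])
    if emis @ w > emis_cap or w @ cov @ w > var_cap:
        continue
    if cost @ w < best_cost:
        best, best_cost = w, cost @ w
```

    The variance cap is what pushes the optimum away from the single cheapest technology and toward a diversified mix, mirroring the paper's finding that low-variance hybrids earn a large share.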

  12. Diversification in the driveway: mean-variance optimization for greenhouse gas emissions reduction from the next generation of vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Oliver Gao, H.; Stasko, Timon H. [School of Civil and Environmental Engineering, Cornell University, Ithaca, NY 14853 (United States)

    2009-12-15

    Modern portfolio theory is applied to the problem of selecting which vehicle technologies and fuels to use in the next generation of vehicles. Selecting vehicles with the lowest lifetime cost is complicated by the fact that future prices are uncertain, just as selecting securities for an investment portfolio is complicated by the fact that future returns are uncertain. A quadratic program is developed based on modern portfolio theory, with the objective of minimizing the expected lifetime cost of the 'vehicle portfolio'. Constraints limit greenhouse gas emissions, as well as the variance of the cost. A case study is performed for light-duty passenger vehicles in the United States, drawing emissions and usage data from the US Environmental Protection Agency's MOVES and Department of Energy's GREET models, among other sources. Four vehicle technologies are considered: conventional gasoline, conventional diesel, grid-independent (non-plug-in) gasoline-electric hybrid, and flex fuel using E85. Results indicate that much of the uncertainty surrounding cost stems from fuel price fluctuations, and that fuel efficient vehicles can lower cost variance. Hybrids exhibit the lowest cost variances of the technologies considered, making them an arguably financially conservative choice. (author)

  13. Relationship between turbulence energy and density variance in the solar neighbourhood molecular clouds

    Science.gov (United States)

    Kainulainen, J.; Federrath, C.

    2017-11-01

    The relationship between turbulence energy and gas density variance is a fundamental prediction for turbulence-dominated media and is commonly used in analytic models of star formation. We determine this relationship for 15 molecular clouds in the solar neighbourhood. We use the line widths of the CO molecule as the probe of the turbulence energy (sonic Mach number, ℳs) and three-dimensional models to reconstruct the density probability distribution function (ρ-PDF) of the clouds, derived using near-infrared extinction and Herschel dust emission data, as the probe of the density variance (σs). We find no significant correlation between ℳs and σs among the studied clouds, but we cannot rule out a weak correlation either. In the context of turbulence-dominated gas, the range of the ℳs and σs values corresponds to the model predictions. The data cannot constrain whether the turbulence-driving parameter, b, and/or thermal-to-magnetic pressure ratio, β, vary among the sample clouds. Most clouds are not in agreement with field strengths stronger than given by β ≲ 0.05. A model with b2β/ (β + 1) = 0.30 ± 0.06 provides an adequate fit to the cloud sample as a whole. Based on the average behaviour of the sample, we can rule out three regimes: (i) strong compression combined with a weak magnetic field (b ≳ 0.7 and β ≳ 3); (ii) weak compression (b ≲ 0.35); and (iii) a strong magnetic field (β ≲ 0.1). When we include independent magnetic field strength estimates in the analysis, the data rule out solenoidal driving (b < 0.4) for the majority of the solar neighbourhood clouds. However, most clouds have b parameters larger than unity, which indicates a discrepancy with the turbulence-dominated picture; we discuss the possible reasons for this.

  14. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
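    The idea of the article can be sketched as follows: sample the density on a fine grid, normalise the values into discrete probabilities, and compute mean and variance as plain weighted sums. The exponential density with rate 2 is used here as an assumed example (true mean 1/2, true variance 1/4):

```python
import numpy as np

# Discretise f(x) = lam * exp(-lam * x) on a fine grid and normalise,
# turning the continuous density into a discrete probability table.
lam = 2.0
x = np.linspace(0.0, 20.0, 200_001)      # fine grid covering the support
p = lam * np.exp(-lam * x)
p /= p.sum()                             # discrete probabilities

mean = (p * x).sum()                     # approximates 1/lam = 0.5
var = (p * (x - mean) ** 2).sum()        # approximates 1/lam**2 = 0.25
```

    No integration by parts is needed; the weighted sums converge to the calculus answers as the grid is refined.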

  15. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
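    The core of such a variance-function fit can be sketched from the abstract's description: estimate the exponent of a power-law variance-mean relationship from the within-set variability of many small sets of replicates. This is a simplified log-log regression, not the paper's modified likelihood method, and all simulation parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate many small sets of replicate responses whose true standard
# deviation grows as a power of the mean: sd = a * mean**(theta/2).
theta_true, a = 1.6, 0.05
means = rng.uniform(10.0, 1000.0, 300)
sets = [rng.normal(m, a * m ** (theta_true / 2), size=4) for m in means]

# Estimate theta from within-set variability by regressing the log of the
# sample variance on the log of the sample mean across the sets.
ybar = np.array([s.mean() for s in sets])
s2 = np.array([s.var(ddof=1) for s in sets])
theta_hat, log_a2 = np.polyfit(np.log(ybar), np.log(s2), 1)
```

    The fitted variance function then supplies the weights used when fitting the dose-response curve, and large ratios of actual to fitted standard deviation flag outlying sets, as in the program's output options.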

  16. The Variance-covariance Method using IOWGA Operator for Tourism Forecast Combination

    Directory of Open Access Journals (Sweden)

    Liangping Wu

    2014-08-01

    Full Text Available Three combination methods commonly used in tourism forecasting are the simple average method, the variance-covariance method and the discounted MSFE method. These methods assign each individual forecasting model a weight that cannot change over time. In this study, we introduce into tourism forecasting the IOWGA operator combination method, which can overcome this defect of the three previous combination methods. Moreover, we further investigate the performance of the four combination methods through a theoretical evaluation and a forecasting evaluation. The results of the theoretical evaluation show that the IOWGA operator combination method performs extremely well and outperforms the other forecast combination methods. Furthermore, in the forecasting evaluation the IOWGA operator combination method also forecasts well, performing almost the same as the variance-covariance combination method. The IOWGA operator combination method mainly reflects the maximization of forecasting accuracy, while the variance-covariance combination method mainly reflects the minimization of forecast error. For future research, it may be worthwhile to introduce and examine other new combination methods that may improve forecasting accuracy, or to employ other techniques to control the timing of weight updates in combined forecasts.
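    The variance-covariance combination method mentioned above can be sketched directly: weights are proportional to the rows of the inverse forecast-error covariance matrix, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The data below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two individual forecasts of the same series with different error variances.
actual = rng.normal(100.0, 10.0, 400)
f1 = actual + rng.normal(0.0, 4.0, 400)     # better model
f2 = actual + rng.normal(0.0, 8.0, 400)     # worse model

# Variance-covariance weights: w = Sigma^-1 1 / (1' Sigma^-1 1),
# estimated from the historical forecast-error covariance matrix.
errors = np.stack([f1 - actual, f2 - actual])
sigma = np.cov(errors)
ones = np.ones(2)
w = np.linalg.solve(sigma, ones)
w /= ones @ w

combined = w[0] * f1 + w[1] * f2
mse = ((combined - actual) ** 2).mean()
# the combination has lower MSE than either individual forecast
```

    Note that, unlike the IOWGA operator method the paper advocates, these weights are fixed: they do not adapt to each model's accuracy at each time point.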

  17. Model-based mean square error estimators for k-nearest neighbour predictions and applications using remotely sensed data for forest inventories

    Science.gov (United States)

    Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo

    2009-01-01

    New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...

  18. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…

  19. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient as the size of the system to be solved increases. However, evaluating the probability of occurrence of very rare events by simulation means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows simultaneous estimation of several quantities. Therefore, making the estimation of one of them more accurate could at the same time degrade the variance of the other estimations. We propose here a method to reduce the variance for several quantities simultaneously, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the method we propose is impossible to apply exactly. However, we show that simple approximations of it may be very efficient. (author)
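    The zero-variance idea referred to above can be illustrated on a toy rare-event problem (not the paper's Markovian transport formulation): when the sampling density is proportional to the integrand, every history returns the same weight and the estimator's variance vanishes.

```python
import numpy as np

rng = np.random.default_rng(3)
c = 8.0                       # rare-event threshold for X ~ Exp(1)
p_true = np.exp(-c)           # P(X > c) = e^-c, about 3.4e-4

n = 20_000
# Analogue estimator: sample from Exp(1) directly; almost all histories
# fail to reach the threshold, so the relative error is large.
x = rng.exponential(1.0, n)
p_analog = (x > c).mean()

# Biased estimator: sample from g(x) = e^-(x - c) on [c, inf), so every
# history is "successful"; the weight f(x)/g(x) = e^-c is constant,
# which is exactly the zero-variance situation.
y = c + rng.exponential(1.0, n)
weights = np.exp(-c) * np.ones(n)
p_biased = weights.mean()
```

    In realistic problems the ideal biasing density depends on the unknown answer, so only approximations of it are available; the abstract's point is that good approximations can still be very efficient, and that biasing toward one tally can degrade others.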

  20. Waste Isolation Pilot Plant no-migration variance petition

    International Nuclear Information System (INIS)

    1990-01-01

    Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful ''no-migration'' demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989

  1. Variance components estimation for farrowing traits of three purebred pigs in Korea

    Directory of Open Access Journals (Sweden)

    Bryan Irvine Lopez

    2017-09-01

    Full Text Available Objective This study was conducted to estimate breed-specific variance components for total number born (TNB), number born alive (NBA) and mortality rate from birth through weaning, including stillbirths (MORT), of three main swine breeds in Korea. In addition, the importance of including maternal genetic and service sire effects in estimation models was evaluated. Methods Records of farrowing traits from 6,412 Duroc, 18,020 Landrace, and 54,254 Yorkshire sows collected from January 2001 to September 2016 from different farms in Korea were used in the analysis. Animal models and the restricted maximum likelihood method were used to estimate the animal genetic, permanent environmental, maternal genetic, service sire and residual variances. Results The heritability estimates ranged from 0.072 to 0.102, 0.090 to 0.099, and 0.109 to 0.121 for TNB; 0.087 to 0.110, 0.088 to 0.100, and 0.099 to 0.107 for NBA; and 0.027 to 0.031, 0.050 to 0.053, and 0.073 to 0.081 for MORT in the Duroc, Landrace and Yorkshire breeds, respectively. The proportion of the total variation due to permanent environmental effects, maternal genetic effects, and service sire effects ranged from 0.042 to 0.088, 0.001 to 0.031, and 0.001 to 0.021, respectively. Spearman rank correlations among models ranged from 0.98 to 0.99, demonstrating that the maternal genetic and service sire effects have only a small effect on the precision of the breeding values. Conclusion Models that include additive genetic and permanent environmental effects are suitable for farrowing traits in the Duroc, Landrace, and Yorkshire populations in Korea. These breed-specific variance component estimates for litter traits can be utilized for pig improvement programs in Korea.
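    How the reported ratios are formed from variance components can be shown in a few lines. The component values below are hypothetical, chosen only so that the resulting heritability falls in the range the abstract reports for Yorkshire TNB:

```python
# Illustrative variance components for a litter trait (hypothetical
# values, not taken from the paper's REML output).
var_additive = 1.10       # animal additive genetic
var_pe = 0.55             # permanent environment of the sow
var_maternal = 0.10       # maternal genetic
var_service_sire = 0.08
var_residual = 7.50

var_total = (var_additive + var_pe + var_maternal
             + var_service_sire + var_residual)
h2 = var_additive / var_total          # narrow-sense heritability
pe_ratio = var_pe / var_total          # share of permanent environment
```

    The low heritabilities typical of litter traits follow directly from the residual variance dominating the total.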

  2. Flow rate dependent extra-column variance from injection in capillary liquid chromatography.

    Science.gov (United States)

    Aggarwal, Pankaj; Liu, Kun; Sharma, Sonika; Lawson, John S; Dennis Tolley, H; Lee, Milton L

    2015-02-06

    Efficiency and resolution in capillary liquid chromatography (LC) can be significantly affected by extra-column band broadening, especially for isocratic separations. This is particularly a concern in evaluating column bed structure using non-retained test compounds. The band broadening due to an injector supplied with a commercially available capillary LC system was characterized from experimental measurements. The variance from the injection valve was found to include a contribution independent of the injection volume, showing an exponential dependence on flow rate. The overall extra-column variance from the injection valve was found to vary between 23 and 34 nL. A new mathematical model was derived that explains this exponential contribution of extra-column variance to chromatographic performance. The chromatographic efficiency was compromised by ∼130% for a non-retained analyte because of injection valve dead volume. The measured chromatographic efficiency was greatly improved when a new nano-flow pumping system with an integrated injection valve was used. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Analytical results of variance reduction characteristics of biased Monte Carlo for deep-penetration problems

    International Nuclear Information System (INIS)

    Murthy, K.P.N.; Indira, R.

    1986-01-01

    An analytical formulation is presented for calculating the mean and variance of transmission for a model deep-penetration problem. With this formulation, the variance reduction characteristics of two biased Monte Carlo schemes are studied. The first is the usual exponential biasing wherein it is shown that the optimal biasing parameter depends sensitively on the scattering properties of the shielding medium. The second is a scheme that couples exponential biasing to the scattering angle biasing proposed recently. It is demonstrated that the coupled scheme performs better than exponential biasing

  4. The enhanced variance propagation code for the Idaho Chemical Processing Plant

    International Nuclear Information System (INIS)

    Kern, E.A.; Zack, N.R.; Britschgi, J.J.

    1992-01-01

    The Variance Propagation (VP) code was developed by the Los Alamos National Laboratory's Safeguards Systems Group to provide off-line variance propagation and systems analysis for nuclear material processing facilities. The code can also be used as a tool in the design and evaluation of material accounting systems. In this regard, the VP code was enhanced to incorporate a model of the material accountability measurements used in the Idaho Chemical Processing Plant operated by the Westinghouse Idaho Nuclear Company. Inputs to the code were structured to account for the dissolver/headend process and the waste streams. A sensitivity analysis was performed to determine the sensitivity of the overall material balance error to measurement and sampling errors. We determined that the material balance error is very sensitive to changes in the sampling errors. 3 refs

  5. Empirical single sample quantification of bias and variance in Q-ball imaging.

    Science.gov (United States)

    Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A

    2018-02-06

    The bias and variance of high angular resolution diffusion imaging methods have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques may be used to estimate the bias and variance of high angular resolution diffusion imaging metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
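    The SIMEX mechanism described above can be sketched on a toy metric with a known noise-induced bias (the naive variance of a noisy signal is inflated by the noise variance). This assumes the measurement-noise level is known, as SIMEX requires; the numbers are illustrative, not diffusion data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Signal with true variance 4.0, observed with additive noise of SD 1.0;
# the naive variance estimate is biased upward by the noise variance.
signal = rng.normal(0.0, 2.0, 50_000)
noise_sd = 1.0
observed = signal + rng.normal(0.0, noise_sd, signal.size)

# SIMEX: add extra noise at increasing levels lambda, measure the metric,
# then extrapolate the trend back to lambda = -1 (the no-noise case).
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
metric = np.array([
    np.var(observed + rng.normal(0.0, np.sqrt(l) * noise_sd, observed.size))
    for l in lams
])
slope, intercept = np.polyfit(lams, metric, 1)
simex_estimate = intercept - slope     # linear extrapolation to lambda = -1

naive_bias = np.var(observed) - 4.0    # about +1.0
simex_bias = simex_estimate - 4.0      # much closer to zero
```

    In the paper the metric is generalized fractional anisotropy and the extrapolation model need not be linear, but the add-noise-and-extrapolate structure is the same.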

  6. The mean–variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI

    Science.gov (United States)

    Thompson, William H.; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections that should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity and the problem of deciding which brain connections that are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed. PMID:26236216
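    The object whose mean and variance are contrasted above, the sliding-window correlation time-series, can be sketched with synthetic signals standing in for two regions' fMRI time-series (all parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two signals sharing a common component; rho controls the coupling.
n, rho = 1_200, 0.6
common = rng.normal(0.0, 1.0, n)
a = rho * common + np.sqrt(1 - rho ** 2) * rng.normal(0.0, 1.0, n)
b = rho * common + np.sqrt(1 - rho ** 2) * rng.normal(0.0, 1.0, n)

# Sliding-window correlation time-series: its mean is the "static"
# connectivity estimate, while its variance is what dynamic-connectivity
# analyses threshold on.
win = 60
corrs = np.array([
    np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
    for i in range(0, n - win)
])
conn_mean, conn_var = corrs.mean(), corrs.var()
```

    Thresholding on conn_mean versus conn_var selects different edges, which is the paper's point about magnitude- versus variance-based strategies.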

  7. Event-Based Variance-Constrained $\mathcal{H}_\infty$ Filtering for Stochastic Parameter Systems Over Sensor Networks With Successive Missing Measurements.

    Science.gov (United States)

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2018-03-01

    This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the $\mathcal{H}_\infty$ performance requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By resorting to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters, whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.

  8. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  9. Variance stabilization for computing and comparing grand mean waveforms in MEG and EEG.

    Science.gov (United States)

    Matysiak, Artur; Kordecki, Wojciech; Sielużycki, Cezary; Zacharias, Norman; Heil, Peter; König, Reinhard

    2013-07-01

    Grand means of time-varying signals (waveforms) across subjects in magnetoencephalography (MEG) and electroencephalography (EEG) are commonly computed as arithmetic averages and compared between conditions, for example, by subtraction. However, the prerequisite for these operations, homogeneity of the variance of the waveforms in time, and for most common parametric statistical tests also between conditions, is rarely met. We suggest that the heteroscedasticity observed instead results because waveforms may differ by factors and additive terms and follow a mixed model. We propose to apply the asinh-transformation to stabilize the variance in such cases. We demonstrate the homogeneous variance and the normal distributions of data achieved by this transformation using simulated waveforms, and we apply it to real MEG data and show its benefits. The asinh-transformation is thus an essential and useful processing step prior to computing and comparing grand mean waveforms in MEG and EEG. Copyright © 2013 Society for Psychophysiological Research.
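    The heteroscedasticity the abstract describes, and its removal by the asinh transform, can be sketched with synthetic waveforms that differ across subjects by multiplicative gain factors (a minimal sketch with invented parameters, not MEG data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Per-subject waveforms differing by multiplicative gains: the
# across-subject variance then scales with waveform amplitude over time.
t = np.linspace(0.0, 1.0, 200)
template = 50.0 * np.exp(-((t - 0.5) ** 2) / 0.01)   # a peaked "response"
gains = rng.lognormal(0.0, 0.4, 30)                  # 30 subjects
waves = gains[:, None] * template[None, :] + rng.normal(0.0, 1.0, (30, 200))

# Heteroscedasticity measure: spread of the across-subject SD over time,
# before and after the variance-stabilising asinh transform.
sd_raw = waves.std(axis=0)
sd_asinh = np.arcsinh(waves).std(axis=0)

ratio_raw = sd_raw.max() / sd_raw.min()
ratio_asinh = sd_asinh.max() / sd_asinh.min()
# after the asinh transform the SD profile is far flatter in time
```

    With the variance roughly homogeneous after transformation, arithmetic grand averages and parametric comparisons between conditions become defensible.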

  10. Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints

    Science.gov (United States)

    Yan, Wei

    2012-01-01

    An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes, in keeping with the actual prices of stocks and the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of the value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example of M-V portfolio selection under discontinuous prices is presented.

  11. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems

  12. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real … events and only marginally by the premium associated with normal price fluctuations …

  13. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Although this leads to a proper posterior for the regression coefficients, the resulting posterior variance is affected by an unidentifiable parameter, hence any inferential procedure beside point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We also extend it to the case of discrete responses, where there is no 1-to-1 relationship between quantiles and the distribution's parameters, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.

  14. Sex Estimation From Modern American Humeri and Femora, Accounting for Sample Variance Structure

    DEFF Research Database (Denmark)

    Boldsen, J. L.; Milner, G. R.; Boldsen, S. K.

    2015-01-01

    Objectives: A new procedure for skeletal sex estimation based on humeral and femoral dimensions is presented, based on skeletons from the United States. The approach specifically addresses the problem that arises from a lack of variance homogeneity between the sexes, taking into account prior information about the sample's sex ratio, if known. Material and methods: Three measurements useful for estimating the sex of adult skeletons, the humeral and femoral head diameters and the humeral epicondylar breadth, were collected from 258 Americans born between 1893 and 1980 who died within the past several decades. Results: For measurements individually and collectively, the probabilities of being one sex or the other were generated for samples with an equal distribution of males and females, taking into account the variance structure of the original measurements. The combination providing the best…
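    The probability computation described above can be sketched as a two-group normal classification with sex-specific (unequal) standard deviations and an adjustable prior sex ratio. The means and SDs below are illustrative, not the paper's estimates:

```python
import math

# Hypothetical sex-specific distributions of femoral head diameter (mm).
MU_M, SD_M = 48.0, 2.6     # males: larger mean AND larger spread
MU_F, SD_F = 42.0, 2.0     # females

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def prob_male(x, prior_male=0.5):
    """Posterior probability of 'male' for one measurement, allowing a
    prior sex ratio other than 50:50 when it is known."""
    lm = normal_pdf(x, MU_M, SD_M) * prior_male
    lf = normal_pdf(x, MU_F, SD_F) * (1.0 - prior_male)
    return lm / (lm + lf)

p_mid = prob_male(45.0)                       # ambiguous measurement
p_large = prob_male(52.0)                     # clearly male-sized
p_skewed = prob_male(45.0, prior_male=0.2)    # prior shifts the ambiguous case
```

    Because the two SDs differ, the decision boundary is not the midpoint of the means, which is exactly the variance-homogeneity problem the paper addresses; the prior argument shows how a known sample sex ratio changes borderline calls.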

  15. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey

    2014-01-06

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.
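    The MLMC telescoping idea underlying the talk can be sketched on a toy functional, illustrated here for the mean (the talk's contribution concerns variance estimators, but the telescoping structure is the same); the discretisation and sample counts are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

def level_approx(x, level):
    """Level-l approximation of f(x) = x**2: x rounded to a grid of
    spacing 2**-level before squaring (a toy discretisation)."""
    h = 2.0 ** -level
    return (np.round(x / h) * h) ** 2

# Telescoping MLMC estimator of E[f(X)], X ~ N(0,1) (true value 1.0):
# E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}], with fewer samples per level
# because the level differences have rapidly decaying variance.
levels, estimate = 5, 0.0
for l in range(levels + 1):
    n = 200_000 // 4 ** l + 100      # geometric decay of sample counts
    x = rng.normal(0.0, 1.0, n)
    if l == 0:
        estimate += level_approx(x, 0).mean()
    else:
        estimate += (level_approx(x, l) - level_approx(x, l - 1)).mean()
```

    The decay of the difference variances is what lets most samples be taken on the cheap coarse level, and the talk's theorems show the output variance can be estimated at essentially the same cost by the analogous telescoping of second moments.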

  16. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey; Bierig, Claudio

    2014-01-01

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.

  17. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    Energy Technology Data Exchange (ETDEWEB)

    Song Ningfang; Yuan Rui; Jin Jing, E-mail: rayleing@139.com [School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191 (China)

    2011-09-15

    Satellite motion included in the gyro output disturbs the estimation of the Allan variance coefficients of an onboard fiber optic gyro. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance method requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites increasingly require autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet the demand for satellite autonomy, we present a new autonomous method for estimating the Allan variance coefficients, including the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In this method, we calculate the differences between the angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show that the method correctly estimates the Allan variance coefficients, R = 2.7965×10^-4 °/h^2, K = 1.1714×10^-3 °/h^1.5, B = 1.3185×10^-3 °/h, N = 5.982×10^-4 °/h^0.5 and Q = 5.197×10^-7 °, in real time, and tracks the degradation of gyro performance due to gamma radiation in space from the initial values, R = 0.651 °/h^2, K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10^-5 °, to the final estimates, R = 9.548 °/h^2, K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10^-4 °. The technique proposed here effectively isolates satellite motion and requires neither data storage nor support from the ground.
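The offline Allan variance method that the abstract takes as its baseline can be sketched as follows: average the rate signal over clusters of length tau and take half the mean squared difference of successive cluster means. For pure white rate noise (angle random walk) the Allan variance falls as 1/tau. This is only the reference computation, not the paper's online adaptive filter.

```python
import numpy as np

def allan_variance(rate, fs, cluster_sizes):
    """Non-overlapping Allan variance of a rate signal sampled at fs Hz,
    for averaging times tau = m/fs (the standard offline method; the
    paper replaces it with an online nonlinear adaptive filter)."""
    result = []
    for m in cluster_sizes:
        n = len(rate) // m
        cluster_means = rate[: n * m].reshape(n, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(cluster_means) ** 2)
        result.append((m / fs, avar))
    return result

# synthetic white rate noise: Allan variance should decay as 1/tau
rng = np.random.default_rng(1)
fs = 100.0
white_rate = rng.standard_normal(200_000)
curve = allan_variance(white_rate, fs, [1, 10, 100])
```

Note the growing storage requirement: every cluster size needs the full record, which is exactly why an onboard recursive estimator is attractive.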

  18. Estimation of Genetic Variance Components Including Mutation and Epistasis using Bayesian Approach in a Selection Experiment on Body Weight in Mice

    DEFF Research Database (Denmark)

    Widyas, Nuzul; Jensen, Just; Nielsen, Vivi Hunnicke

    A selection experiment for weight gain was performed over 13 generations of outbred mice. A total of 18 lines were included in the experiment. Nine lines were allotted to each of two treatment diets (19.3 and 5.1% protein). Within each diet, three lines were selected upwards, three lines were...... selected downwards and three lines were kept as controls. Bayesian statistical methods were used to estimate the genetic variance components. The mixed model analysis was modified to include a mutation effect following the methods of Wray (1990). DIC was used to compare the models. Models including a mutation effect...... have a better fit than the model with only an additive effect. Mutation as a direct effect contributes 3.18% of the total phenotypic variance, while in the model with interactions between additive and mutation effects it contributes 1.43% as a direct effect and 1.36% as an interaction effect of the total variance...

  19. A Mathematical Framework for Critical Transitions: Normal Forms, Variance and Applications

    Science.gov (United States)

    Kuehn, Christian

    2013-06-01

    Critical transitions occur in a wide variety of applications including mathematical biology, climate change, human physiology and economics. Therefore it is highly desirable to find early-warning signs. We show that it is possible to classify critical transitions by using bifurcation theory and normal forms in the singular limit. Based on this elementary classification, we analyze stochastic fluctuations and calculate scaling laws of the variance of stochastic sample paths near critical transitions for fast-subsystem bifurcations up to codimension two. The theory is applied to several models: the Stommel-Cessi box model for the thermohaline circulation from geoscience, an epidemic-spreading model on an adaptive network, an activator-inhibitor switch from systems biology, a predator-prey system from ecology and to the Euler buckling problem from classical mechanics. For the Stommel-Cessi model we compare different detrending techniques to calculate early-warning signs. In the epidemics model we show that link densities could be better variables for prediction than population densities. The activator-inhibitor switch demonstrates effects in three time-scale systems and points out that excitable cells and molecular units have information for subthreshold prediction. In the predator-prey model explosive population growth near a codimension-two bifurcation is investigated and we show that early-warnings from normal forms can be misleading in this context. In the biomechanical model we demonstrate that early-warning signs for buckling depend crucially on the control strategy near the instability which illustrates the effect of multiplicative noise.
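The variance-based early-warning sign discussed in the abstract can be illustrated on the simplest case: an Ornstein-Uhlenbeck linearisation dx = -λx dt + σ dW, whose stationary variance σ²/(2λ) blows up as the restoring rate λ shrinks towards the bifurcation. The sketch below is a generic illustration of that scaling, not any of the paper's specific models.

```python
import numpy as np

rng = np.random.default_rng(8)

def ou_variance(lam, sigma, dt=0.01, n=200_000):
    """Empirical stationary variance of dx = -lam*x dt + sigma dW via
    Euler-Maruyama; theory gives sigma**2 / (2*lam), which grows as the
    system approaches the critical transition (lam -> 0)."""
    x = np.zeros(n)
    noise = rng.normal(0.0, sigma * np.sqrt(dt), n)
    for i in range(1, n):
        x[i] = x[i - 1] - lam * x[i - 1] * dt + noise[i]
    burn = n // 10                      # discard transient
    return x[burn:].var()

far_from_transition = ou_variance(lam=2.0, sigma=0.1)    # theory: 0.0025
near_transition = ou_variance(lam=0.2, sigma=0.1)        # theory: 0.025
```

The rising variance of fluctuations, detectable from a time series alone, is what makes it usable as an early-warning sign.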

  20. Heterogeneity of variance components for preweaning growth in Romane sheep due to the number of lambs reared

    Directory of Open Access Journals (Sweden)

    Poivey Jean-Paul

    2011-09-01

    Abstract Background: The pre-weaning growth rate of lambs, an important component of meat production for the market, is affected by maternal and direct genetic effects. The French genetic evaluation model takes the number of lambs suckled into account by applying a multiplicative factor (1 for a lamb reared as a single, 0.7 for twin-reared lambs) to the maternal genetic effect, in addition to including the birth*rearing type combination as a fixed effect, which acts on the mean. However, little evidence has been provided to justify the use of this multiplicative model. The two main objectives of the present study were to determine, by comparing models of analysis, (1) whether pre-weaning growth is the same trait in single- and twin-reared lambs and (2) whether the multiplicative coefficient represents a good approach for taking this possible difference into account. Methods: Data on the pre-weaning growth rate, defined as the average daily gain from birth to 45 days of age, of 29,612 Romane lambs born between 1987 and 2009 at the experimental farm of La Sapinière (INRA, France) were used to compare eight models that account in various ways for the number of lambs reared per dam. Models were compared using the Akaike information criterion. Results: The model that best fitted the data assumed that (1) direct (maternal) effects correspond to the same trait regardless of the number of lambs reared, (2) the permanent environmental effects and variances associated with the dam depend on the number of lambs reared and (3) the residual variance depends on the number of lambs reared. Even though this model fitted the data better than a model that included a multiplicative coefficient, little difference was found between EBVs from the different models (correlations between EBVs varied from 0.979 to 0.999). Conclusions: Based on experimental data, the current genetic evaluation model can be improved to better take into account the number of lambs reared. Thus, it would be of

  1. The variance of dispersion measure of high-redshift transient objects as a probe of ionized bubble size during reionization

    Science.gov (United States)

    Yoshiura, Shintaro; Takahashi, Keitaro

    2018-01-01

    The dispersion measure (DM) of high-redshift (z ≳ 6) transient objects such as fast radio bursts can be a powerful tool to probe the intergalactic medium during the Epoch of Reionization. In this paper, we study the variance of the DMs of objects with the same redshift as a potential probe of the size distribution of ionized bubbles. We calculate the DM variance with a simple model with randomly distributed spherical bubbles. It is found that the DM variance reflects the characteristics of the probability distribution of the bubble size. We find that the variance can be measured precisely enough to obtain the information on the typical size with a few hundred sources at a single redshift.
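A crude Monte Carlo version of the abstract's idea: thread sightlines through a partially ionized medium and compare the sightline-to-sightline DM scatter for small versus large bubbles at the same mean ionized fraction. The cell-based model below is a hypothetical simplification (DM in arbitrary units, fixed-size spherical bubbles collapsed to 1-D segments), not the authors' calculation.

```python
import numpy as np

rng = np.random.default_rng(2)

def dm_sightlines(n_los, path_cells, fill_fraction, bubble_cells):
    """Each sightline crosses `path_cells` cells; ionized bubbles of fixed
    size `bubble_cells` are placed at random so the mean ionized fraction
    is roughly `fill_fraction`. DM counts ionized cells (arbitrary units)."""
    dms = np.empty(n_los)
    n_bubbles = int(fill_fraction * path_cells / bubble_cells)
    for i in range(n_los):
        ionized = np.zeros(path_cells, dtype=bool)
        starts = rng.integers(0, path_cells - bubble_cells, n_bubbles)
        for s in starts:
            ionized[s : s + bubble_cells] = True
        dms[i] = ionized.sum()
    return dms

small_bubbles = dm_sightlines(2000, 10_000, 0.3, 10)
large_bubbles = dm_sightlines(2000, 10_000, 0.3, 500)
# similar mean DM, but larger bubbles give a much larger DM variance
```

Because the mean DM is nearly insensitive to bubble size while the variance is not, the variance carries the size information the abstract exploits.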

  2. Comment on "Relative variance of the mean squared pressure in multimode media: rehabilitating former approaches" [J. Acoust. Soc. Am. 136, 2621-2629 (2014)].

    Science.gov (United States)

    Davy, John L; Weaver, Richard L

    2015-03-01

    Models for the statistics of responses in finite reverberant structures, and in particular, for the variance of the mean square pressure in reverberation rooms, have been studied for decades. It is therefore surprising that a recent communication has claimed that the literature has gotten the simplest of such calculations very wrong. Monsef, Cozza, Rodrigues, Cellard, and Durocher [(2014). J. Acoust. Soc. Am. 136, 2621-2629] have derived a modal-based expression for the relative variance that differs significantly from expressions that have been accepted since 1969. This Comment points out that the Monsef formula is clearly incorrect, and then for the interested reader, points out the subtle place where they made their mistake.

  3. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean-variance portfolio management problem. We examine portfolios that contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it; thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.
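The delta-gamma approximation the abstract relies on replaces a derivative's P&L by its second-order Taylor expansion in the underlying move, which is what makes the resulting optimization quadratic. A minimal sketch with a hypothetical value function (it is exact here only because the toy payoff is itself quadratic):

```python
def delta_gamma_pnl(dS, delta, gamma):
    """Second-order (delta-gamma) approximation of a derivative's P&L
    for an underlying move dS: delta*dS + 0.5*gamma*dS**2."""
    return delta * dS + 0.5 * gamma * dS**2

# hypothetical 'true' value function v(S) = S**2, so delta = 2*S, gamma = 2
S0 = 10.0
v = lambda S: S**2
dS = 0.5
approx_pnl = delta_gamma_pnl(dS, delta=2 * S0, gamma=2.0)
exact_pnl = v(S0 + dS) - v(S0)
```

For genuinely nonlinear payoffs the expansion is only locally accurate, but it turns the portfolio variance into a quadratic form in the positions, hence the well-posed quadratic program.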

  4. Numerical simulation of variance of solar radiation and its influence on wheat growth

    Science.gov (United States)

    Zhang, Xuefen; Wang, Chunyi; Du, Zixuan; Zhai, Wei

    2007-09-01

    The growth of crops is directly related to solar radiation, whose variations influence crop photosynthesis and growth momentum. This study takes Zhengzhou, located in the Huanghuai farmland ecological system of China, as an example to analyze the patterns of variation of total, direct and diffuse solar radiation. Linear trend fitting shows that total radiation (TR) drops overall at a rate of 1.6482 J/m²; the drop has been particularly apparent in recent years, with a period of 7 to 16 years. Diffuse radiation (DF) tends to increase at a rate of 15.149 J/m², with a period of 20 years, while direct radiation (DR) tends to drop at a rate of 15.843 J/m² without an apparent period. Total radiation during the wheat growth period has been decreasing since 1980. After modifying the relevant parameters in the Carbon and Nitrogen Biogeochemistry in Agroecosystems (DNDC) model and simulating the influence of solar radiation variations on the development phase, leaf area index (LAI), grain weight, etc. during the wheat growth period, we find that solar radiation is positively related to LAI and grain weight (GRNWT) but not apparently related to development phase (DP). The change in total radiation delays the maximization of wheat LAI, reduces LAI before winter but has no apparent effect in winter, and decreases LAI from the jointing period to the filling period; it has no apparent influence at the early stage of grain formation, slows the weight increase of grains during the filling period, and accelerates it at the end of the filling period. Variation in radiation has little effect on the DP of wheat.

  5. Task-based dermal exposure models for regulatory risk assessment.

    Science.gov (United States)

    Warren, Nicholas D; Marquart, Hans; Christopher, Yvette; Laitinen, Juha; VAN Hemmen, Joop J

    2006-07-01

    The regulatory risk assessment of chemicals requires the estimation of occupational dermal exposure. Until recently, the models used were either based on limited data or were specific to a particular class of chemical or application. The EU project RISKOFDERM has gathered a considerable number of new measurements of dermal exposure together with detailed contextual information. This article describes the development of a set of generic task-based models capable of predicting potential dermal exposure to both solids and liquids in a wide range of situations. To facilitate modelling of the wide variety of dermal exposure situations, six separate models were made for groupings of exposure scenarios called Dermal Exposure Operation units (DEO units). These task-based groupings cluster exposure scenarios with regard to the expected routes of dermal exposure and the expected influence of exposure determinants. Within these groupings, linear mixed effect models were used to estimate the influence of various exposure determinants and to estimate components of variance. The models predict median potential dermal exposure rates for the hands and the rest of the body from the values of relevant exposure determinants. These rates are expressed as mg or µl of product per minute. Using these median potential dermal exposure rates and an accompanying geometric standard deviation allows a range of exposure percentiles to be calculated.

  6. Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan

    2013-01-01

    Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.

  7. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances

    Science.gov (United States)

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2014-01-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene–gene and gene–environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity. PMID:23921533
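The per-study building block of this meta-analysis is Levene's test itself: compare the mean absolute deviations of a trait across genotype groups, which is robust to group mean differences. A minimal numpy-only sketch of the (mean-centred) W statistic on synthetic genotype groups; the group sizes and effect sizes are invented for illustration, and this is the single-study statistic, not the meta-analytic combination the paper develops.

```python
import numpy as np

def levene_W(groups):
    """Levene's test statistic (mean-centred version) for equality of
    variances across k groups: an ANOVA on the absolute deviations
    z_ij = |x_ij - mean_i|, so it is insensitive to the groups' means."""
    k = len(groups)
    z = [np.abs(g - g.mean()) for g in groups]
    n = np.array([len(g) for g in groups])
    N = n.sum()
    zbar_i = np.array([zi.mean() for zi in z])
    zbar = np.concatenate(z).mean()
    num = (N - k) * np.sum(n * (zbar_i - zbar) ** 2)
    den = (k - 1) * sum(((zi - zb) ** 2).sum() for zi, zb in zip(z, zbar_i))
    return num / den

rng = np.random.default_rng(3)
# hypothetical SNP: genotype groups with equal means, different variances
aa = rng.normal(0.0, 1.0, 2000)
ab = rng.normal(0.0, 1.3, 2000)
bb = rng.normal(0.0, 1.6, 2000)
w_heterogeneous = levene_W([aa, ab, bb])
w_homogeneous = levene_W([rng.normal(0.0, 1.0, 2000) for _ in range(3)])
```

Under the null, W follows approximately an F(k-1, N-k) distribution, so each study can contribute its W (or P-value) to the meta-analysis without sharing individual-level data.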

  8. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

    Directory of Open Access Journals (Sweden)

    Renard Jean-Paul

    2005-06-01

    Abstract Background: Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility of evaluating multiple sources of variation in an experiment. Results: AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model and (3) to propose two models for a gene's variance estimation and to perform a correction for multiple comparisons. Conclusion: AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.

  9. Comparisons and Characterizations of the Mean-Variance, Mean-VaR, Mean-CVaR Models for Portfolio Selection With Background Risk

    OpenAIRE

    Xu, Guo; Wing-Keung, Wong; Lixing, Zhu

    2013-01-01

    This paper investigates the impact of background risk on an investor’s portfolio choice in a mean-VaR, mean-CVaR and mean-variance framework, and analyzes the characterizations of the mean-variance boundary and mean-VaR efficient frontier in the presence of background risk. We also consider the case with a risk-free security.

  10. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  11. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
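The core relationship in the abstract, that the variance of a time-averaged flow quantity shrinks as the exposure time grows relative to the integral time scale, can be demonstrated with a synthetic correlated velocity record. The AR(1) series below is a hypothetical stand-in for the sampled flow field, not the paper's ADCP data.

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1_series(n, rho, sigma):
    """Synthetic 'turbulent' velocity record with unit-variance AR(1)
    correlation; its integral time scale is on the order of dt/(1-rho)."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    innov = rng.normal(0.0, sigma * np.sqrt(1 - rho**2), n)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + innov[i]
    return x

def var_of_time_average(series, window):
    """Variance across non-overlapping averages of `window` samples,
    i.e. the random error of an estimate using that exposure time."""
    n = len(series) // window
    means = series[: n * window].reshape(n, window).mean(axis=1)
    return means.var()

u = ar1_series(400_000, rho=0.9, sigma=1.0)
var_short_exposure = var_of_time_average(u, 50)
var_long_exposure = var_of_time_average(u, 500)
```

Once the exposure time exceeds the integral time scale, the variance of the average falls roughly in proportion to 1/T, which is what lets one choose an exposure time for a target discharge uncertainty.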

  12. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  13. Novel images extraction model using improved delay vector variance feature extraction and multi-kernel neural network for EEG detection and prediction.

    Science.gov (United States)

    Ge, Jing; Zhang, Guoping

    2015-01-01

    Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance in the detection and prediction of epileptic seizures. The diversity and evolution of epileptic seizures make the underlying disease very difficult to detect and identify. Fortunately, determinism and nonlinearity in a time series can characterize state changes. The literature indicates that Delay Vector Variance (DVV) can examine nonlinearity to gain insight into EEG signals, but very limited work has addressed a quantitative DVV approach. Hence, the outcomes of quantitative DVV should be evaluated for detecting epileptic seizures. The objective of this work is to develop a new epileptic seizure detection method based on quantitative DVV. The method employs an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature, and proposes a multi-kernel strategy in an extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity feature proved more sensitive than energy and entropy: 87.5% overall recognition accuracy and 75.0% overall forecasting accuracy were achieved. The proposed IDVV and multi-kernel ELM based method is feasible and effective for epileptic EEG detection, and hence has importance for practical applications.

  14. Heterogeneous network epidemics: real-time growth, variance and extinction of infection.

    Science.gov (United States)

    Ball, Frank; House, Thomas

    2017-09-01

    Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multitype branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction by time t that are numerically fast compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution; in particular, we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments of the degree distribution. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.
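The dependence on degree-distribution moments can be made concrete through the mean excess degree of a configuration model network, the quantity that governs how many new contacts each early case reaches. A small sketch with two hypothetical degree distributions of equal mean (this illustrates the moment dependence only, not the paper's branching-process calculation):

```python
import numpy as np

def excess_degree_mean(degree_dist):
    """Mean excess degree of a configuration model network: follow a
    random edge to a node (size-biased by degree) and count its other
    edges. Equals m2/m1 - 1, so it needs only the first two moments."""
    k = np.arange(len(degree_dist), dtype=float)
    p = np.asarray(degree_dist, dtype=float)
    m1 = (k * p).sum()
    m2 = (k**2 * p).sum()
    return m2 / m1 - 1.0

regular = np.array([0.0, 0.0, 0.0, 1.0])              # everyone has degree 3
bimodal = np.array([0.0, 0.5, 0.0, 0.0, 0.0, 0.5])    # half degree 1, half 5
# same mean degree (3), but the heterogeneous network spreads faster early on
```

Both networks have mean degree 3, yet the bimodal one has a larger second moment and hence a larger mean excess degree, illustrating why early growth (and, with the third moment, its variance) is sensitive to degree heterogeneity.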

  15. The problem of low variance voxels in statistical parametric mapping; a new hat avoids a 'haircut'.

    Science.gov (United States)

    Ridgway, Gerard R; Litvak, Vladimir; Flandin, Guillaume; Friston, Karl J; Penny, Will D

    2012-02-01

    Statistical parametric mapping (SPM) locates significant clusters based on a ratio of signal to noise (a 'contrast' of the parameters divided by its standard error), meaning that very low noise regions, for example outside the brain, can attain artefactually high statistical values. Similarly, the commonly applied preprocessing step of Gaussian spatial smoothing can shift the peak statistical significance away from the peak of the contrast and towards regions of lower variance. These problems have previously been identified in positron emission tomography (PET) (Reimold et al., 2006) and voxel-based morphometry (VBM) (Acosta-Cabronero et al., 2008), but can also appear in functional magnetic resonance imaging (fMRI) studies. Additionally, for source-reconstructed magneto- and electro-encephalography (M/EEG), the problems are particularly severe because sparsity-favouring priors constrain meaningfully large signal and variance to a small set of compactly supported regions within the brain. Acosta-Cabronero et al. (2008) suggested adding noise to background voxels (the 'haircut'), effectively increasing their noise variance, but at the cost of contaminating neighbouring regions with the added noise once smoothed. Following theory and simulations, we propose to modify, directly and solely, the noise variance estimate, and investigate this solution on real imaging data from a range of modalities. Copyright © 2011 Elsevier Inc. All rights reserved.
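The low-variance-voxel artefact, and the effect of regularising the variance estimate rather than adding noise, can be shown with two voxels. The variance floor below is a deliberately crude stand-in for the paper's modified variance estimate (the numbers and the `var_floor` parameter are invented for illustration):

```python
import numpy as np

def t_map(effect, resid_var, dof=20, var_floor=None):
    """Voxelwise t-like statistic effect / sqrt(var/dof). With
    var_floor set, the variance estimate is regularised from below so
    near-zero-variance voxels cannot produce huge statistics."""
    v = np.asarray(resid_var, dtype=float).copy()
    if var_floor is not None:
        v = np.maximum(v, var_floor)
    return np.asarray(effect, dtype=float) / np.sqrt(v / dof)

effect = np.array([0.01, 1.0])      # tiny background effect vs real effect
resid_var = np.array([1e-8, 1.0])   # background voxel has ~zero variance
raw_t = t_map(effect, resid_var)                    # background voxel wins
regularised_t = t_map(effect, resid_var, var_floor=0.1)  # real effect wins
```

Without regularisation the near-zero-variance background voxel attains the larger statistic despite its negligible effect; with the floor, the ranking is restored, and unlike the 'haircut' no noise is injected that smoothing could spread into neighbouring voxels.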

  16. Robust Markowitz mean-variance portfolio selection under ambiguous covariance matrix *

    OpenAIRE

    Ismail, Amine; Pham, Huyên

    2016-01-01

    This paper studies a robust continuous-time Markowitz portfolio selection problem where the model uncertainty carries on the covariance matrix of multiple risky assets. This problem is formulated as a min-max mean-variance problem over a set of non-dominated probability measures that is solved by a McKean-Vlasov dynamic programming approach, which allows us to characterize the solution in terms of a Bellman-Isaacs equation in the Wasserstein space of probability measures. We provide expli...

  17. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo to the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling, calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)
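The idea of scanning the variance as a function of the importance-function parameter, and the infinite-variance pathology the abstract mentions, can be illustrated on a toy integral. The parametric family below (densities q_θ(x) = θ x^(θ-1) on (0,1), target ∫₀¹ eˣ dx) is a hypothetical example, not the paper's method: note that for θ ≥ 2 the weight's second moment diverges, so the empirical variance behaves erratically under severe biasing.

```python
import numpy as np

rng = np.random.default_rng(6)

def is_estimate(theta, n=100_000):
    """Importance-sampling estimate of I = ∫_0^1 e^x dx using the density
    q_theta(x) = theta * x**(theta-1) on (0, 1); returns (mean, weight
    variance). For theta >= 2 the theoretical variance is infinite."""
    u = rng.random(n)
    x = u ** (1.0 / theta)                 # inverse-CDF sample from q_theta
    w = np.exp(x) / (theta * x ** (theta - 1.0))
    return w.mean(), w.var()

thetas = [0.5, 1.0, 1.5, 2.0, 3.0]
results = {th: is_estimate(th) for th in thetas}
best_theta = min(results, key=lambda th: results[th][1])
```

Sweeping θ with a single batch of weighted samples and adopting the minimum-variance parameter is the spirit of the single-stage approach; the θ ≥ 2 members show how a severely biased importance function yields unreliable, effectively biased estimates even though the estimator is unbiased in theory.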

  18. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
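The Poisson-versus-negative-binomial contrast in the abstract amounts to comparing the variance-to-mean ratio of parasite loads. A sketch with synthetic data (the distributions and parameters are invented stand-ins for the two feeding treatments, not the study's measurements); a gamma-mixed Poisson is exactly a negative binomial, which is how the overdispersed case is generated:

```python
import numpy as np

rng = np.random.default_rng(7)

# 'low food': loads ~ Poisson, so variance ≈ mean (VMR ≈ 1)
low_food = rng.poisson(5.0, 5000)

# 'high food': gamma-mixed Poisson = negative binomial, variance > mean
rates = rng.gamma(shape=2.0, scale=5.0, size=5000)   # heterogeneous hosts
high_food = rng.poisson(rates)

vmr_low = low_food.var() / low_food.mean()
vmr_high = high_food.var() / high_food.mean()
```

A right-skewed, overdispersed load distribution means a few hosts carry far more parasites than the average, which is the disease-management concern the abstract raises.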

  19. The impact of covariance misspecification in group-based trajectory models for longitudinal data with non-stationary covariance structure.

    Science.gov (United States)

    Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C

    2017-08-01

    One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.

  20. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
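The unscented transform at the heart of the paper's efficiency gain can be sketched in one dimension: push a small, deterministically chosen set of sigma points through the nonlinear map (here a hypothetical quadratic stand-in for the EOL simulation, not the solenoid-valve model) and recombine them with fixed weights to approximate the output mean and variance.

```python
import numpy as np

def unscented_moments(mean, var, f, kappa=2.0):
    """Mean and variance of f(X), X ~ N(mean, var), via the 1-D unscented
    transform: 3 sigma points instead of many Monte Carlo simulations.
    kappa=2 makes n + kappa = 3, matching the Gaussian fourth moment."""
    n = 1
    spread = np.sqrt((n + kappa) * var)
    pts = np.array([mean, mean + spread, mean - spread])
    w = np.array([kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)])
    y = f(pts)                        # the 'EOL simulation' runs only here
    m = np.dot(w, y)
    v = np.dot(w, (y - m) ** 2)
    return m, v

f = lambda x: x**2 + x                # hypothetical nonlinear transformation
ut_mean, ut_var = unscented_moments(0.0, 1.0, f)
# exact moments for X ~ N(0,1): E[f(X)] = 1, Var[f(X)] = 2 + 1 = 3
```

Three simulations recover the exact mean and variance for this quadratic map, versus the thousands of runs a Monte Carlo estimate would need, which is the computational saving the abstract describes.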