WorldWideScience

Sample records for mean-variance smoothing method

  1. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    Science.gov (United States)

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, the mean and variance often have a systematic relationship. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made from shrinkage estimates of the posterior means, treating the variances as known. Different methods were applied to simulated datasets in which a variety of mean-variance relationships were imposed. The simulation study showed that NPMVS outperformed two other popular shrinkage estimation methods for some mean-variance relationships and was competitive with them for the others. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, was also analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation of both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance instead of assuming a specific parametric relationship between them. The source code, written in R, is available from the authors on request.
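
    A minimal numerical sketch of the general idea, not the authors' NPMVS implementation (which is available in R from them on request): estimate a smooth variance-versus-mean trend across genes with a simple moving-average smoother and shrink each gene-wise variance toward that trend. The window size, shrinkage weight and simulated data below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_genes, n_samples = 2000, 6
        # toy expression matrix in which the variance grows with the mean
        means_true = rng.gamma(2.0, 2.0, n_genes)
        data = rng.normal(means_true[:, None], 0.2 + 0.3 * means_true[:, None],
                          (n_genes, n_samples))

        gene_mean = data.mean(axis=1)
        gene_var = data.var(axis=1, ddof=1)

        # smooth variance as a function of mean: moving average over mean-ranked genes
        order = np.argsort(gene_mean)
        window = 101                       # illustrative smoothing window (odd)
        kernel = np.ones(window) / window
        padded = np.pad(gene_var[order], window // 2, mode="edge")
        smooth_sorted = np.convolve(padded, kernel, mode="valid")
        smooth_var = np.empty_like(smooth_sorted)
        smooth_var[order] = smooth_sorted

        # shrink each gene's variance toward the smoothed trend (weight is illustrative)
        w = 0.5
        var_shrunk = w * smooth_var + (1.0 - w) * gene_var

        # moderated t-like statistic (illustrative; compares each gene mean to zero)
        t_mod = gene_mean / np.sqrt(var_shrunk / n_samples)
        print(t_mod[:5])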

  2. Some asymptotic theory for variance function smoothing | Kibua ...

    African Journals Online (AJOL)

    Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22 ...

  3. Variance-to-mean method generalized by linear difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji

    1998-01-01

    The conventional variance-to-mean method (Feynman-α method) suffers seriously from divergence of the variance under transient conditions such as a reactor power drift. Strictly speaking, the use of the Feynman-α method is therefore restricted to a steady state. To make the method more practical, it is desirable to overcome this difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and we derive several new formulae that take the filtering into account. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that a higher-order filter becomes necessary as the power variation rate increases.
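
    A hedged numerical illustration of why difference filtering helps; this is not the paper's derivation, and the exact higher-order formulae and reactor-physics corrections are in the article. With a linear drift in the count rate, the conventional Feynman-Y picks up a spurious excess variance, while a first-order difference filter applied to the gate counts largely cancels the drift. Inter-gate correlations, which carry the actual Feynman-α signal, are neglected in this toy example.

        import numpy as np

        rng = np.random.default_rng(1)

        # gate counts from a drifting Poisson source (stands in for a power drift;
        # real Feynman-alpha data also carry fission-chain correlations)
        n_gates = 20000
        base_rate, drift = 100.0, 0.002           # illustrative numbers
        rates = base_rate + drift * np.arange(n_gates)
        counts = rng.poisson(rates)

        # conventional variance-to-mean (Feynman-Y); the drift inflates it
        y_conventional = counts.var(ddof=1) / counts.mean() - 1.0

        # first-order difference filter: a linear drift becomes a constant offset
        d = np.diff(counts)
        # for independent gates Var(d) ~= 2 Var(c), hence the factor 2 below;
        # correlations between adjacent gates are neglected in this toy example
        y_filtered = d.var(ddof=1) / (2.0 * counts.mean()) - 1.0

        print(f"conventional Y = {y_conventional:+.3f}   filtered Y = {y_filtered:+.3f}")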

  4. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Pindoriya, N.M.; Singh, S.N. [Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Singh, S.K. [Indian Institute of Management Lucknow, Lucknow 226013 (India)

    2010-10-15

    This paper proposes an approach to generation portfolio allocation based on the mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory that can deal with assets whose return distributions are non-normal. The MVS model allocates portfolios optimally by maximizing both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a competing and conflicting non-smooth multi-objective optimization problem, the paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic to provide Pareto-optimal solutions in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method is compared with that of the classical MV method. It is found that the MVS portfolio theory based method can provide significantly better portfolios when non-normally distributed assets are available for trading. (author)
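
    To make the three competing objectives concrete, the sketch below computes the mean, variance and skewness of a portfolio's return and keeps the non-dominated portfolios from a naive random search. This is only an illustration of the trade-off; it is not the MOPSO algorithm or the PJM case study of the paper, and the simulated returns are an assumption.

        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(2)
        T, n_assets = 500, 4
        returns = rng.standard_t(df=5, size=(T, n_assets)) * 0.01   # fat-tailed toy returns

        def mvs_objectives(w, returns):
            """Return (expected return, variance, skewness) of the portfolio w."""
            port = returns @ w
            return port.mean(), port.var(ddof=1), skew(port)

        # random long-only weights on the simplex
        candidates = rng.dirichlet(np.ones(n_assets), size=2000)
        objs = np.array([mvs_objectives(w, returns) for w in candidates])

        # keep non-dominated portfolios: maximize mean and skewness, minimize variance
        signed = np.column_stack([-objs[:, 0], objs[:, 1], -objs[:, 2]])  # all "minimize"
        pareto = []
        for i, s in enumerate(signed):
            dominated = np.any(np.all(signed <= s, axis=1) & np.any(signed < s, axis=1))
            if not dominated:
                pareto.append(i)
        print(f"{len(pareto)} non-dominated portfolios out of {len(candidates)}")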

  5. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    International Nuclear Information System (INIS)

    Pindoriya, N.M.; Singh, S.N.; Singh, S.K.

    2010-01-01

    This paper proposes an approach to generation portfolio allocation based on the mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory that can deal with assets whose return distributions are non-normal. The MVS model allocates portfolios optimally by maximizing both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a competing and conflicting non-smooth multi-objective optimization problem, the paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic to provide Pareto-optimal solutions in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method is compared with that of the classical MV method. It is found that the MVS portfolio theory based method can provide significantly better portfolios when non-normally distributed assets are available for trading. (author)

  6. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation, in the R language for statistical computing, of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited to the difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as 'omics'-type data, in which the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features, including: (i) normalization and/or variance stabilization functions, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation using C interfacing and an option for parallel computing, and (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality needs to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  7. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
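
    The idea reduces to a few lines of code: place a fine grid over the support, weight each grid point by the density, normalize, and use weighted sums in place of integrals. The grids and densities below are illustrative.

        import numpy as np

        def discrete_mean_var(pdf, lo, hi, n=10_000):
            """Approximate mean and variance of a continuous pdf by a discrete grid."""
            x = np.linspace(lo, hi, n)
            p = pdf(x)
            p = p / p.sum()                      # normalize the discrete weights
            mean = np.sum(p * x)
            var = np.sum(p * (x - mean) ** 2)
            return mean, var

        # exponential with rate 1: true mean 1, true variance 1
        m, v = discrete_mean_var(lambda x: np.exp(-x), 0.0, 30.0)
        print(f"exponential: mean ~ {m:.4f}, variance ~ {v:.4f}")

        # standard normal: true mean 0, true variance 1
        m, v = discrete_mean_var(lambda x: np.exp(-x**2 / 2), -8.0, 8.0)
        print(f"normal:      mean ~ {m:.4f}, variance ~ {v:.4f}")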

  8. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    We propose a novel approach to handling cardinality in portfolio selection by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and the number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  9. Estimating Mean and Variance Through Quantiles : An Experimental Comparison of Different Methods

    NARCIS (Netherlands)

    Moors, J.J.A.; Strijbosch, L.W.G.; van Groenendaal, W.J.H.

    2002-01-01

    If estimates of mean and variance are needed and only experts' opinions are available, the literature agrees that it is wise behaviour to ask only for their (subjective) estimates of quantiles: from these, estimates of the desired parameters are calculated. Quite a number of methods have been

  10. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  11. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    Yu, Zuwei

    2003-01-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets

  12. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    Zuwei Yu

    2003-01-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets. (author)

  13. A spatial mean-variance MIP model for energy market risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zuwei Yu [Purdue University, West Lafayette, IN (United States). Indiana State Utility Forecasting Group and School of Industrial Engineering

    2003-05-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets. (author)

  14. A spatial mean-variance MIP model for energy market risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zuwei [Indiana State Utility Forecasting Group and School of Industrial Engineering, Purdue University, Room 334, 1293 A.A. Potter, West Lafayette, IN 47907 (United States)

    2003-05-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets.

  15. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solutions to the mean and variance calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of the mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
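
    For readers who want to reproduce the Monte Carlo side of the comparison, here is a minimal sketch with a made-up four-tip tree and uniform subsampling of tips; the paper's exact analytical formulae (and its abundance-based rarefaction of stem counts) are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(3)

        # toy rooted tree: each branch is (length, set of tips it leads to)
        branches = [
            (1.0, {"A"}), (1.0, {"B"}), (1.5, {"C"}), (1.5, {"D"}),
            (0.5, {"A", "B"}), (0.5, {"C", "D"}),
        ]
        tips = ["A", "B", "C", "D"]

        def pd(subset):
            """Faith's phylogenetic diversity: total length of branches spanning subset."""
            return sum(length for length, below in branches if below & subset)

        # rarefy to m tips by repeated random subsampling (Monte Carlo)
        m, n_draws = 2, 20000
        draws = np.array([pd(set(rng.choice(tips, size=m, replace=False)))
                          for _ in range(n_draws)])
        print(f"rarefied PD at m={m}: mean {draws.mean():.3f}, variance {draws.var(ddof=1):.3f}")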

  16. Study on variance-to-mean method as subcriticality monitor for accelerator driven system operated with pulse-mode

    International Nuclear Information System (INIS)

    Yamauchi, Hideto; Kitamura, Yasunori; Yamane, Yoshihiro; Misawa, Tsuyoshi; Unesaki, Hironobu

    2003-01-01

    Two types of the variance-to-mean methods for the subcritical system that was driven by the periodic and pulsed neutron source were developed and their experimental examination was performed with the Kyoto University Critical Assembly and a pulsed neutron generator. As a result, it was demonstrated that the prompt neutron decay constant could be measured by these methods. From this fact, it was concluded that the present variance-to-mean methods had potential for being used in the subcriticality monitor for the future accelerator driven system operated with the pulse-mode. (author)

  17. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of the weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition across the stocks differs. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
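
    A compact sketch of the underlying optimization: the minimum-variance, fully invested portfolio for a target return, obtained by solving the KKT system directly (short sales allowed). The weekly FBMKLCI returns used in the study are replaced by simulated returns, and the target return is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(4)
        T, n = 260, 5                                   # ~5 years of weekly toy returns
        R = rng.normal(0.002, 0.02, (T, n))

        mu = R.mean(axis=0)
        Sigma = np.cov(R, rowvar=False)
        target = 0.0025                                 # illustrative weekly target return

        # KKT system for: minimize w' Sigma w  s.t.  1'w = 1,  mu'w = target
        ones = np.ones(n)
        A = np.block([
            [2.0 * Sigma, ones[:, None], mu[:, None]],
            [ones[None, :], np.zeros((1, 2))],
            [mu[None, :], np.zeros((1, 2))],
        ])
        b = np.concatenate([np.zeros(n), [1.0, target]])
        w = np.linalg.solve(A, b)[:n]

        print("weights:", np.round(w, 3))
        print("portfolio return:", w @ mu, " variance:", w @ Sigma @ w)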

  18. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a dataset where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, and (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common-value shrinkage estimators, or than when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  19. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model in which the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive the optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and the mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and that the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.

  20. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    OpenAIRE

    Daheng Peng; Fang Zhang

    2017-01-01

    In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit form.

  1. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    Directory of Open Access Journals (Sweden)

    Daheng Peng

    2017-10-01

    In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit form.

  2. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  3. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  4. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    ... the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean-variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative ... be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computationally demanding methods such as the certainty equivalence method, and as an individual control strategy when...

  5. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  6. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  7. Variance of a potential of mean force obtained using the weighted histogram analysis method.

    Science.gov (United States)

    Cukier, Robert I

    2013-11-27

    A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.

  8. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  9. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interactions. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait.
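
    As a hedged illustration of the joint idea, the sketch below runs a two-group likelihood ratio test of a common mean and variance against group-specific means and variances under normality (2 degrees of freedom). It is not the authors' LRT(MV), which accommodates covariates and a parametric bootstrap; the simulated groups are assumptions.

        import numpy as np
        from scipy.stats import chi2

        def normal_loglik(x):
            """Maximized normal log-likelihood of a sample (MLE variance, ddof=0)."""
            n = len(x)
            s2 = x.var()                      # MLE of the variance
            return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

        def lrt_mean_variance(g0, g1):
            """LRT of common mean/variance vs group-specific mean and variance (df = 2)."""
            ll_null = normal_loglik(np.concatenate([g0, g1]))
            ll_alt = normal_loglik(g0) + normal_loglik(g1)
            stat = 2.0 * (ll_alt - ll_null)
            return stat, chi2.sf(stat, df=2)

        rng = np.random.default_rng(5)
        g0 = rng.normal(0.0, 1.0, 200)        # reference group
        g1 = rng.normal(0.3, 1.5, 200)        # shifted mean and inflated variance
        stat, p = lrt_mean_variance(g0, g1)
        print(f"LRT statistic {stat:.2f}, p-value {p:.2e}")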

  10. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T below a critical value; the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.

  11. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1), 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues while retaining the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using a decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)

  12. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  13. Impact of spectral smoothing on gamma radiation portal alarm probabilities

    International Nuclear Information System (INIS)

    Burr, T.; Hamada, M.; Hengartner, N.

    2011-01-01

    Gamma detector counts are included in radiation portal monitors (RPM) to screen for illicit nuclear material. Gamma counts are sometimes smoothed to reduce variance in the estimated underlying true mean count rate, which is the 'signal' in our context. Smoothing reduces total error variance in the estimated signal if the bias that smoothing introduces is more than offset by the variance reduction. An empirical RPM study for vehicle screening applications is presented for unsmoothed and smoothed gamma counts in low-resolution plastic scintillator detectors and in medium-resolution NaI detectors. Highlights: We evaluate options for smoothing counts from gamma detectors deployed for portal monitoring. A new multiplicative bias correction (MBC) is shown to reduce bias in peak and valley regions. Performance is measured using mean squared error and detection probabilities for sources. Smoothing with the MBC improves detection probabilities and the mean squared error.

  14. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean 0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, so it can be used as a stopping criterion for the sequential sampling of metamodels.

  15. Mean-Variance Portfolio Selection with Margin Requirements

    Directory of Open Access Journals (Sweden)

    Yuan Zhou

    2013-01-01

    We study the continuous-time mean-variance portfolio selection problem in the situation where investors must pay a margin for short selling. The problem is essentially a nonlinear stochastic optimal control problem, because the coefficients of the positive and negative parts of the control variables are different. We cannot apply the results for the stochastic linear-quadratic (LQ) problem, and the solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation is not smooth. Li et al. (2002) studied the case when short selling is prohibited; therefore they only needed to consider the positive part of the control variables, whereas we need to handle both the positive and the negative parts. The main difficulty is that the positive part and the negative part are not independent, so the previous results are not directly applicable. By decomposing the problem into several subproblems, we work out the solutions of the HJB equation in two disjoint regions and then prove that it is the viscosity solution of the HJB equation. Finally, we formulate the solution of the optimal portfolio and the efficient frontier. We also present two examples showing how different margin rates affect the optimal solutions and the efficient frontier.

  16. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests of mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. A change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to a change in experimental condition. We mathematically proved the null independence of existing mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existing mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment

  17. Geometric representation of the mean-variance-skewness portfolio frontier based upon the shortage function

    OpenAIRE

    Kerstens, Kristiaan; Mounier, Amine; Van de Woestyne, Ignace

    2008-01-01

    The literature suggests that investors prefer portfolios based on mean, variance and skewness rather than portfolios based on mean-variance (MV) criteria alone. Furthermore, a small variety of methods have been proposed to determine mean-variance-skewness (MVS) optimal portfolios. Recently, the shortage function has been introduced as a measure of efficiency, allowing one to characterize MVS optimal portfolios using non-parametric mathematical programming tools. While tracing the MV portfolio fro...

  18. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating the model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
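
    A Monte Carlo sketch in the spirit of the quantities involved, using the Ishigami function mentioned in the abstract: it compares the conditional mean and variance of the output, given a reduced range of one input, with their unconditional counterparts. The paper's exact revised definitions and single-sample estimation scheme are not reproduced here, and the reduction fraction is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(6)

        def ishigami(x, a=7.0, b=0.1):
            return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

        N = 200_000
        X = rng.uniform(-np.pi, np.pi, (N, 3))          # one sample set reused throughout
        Y = ishigami(X)

        def moment_ratios(i, fraction):
            """Conditional mean/variance of Y when input i is restricted to a central
            sub-interval covering `fraction` of its range, relative to the full-sample values."""
            half = fraction * np.pi
            mask = np.abs(X[:, i]) <= half
            return Y[mask].mean() / Y.mean(), Y[mask].var() / Y.var()

        for i in range(3):
            mr, vr = moment_ratios(i, fraction=0.5)
            print(f"x{i+1}: mean ratio {mr:.3f}, variance ratio {vr:.3f}")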

  19. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…

  20. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on a careful analysis of the definition of an arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  1. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  2. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory(CPT) based on two different methods: Maximizing CPT along the mean–variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  3. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it; thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.

  4. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    Science.gov (United States)

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance
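
    A minimal sketch of the histogram construction described above, using a simulated two-level current trace; the window width N, noise level and gating rate are illustrative assumptions, and no curve fitting of the low-variance regions is attempted.

        import numpy as np

        rng = np.random.default_rng(7)

        # simulated single-channel trace: closed (0 pA) and open (-2 pA) levels plus noise
        n_pts = 50_000
        state = np.cumsum(rng.random(n_pts) < 0.002) % 2          # toggles open/closed
        current = np.where(state == 1, -2.0, 0.0) + rng.normal(0.0, 0.25, n_pts)

        # sliding-window mean and variance over windows of N consecutive samples
        N = 10
        kernel = np.ones(N) / N
        win_mean = np.convolve(current, kernel, mode="valid")
        win_sqmean = np.convolve(current**2, kernel, mode="valid")
        win_var = win_sqmean - win_mean**2

        # two-dimensional mean-variance histogram; low-variance regions mark defined levels
        H, mean_edges, var_edges = np.histogram2d(
            win_mean, win_var, bins=(80, 80), range=[[-3.0, 1.0], [0.0, 1.5]]
        )
        print("histogram shape:", H.shape, " counts in low-variance bins:",
              int(H[:, var_edges[:-1] < 0.1].sum()))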

  5. Determining the Optimal Portfolio Using the Conditional Mean Variance Model

    Directory of Open Access Journals (Sweden)

    I GEDE ERY NISCAHYANA

    2016-08-01

    When stock price returns exhibit autocorrelation and heteroscedasticity, conditional mean-variance models are a suitable way to model the behavior of the stocks. In this thesis, the implementation of the conditional mean-variance model for autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The margins of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated with a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) in FMII stock, 0.0473 (5%) in BNLI stock, 0% in SMDM stock, and 1% in SMGR stock.
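
    For readers unfamiliar with the conditional-variance ingredient, the GARCH(1,1) recursion takes only a few lines. The parameters below are illustrative, not the estimates for FMII/BNLI/SMDM/SMGR; in practice they would be obtained by maximum likelihood, and the final inverse-variance split between two assets is only a stand-in for the thesis's conditional mean-variance optimization.

        import numpy as np

        def garch11_variance(returns, omega, alpha, beta):
            """Conditional variance path: sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
            sigma2 = np.empty_like(returns)
            sigma2[0] = returns.var()                 # common initialization choice
            for t in range(1, len(returns)):
                sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
            return sigma2

        rng = np.random.default_rng(8)
        r = rng.standard_t(df=6, size=1000) * 0.01    # toy fat-tailed returns
        sigma2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)

        # the latest conditional variances can feed a conditional mean-variance step,
        # e.g. an (illustrative) inverse-variance split between two assets
        s2_a, s2_b = sigma2[-1], 2.5 * sigma2[-1]
        w_a = (1 / s2_a) / (1 / s2_a + 1 / s2_b)
        print(f"latest sigma^2 = {sigma2[-1]:.2e}, inverse-variance weight on asset A = {w_a:.2f}")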

  6. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the more steeply the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  7. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    Science.gov (United States)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) problem is an essential optimization task in power generation systems. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while all physical and operational constraints are satisfied. This paper introduces a novel optimization method named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. The test results indicate that the proposed method can be efficiently applied to solving economic dispatch.
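
    MVMOS itself is a meta-heuristic, but the ED problem it is applied to is easy to state in code. The sketch below solves a small quadratic-cost ED with the classical equal-incremental-cost (lambda-iteration) rule, just to make the objective and constraints concrete; the three-unit data are made up, and this is not the paper's algorithm or its test systems.

        import numpy as np

        # illustrative 3-unit system: cost C_i(P) = a_i + b_i*P + c_i*P^2, P in [Pmin, Pmax]
        a = np.array([500.0, 400.0, 200.0])
        b = np.array([5.3, 5.5, 5.8])        # $/MWh
        c = np.array([0.004, 0.006, 0.009])  # $/MW^2h
        pmin = np.array([200.0, 150.0, 100.0])
        pmax = np.array([450.0, 350.0, 225.0])
        demand = 800.0                        # MW

        def dispatch(lmbda):
            """Output of each unit at incremental cost lmbda, clipped to its limits."""
            return np.clip((lmbda - b) / (2.0 * c), pmin, pmax)

        # bisection on lambda until total output meets the demand
        lo, hi = 0.0, 100.0
        for _ in range(100):
            lmbda = 0.5 * (lo + hi)
            if dispatch(lmbda).sum() < demand:
                lo = lmbda
            else:
                hi = lmbda

        P = dispatch(lmbda)
        cost = np.sum(a + b * P + c * P**2)
        print("dispatch (MW):", np.round(P, 1), " total cost ($/h):", round(cost, 1))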

  8. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques for studying the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering that is deemed natural. The first set of data concerns the scores achieved by a population of students on an entrance examination based on a multiple-choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to a correct or incorrect answer to each question. The second set of data concerns the delay in obtaining the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
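
    The building block of such a decomposition is the identity "total variance = variance of conditional means + mean of conditional variances". The sketch below checks it for a single dyadic character on simulated scores; the paper's contribution, iterating orthogonal components over an ordering of several characters, is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(9)

        # toy data: a numerical score and one dyadic character (e.g. correct/incorrect)
        n = 1000
        group = rng.integers(0, 2, n)
        score = 20 + 3 * group + rng.normal(0, 2, n)

        total_var = score.var()                                  # ddof=0 throughout

        sizes = np.array([np.sum(group == g) for g in (0, 1)])
        cond_means = np.array([score[group == g].mean() for g in (0, 1)])
        cond_vars = np.array([score[group == g].var() for g in (0, 1)])

        between = np.average((cond_means - score.mean()) ** 2, weights=sizes)
        within = np.average(cond_vars, weights=sizes)

        print(f"total {total_var:.4f} = between {between:.4f} + within {within:.4f}"
              f" (sum {between + within:.4f})")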

  9. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  10. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    Science.gov (United States)

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image. PMID:29403529

  11. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    Directory of Open Access Journals (Sweden)

    Liyun Zhuang

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image.

  12. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance.

    Science.gov (United States)

    Zhuang, Liyun; Guan, Yepeng

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image.

  13. Worst-case and smoothed analysis of k-means clustering with Bregman divergences

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, H.

    2013-01-01

    The $k$-means method is the method of choice for clustering large-scale data sets and it performs exceedingly well in practice despite its exponential worst-case running-time. To narrow the gap between theory and practice, $k$-means has been studied in the semi-random input model of smoothed analysis.

  14. Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method

    Directory of Open Access Journals (Sweden)

    Younes Elahi

    2014-01-01

    Full Text Available We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have examined the optimal MVC portfolio model, the linear weighted sum method (LWSM) had not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via the LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, this approach is investigated for an investment in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using the three objective functions helps investors to manage their portfolio better, thereby minimizing the risk and maximizing the return of the portfolio. The main goal of this study is to modify the current models and simplify them by using the LWSM to obtain better results.
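
    As a rough sketch of the linear weighted sum idea referred to above, the code below collapses the three objectives (expected return, variance and CVaR of portfolio losses) into one weighted objective and minimizes it over long-only weights that sum to one. The simulated two-asset returns, the weight vector lam and the 95% CVaR level are assumptions, not values taken from the paper.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      R = rng.normal([0.0004, 0.0007], [0.010, 0.020], size=(1000, 2))  # hypothetical daily returns, 2 assets

      def cvar(losses, alpha=0.95):
          var = np.quantile(losses, alpha)
          return losses[losses >= var].mean()

      def lwsm_objective(w, lam=(1.0, 2.0, 1.0)):
          port = R @ w
          # Linear weighted sum: one scalar objective built from the three MVC criteria.
          return -lam[0] * port.mean() + lam[1] * port.var() + lam[2] * cvar(-port)

      cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
      res = minimize(lwsm_objective, x0=np.array([0.5, 0.5]),
                     bounds=[(0.0, 1.0), (0.0, 1.0)], constraints=cons, method="SLSQP")
      print("optimal weights for this choice of lam:", res.x.round(3))

    Sweeping the entries of lam traces out different compromise portfolios, which is exactly the role the weighted-sum scalarization plays in the multiobjective formulation.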

  15. A flexible model for the mean and variance functions, with application to medical cost data.

    Science.gov (United States)

    Chen, Jinsong; Liu, Lei; Zhang, Daowen; Shih, Ya-Chen T

    2013-10-30

    Medical cost data are often skewed to the right and heteroscedastic, having a nonlinear relation with covariates. To tackle these issues, we consider an extension to generalized linear models by assuming nonlinear associations of covariates in the mean function and allowing the variance to be an unknown but smooth function of the mean. We make no further assumption on the distributional form. The unknown functions are described by penalized splines, and the estimation is carried out using nonparametric quasi-likelihood. Simulation studies show the flexibility and advantages of our approach. We apply the model to the annual medical costs of heart failure patients in the clinical data repository at the University of Virginia Hospital System. Copyright © 2013 John Wiley & Sons, Ltd.

  16. ANALISIS PORTOFOLIO RESAMPLED EFFICIENT FRONTIER BERDASARKAN OPTIMASI MEAN-VARIANCE

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    The right asset allocation decision in a portfolio investment can maximize return and/or minimize risk. The method most often used in portfolio optimization is the Markowitz Mean-Variance method. In practice, this method has the weakness of not being very stable: small changes in the estimates of the input parameters lead to large changes in the portfolio composition. For this reason, a portfolio optimization method has been developed that can overcome the instability of the Mean-Variance method ...

  17. Smoothed analysis of the k-means method

    NARCIS (Netherlands)

    Arthur, David; Manthey, Bodo; Röglin, Heiko

    2011-01-01

    The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis.

  18. On the mean and variance of the writhe of random polygons

    International Nuclear Information System (INIS)

    Portillo, J; Scharein, R; Arsuaga, J; Vazquez, M; Diao, Y

    2011-01-01

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an 'ideal' conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.

  19. On the mean and variance of the writhe of random polygons.

    Science.gov (United States)

    Portillo, J; Diao, Y; Scharein, R; Arsuaga, J; Vazquez, M

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an "ideal" conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.

  20. Comparisons and Characterizations of the Mean-Variance, Mean-VaR, Mean-CVaR Models for Portfolio Selection With Background Risk

    OpenAIRE

    Xu, Guo; Wing-Keung, Wong; Lixing, Zhu

    2013-01-01

    This paper investigates the impact of background risk on an investor’s portfolio choice in a mean-VaR, mean-CVaR and mean-variance framework, and analyzes the characterizations of the mean-variance boundary and mean-VaR efficient frontier in the presence of background risk. We also consider the case with a risk-free security.

  1. On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.

    Science.gov (United States)

    Koyama, Shinsuke

    2015-07-01

    We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
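
    A minimal illustration of the variance-to-mean power relationship: interspike-interval means and variances are collected across several firing-rate conditions, and the scale factor and exponent are fitted by ordinary log-log regression. The gamma-distributed toy ISIs are an assumption, and this least-squares fit stands in for, but is not, the maximum likelihood method developed in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      means, variances = [], []
      for target_mean in np.linspace(0.01, 0.2, 20):        # 20 hypothetical firing-rate conditions
          isi = rng.gamma(shape=2.0, scale=target_mean / 2.0, size=2000)  # toy interspike intervals
          means.append(isi.mean())
          variances.append(isi.var())

      # Fit variance = c * mean**b by least squares in log-log coordinates.
      b, log_c = np.polyfit(np.log(means), np.log(variances), 1)
      print(f"exponent b = {b:.2f}, scale factor c = {np.exp(log_c):.3f}")

    For these gamma ISIs the fit recovers an exponent close to 2 and a scale factor close to 0.5, as implied by the construction.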

  2. Mean--variance portfolio optimization when means and covariances are unknown

    OpenAIRE

    Tze Leung Lai; Haipeng Xing; Zehao Chen

    2011-01-01

    Markowitz's celebrated mean--variance portfolio optimization theory assumes that the means and covariances of the underlying asset returns are known. In practice, they are unknown and have to be estimated from historical data. Plugging the estimates into the efficient frontier that assumes known parameters has led to portfolios that may perform poorly and have counter-intuitive asset allocation weights; this has been referred to as the "Markowitz optimization enigma." After reviewing differen...

  3. Assessment of texture stationarity using the asymptotic behavior of the empirical mean and variance.

    Science.gov (United States)

    Blanc, Rémy; Da Costa, Jean-Pierre; Stitou, Youssef; Baylou, Pierre; Germain, Christian

    2008-09-01

    Given textured images considered as realizations of 2-D stochastic processes, a framework is proposed to evaluate the stationarity of their mean and variance. Existing strategies focus on the asymptotic behavior of the empirical mean and variance (respectively EM and EV), known for some types of nondeterministic processes. In this paper, the theoretical asymptotic behaviors of the EM and EV are studied for large classes of second-order stationary ergodic processes, in the sense of the Wold decomposition scheme, including harmonic and evanescent processes. Minimal rates of convergence for the EM and the EV are derived for these processes; they are used as criteria for assessing the stationarity of textures. The experimental estimation of the rate of convergence is achieved using a nonparametric block sub-sampling method. Our framework is evaluated on synthetic processes with stationary or nonstationary mean and variance and on real textures. It is shown that anomalies in the asymptotic behavior of the empirical estimators allow detecting nonstationarities of the mean and variance of the processes in an objective way.
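
    A toy version of the block sub-sampling idea, reduced to one dimension: the empirical mean is computed over non-overlapping blocks of increasing size, and the decay of its dispersion with block size is fitted on a log-log scale to estimate the rate of convergence. The AR(1) test signal and the chosen block sizes are assumptions; the paper works with 2-D textures and a more careful nonparametric procedure.

      import numpy as np

      rng = np.random.default_rng(2)
      n, phi = 2 ** 16, 0.6
      x = np.zeros(n)                      # stationary AR(1) signal standing in for one texture line
      eps = rng.normal(size=n)
      for t in range(1, n):
          x[t] = phi * x[t - 1] + eps[t]

      block_sizes = 2 ** np.arange(4, 12)
      spreads = []
      for b in block_sizes:
          blocks = x[: (n // b) * b].reshape(-1, b)
          spreads.append(blocks.mean(axis=1).std())   # dispersion of the empirical mean across blocks

      # Slope of log(spread) vs log(block size); roughly -1/2 for a short-memory stationary process.
      slope = np.polyfit(np.log(block_sizes), np.log(spreads), 1)[0]
      print(f"empirical mean converges roughly like n^({slope:.2f})")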

  4. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noise nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min–max criterion. The proposed framework formulates the PET image reconstruction problem to be a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors with possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H ∞ filter. Unlike conventional statistical reconstruction algorithms, that rely on the statistical modeling methods of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle from imperfect statistical assumptions to even no a priori statistical assumptions. The performance and accuracy of reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential. (paper)

  5. OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE.

    Science.gov (United States)

    Xie, Xianchao; Kou, S C; Brown, Lawrence

    2016-03-01

    This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results.
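
    A generic illustration of the kind of shrinkage estimator studied here: observed means are shrunk towards the grand mean using a James-Stein-type plug-in factor under assumed known, equal sampling variances. This is not the semi-parametric estimator proposed in the paper; the simulated means and variances are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      p = 50
      theta = rng.normal(0.0, 1.0, size=p)           # unknown true means
      sigma2 = np.full(p, 0.25)                      # known, equal sampling variances (assumption)
      x = rng.normal(theta, np.sqrt(sigma2))         # one observation per mean

      # Shrink the observations towards the grand mean with a positive-part plug-in factor.
      grand = x.mean()
      shrink = max(0.0, 1.0 - (p - 3) * sigma2.mean() / np.sum((x - grand) ** 2))
      theta_hat = grand + shrink * (x - grand)

      print("unshrunk risk :", np.mean((x - theta) ** 2).round(3))
      print("shrinkage risk:", np.mean((theta_hat - theta) ** 2).round(3))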

  6. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    DEFF Research Database (Denmark)

    Højgaard, Bjarne; Vigna, Elena

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean...... as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate...... that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting....

  7. DIFFERENCES BETWEEN MEAN-VARIANCE AND MEAN-CVAR PORTFOLIO OPTIMIZATION MODELS

    Directory of Open Access Journals (Sweden)

    Panna Miskolczi

    2016-07-01

    Full Text Available Everybody has heard that one should not expect high returns without high risk, or safety without low returns. The goal of portfolio theory is to find the balance between maximizing the return and minimizing the risk. To do so, we first have to understand and measure the risk. Naturally, a good risk measure has to satisfy several properties, in theory and in practice. Markowitz suggested using the variance as a risk measure in portfolio theory. This led to the so-called mean-variance model, for which Markowitz received the Nobel Prize in 1990. The model has been criticized because it is well suited to elliptical distributions but may lead to incorrect conclusions in the case of non-elliptical distributions. Since then many risk measures have been introduced, of which the Value at Risk (VaR) has been the most widely used in recent years. Despite the widespread use of the Value at Risk, there are some fundamental problems with it. It does not satisfy the subadditivity property and it ignores the severity of losses in the far tail of the profit-and-loss (P&L) distribution. Moreover, its non-convexity makes VaR impossible to use in optimization problems. To overcome these issues, the Expected Shortfall (ES) was developed as a coherent risk measure. Expected Shortfall is also called Conditional Value at Risk (CVaR). Compared to Value at Risk, ES is more sensitive to the tail behaviour of the P&L distribution function. In the first part of the paper I state the definitions of these three risk measures. In the second part I deal with my main question: what happens if we replace the variance with the Expected Shortfall in the portfolio optimization process? Do we obtain different optimal portfolios as a solution, and does the solution thus suggest deciding differently in the two cases? To answer these questions I analyse seven Hungarian stock exchange companies. First I use the mean-variance portfolio optimization model
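
    The sketch below mirrors the comparison asked about in the abstract on simulated data: it computes a minimum-variance portfolio and a minimum-CVaR (Expected Shortfall) portfolio over the same long-only feasible set, so one can inspect whether the two criteria pick different weights. The heavy-tailed simulated returns for seven assets are an assumption standing in for the Hungarian stock data, and direct minimization of the empirical ES is a shortcut; in practice the Rockafellar-Uryasev linear-programming formulation is usually preferred.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      # Hypothetical daily returns for 7 stocks, heavy-tailed so that the tails matter.
      R = rng.standard_t(df=4, size=(1500, 7)) * 0.01 + rng.normal(0.0003, 0.0002, size=7)

      def es(w, alpha=0.95):
          losses = -(R @ w)
          var = np.quantile(losses, alpha)
          return losses[losses >= var].mean()        # Expected Shortfall / CVaR of the portfolio

      n_assets = R.shape[1]
      cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
      bounds = [(0.0, 1.0)] * n_assets
      w0 = np.full(n_assets, 1.0 / n_assets)

      w_minvar = minimize(lambda w: (R @ w).var(), w0, bounds=bounds, constraints=cons, method="SLSQP").x
      w_mincvar = minimize(es, w0, bounds=bounds, constraints=cons, method="SLSQP").x
      print("min-variance weights:", w_minvar.round(3))
      print("min-CVaR     weights:", w_mincvar.round(3))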

  8. Mean-variance portfolio allocation with a value at risk constraint

    OpenAIRE

    Enrique Sentana

    2001-01-01

    In this Paper, I first provide a simple unifying approach to static Mean-Variance analysis and Value at Risk, which highlights their similarities and differences. Then I use it to explain how fund managers can take investment decisions that satisfy the VaR restrictions imposed on them by regulators, within the well-known Mean-Variance allocation framework. I do so by introducing a new type of line to the usual mean-standard deviation diagram, called IsoVaR, which represents all the portfolios ...

  9. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed in an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multi period mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  10. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    Science.gov (United States)

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  11. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
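
    A toy discrete example of the zero-variance principle stated above: when each state is sampled with probability proportional to the true probability times its expected score, the importance-weighted estimator takes the same value on every sample, so its variance vanishes. The three-state probabilities and scores are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      p = np.array([0.5, 0.3, 0.2])       # true sampling probabilities
      s = np.array([1.0, 4.0, 10.0])      # expected score of each state
      target = float(p @ s)

      q = p * s / (p @ s)                 # zero-variance density: proportional to probability x score
      analog = s[rng.choice(3, size=10_000, p=p)]       # unbiased estimator, nonzero variance
      idx = rng.choice(3, size=10_000, p=q)
      weighted = p[idx] * s[idx] / q[idx]               # importance-weighted estimator: identical on every sample

      print(f"target {target:.2f}: analog mean {analog.mean():.3f}, variance {analog.var():.3f}")
      print(f"target {target:.2f}: weighted mean {weighted.mean():.3f}, variance {weighted.var():.2e}")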

  12. Mean-Variance Analysis in a Multiperiod Setting

    OpenAIRE

    Frauendorfer, Karl; Siede, Heiko

    1997-01-01

    Similar to the classical Markowitz approach it is possible to apply a mean-variance criterion to a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show equivalent structural properties as...

  13. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem

  14. Mean-Coherent Risk and Mean-Variance Approaches in Portfolio Selection : An Empirical Comparison

    NARCIS (Netherlands)

    Polbennikov, S.Y.; Melenberg, B.

    2005-01-01

    We empirically analyze the implementation of coherent risk measures in portfolio selection. First, we compare optimal portfolios obtained through mean-coherent risk optimization with corresponding mean-variance portfolios. We find that, even for a typical portfolio of equities, the outcomes can be

  15. Regime shifts in mean-variance efficient frontiers: some international evidence

    OpenAIRE

    Massimo Guidolin; Federica Ria

    2010-01-01

    Regime switching models have been assuming a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity po...

  16. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
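
    The sketch below is only loosely inspired by the model described above: the rectified and smoothed EMG is squared to give a crude running variance estimate, and an inverse gamma distribution is then fitted to those estimates by simple moment matching. The synthetic signal, the smoothing window and the moment-matching step are all assumptions standing in for the marginal-likelihood procedure of the paper.

      import numpy as np

      rng = np.random.default_rng(6)
      # Synthetic EMG: zero-mean Gaussian noise whose slowly varying variance follows an inverse gamma law.
      alpha_true, beta_true = 4.0, 3.0
      seg_len, n_seg = 500, 400
      local_var = np.repeat(beta_true / rng.gamma(alpha_true, 1.0, size=n_seg), seg_len)
      emg = rng.normal(0.0, np.sqrt(local_var))

      # Rectify and smooth, then square to obtain a crude running variance estimate (E|z| = sigma*sqrt(2/pi)).
      win = 100
      smoothed = np.convolve(np.abs(emg), np.ones(win) / win, mode="valid")
      var_est = smoothed ** 2 * np.pi / 2.0

      # Moment matching for the inverse gamma: mean = b/(a-1), variance = b^2 / ((a-1)^2 (a-2)).
      m, v = var_est.mean(), var_est.var()
      a_hat = m ** 2 / v + 2.0
      b_hat = m * (a_hat - 1.0)
      print(f"alpha ~ {a_hat:.2f} (true {alpha_true}), beta ~ {b_hat:.2f} (true {beta_true})")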

  17. Markov switching mean-variance frontier dynamics: theory and international evidence

    OpenAIRE

    M. Guidolin; F. Ria

    2010-01-01

    It is well-known that regime switching models are able to capture the presence of rich non-linear patterns in the joint distribution of asset returns. After reviewing key concepts and technical issues related to specifying, estimating, and using multivariate Markov switching models in financial applications, in this paper we map the presence of regimes in means, variances, and covariances of asset returns into explicit dynamics of the Markowitz mean-variance frontier. In particular, we show b...

  18. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  19. Smoothing the payoff for efficient computation of Basket option prices

    KAUST Repository

    Bayer, Christian

    2017-07-22

    We consider the problem of pricing basket options in a multivariate Black–Scholes or Variance-Gamma model. From a numerical point of view, pricing such options corresponds to moderate and high-dimensional numerical integration problems with non-smooth integrands. Due to this lack of regularity, higher order numerical integration techniques may not be directly available, requiring the use of methods like Monte Carlo specifically designed to work for non-regular problems. We propose to use the inherent smoothing property of the density of the underlying in the above models to mollify the payoff function by means of an exact conditional expectation. The resulting conditional expectation is unbiased and yields a smooth integrand, which is amenable to the efficient use of adaptive sparse-grid cubature. Numerical examples indicate that the high-order method may perform orders of magnitude faster than Monte Carlo or Quasi Monte Carlo methods in dimensions up to 35.

  20. On Mean-Variance Hedging of Bond Options with Stochastic Risk Premium Factor

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Kumar, Suresh K.

    2014-01-01

    We consider the mean-variance hedging problem for pricing bond options using the yield curve as the observation. The model considered contains infinite-dimensional noise sources with the stochastically varying risk premium. Hence our model is incomplete. We consider mean-variance hedging under the

  1. Origin and consequences of the relationship between protein mean and variance.

    Science.gov (United States)

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.

  2. Excluded-Mean-Variance Neural Decision Analyzer for Qualitative Group Decision Making

    Directory of Open Access Journals (Sweden)

    Ki-Young Song

    2012-01-01

    Full Text Available Many qualitative group decisions in professional fields such as law, engineering, economics, psychology, and medicine that appear to be crisp and certain are in reality shrouded in fuzziness as a result of uncertain environments and the nature of human cognition within which the group decisions are made. In this paper we introduce an innovative approach to group decision making in uncertain situations by using a mean-variance neural approach. The key idea of this proposed approach is to compute the excluded mean of individual evaluations and weight it by applying a variance influence function (VIF); this process of weighting the excluded mean by VIF provides an improved result in the group decision making. In this paper, a case study with the proposed excluded-mean-variance approach is also presented. The results of this case study indicate that this proposed approach can improve the effectiveness of qualitative decision making by providing the decision maker with a new cognitive tool to assist in the reasoning process.

  3. Comparison of some nonlinear smoothing methods

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1977-01-01

    Due to the poor quality of many nuclear medicine images, computer-driven smoothing procedures are frequently employed to enhance the diagnostic utility of these images. While linear methods were first tried, it was discovered that nonlinear techniques produced superior smoothing with little detail suppression. We have compared four methods: Gaussian smoothing (linear), two-dimensional least-squares smoothing (linear), two-dimensional least-squares bounding (nonlinear), and two-dimensional median smoothing (nonlinear). The two dimensional least-squares procedures have yielded the most satisfactorily enhanced images, with the median smoothers providing quite good images, even in the presence of widely aberrant points
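
    A quick, partial reproduction of this kind of comparison using scipy.ndimage: a linear Gaussian smoother and a nonlinear median smoother are applied to a noisy synthetic "hot spot" image and scored against the noiseless truth. The test image and filter parameters are assumptions, and the two-dimensional least-squares procedures of the paper are not implemented here.

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(7)
      # Synthetic "hot spot" image with Poisson counting noise, loosely mimicking a nuclear medicine image.
      truth = np.full((64, 64), 5.0)
      truth[24:40, 24:40] += 50.0
      noisy = rng.poisson(truth).astype(float)

      gauss = ndimage.gaussian_filter(noisy, sigma=1.5)     # linear smoothing
      median = ndimage.median_filter(noisy, size=3)         # nonlinear smoothing

      for name, img in [("noisy", noisy), ("gaussian", gauss), ("median", median)]:
          rmse = np.sqrt(np.mean((img - truth) ** 2))
          print(f"{name:8s} RMSE = {rmse:.2f}")

    On images like this the median filter tends to preserve the edges of the hot region better than the Gaussian filter at comparable noise suppression, which is the qualitative point made in the record above.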

  4. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that the skewness and asymmetry have no significant impact on the mean variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts whether the common market portfolios such as the S&P 500, value weighted and equal weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...

  5. Temporal variance reverses the impact of high mean intensity of stress in climate change experiments.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena

    2006-10-01

    Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.

  6. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP...

  7. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of the forecasting period. For this purpose, data on the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber (cents/kg), giving three different time series, are used in the comparison. The forecasting accuracy of each model is then measured by examining the prediction errors produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for exchange rates. On the contrary, the Exponential Smoothing Method can produce better forecasts for the exchange rate, whose time series has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
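
    A minimal statsmodels version of such a comparison on a synthetic series, scored with MSE, MAPE and MAD as in the study. The random-walk-like test series, the ARIMA(1,1,1) order, the use of simple exponential smoothing and the 24-step holdout are assumptions rather than choices from the paper.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.tsa.holtwinters import SimpleExpSmoothing

      rng = np.random.default_rng(8)
      y = 2400 + np.cumsum(rng.normal(0.0, 20.0, size=300))   # synthetic price-like series
      train, test = y[:-24], y[-24:]

      arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=24)
      ses_fc = SimpleExpSmoothing(train, initialization_method="estimated").fit().forecast(24)

      def scores(fc):
          err = test - fc
          mse = np.mean(err ** 2)
          mape = np.mean(np.abs(err / test)) * 100.0
          mad = np.mean(np.abs(err))
          return mse, mape, mad

      for name, fc in [("ARIMA(1,1,1)", arima_fc), ("SES", ses_fc)]:
          mse, mape, mad = scores(fc)
          print(f"{name:12s} MSE = {mse:10.1f}  MAPE = {mape:5.2f}%  MAD = {mad:7.1f}")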

  8. Improved smoothed analysis of the k-means method

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, Heiko; Mathieu, C.

    2009-01-01

    The k-means method is a widely used clustering algorithm. One of its distinguished features is its speed in practice. Its worst-case running-time, however, is exponential, leaving a gap between practical and theoretical performance. Arthur and Vassilvitskii [3] aimed at closing this gap, and they

  9. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...... in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude....

  10. A Random Parameter Model for Continuous-Time Mean-Variance Asset-Liability Management

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2015-01-01

    Full Text Available We consider a continuous-time mean-variance asset-liability management problem in a market with random market parameters; that is, interest rate, appreciation rates, and volatility rates are considered to be stochastic processes. By using the theories of stochastic linear-quadratic (LQ) optimal control and backward stochastic differential equations (BSDEs), we tackle this problem and derive optimal investment strategies as well as the mean-variance efficient frontier analytically in terms of the solution of BSDEs. We find that the efficient frontier is still a parabola in a market with random parameters. Comparing with the existing results, we also find that the liability does not affect the feasibility of the mean-variance portfolio selection problem. However, in an incomplete market with random parameters, the liability can not be fully hedged.

  11. A mean-variance frontier in discrete and continuous time

    NARCIS (Netherlands)

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation

  12. Mean and variance evolutions of the hot and cold temperatures in Europe

    Energy Technology Data Exchange (ETDEWEB)

    Parey, Sylvie [EDF/R and D, Chatou Cedex (France); Dacunha-Castelle, D. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); Hoang, T.T.H. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); EDF/R and D, Chatou Cedex (France)

    2010-02-15

    In this paper, we examine the trends of temperature series in Europe, for the mean as well as for the variance in hot and cold seasons. To do so, we use as long and homogenous series as possible, provided by the European Climate Assessment and Dataset project for different locations in Europe, as well as the European ENSEMBLES project gridded dataset and the ERA40 reanalysis. We provide a definition of trends that we keep as intrinsic as possible and apply non-parametric statistical methods to analyse them. Obtained results show a clear link between trends in mean and variance of the whole series of hot or cold temperatures: in general, variance increases when the absolute value of temperature increases, i.e. with increasing summer temperature and decreasing winter temperature. This link is reinforced in locations where winter and summer climate has more variability. In very cold or very warm climates, the variability is lower and the link between the trends is weaker. We performed the same analysis on outputs of six climate models proposed by European teams for the 1961-2000 period (1950-2000 for one model), available through the PCMDI portal for the IPCC fourth assessment climate model simulations. The models generally perform poorly and have difficulties in capturing the relation between the two trends, especially in summer. (orig.)

  13. Neutron Transport in Spatially Random Media: An Assessment of the Accuracy of First Order Smoothing

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2000-01-01

    A formalism has been developed for studying the transmission of neutrons through a spatially stochastic medium. The stochastic components are represented by absorbing plates of randomly varying strength and random position. This type of geometry enables the Feinberg-Galanin-Horning method to be employed and leads to the solution of a coupled set of linear equations for the flux at the plate positions. The matrix of the coefficients contains members that are random and these are solved by simulation. That is, the strength and plate positions are sampled from uniform distributions and the equations solved many times (in this case 10^5 simulations are carried out). Probability distributions for the plate transmission and reflection factors are constructed from which the mean and variance can be computed. These essentially exact solutions enable closure approximations to be assessed for accuracy. To this end, we have compared the mean and variance obtained from the first order smoothing approximation of Keller with the exact results and have found excellent agreement for the mean values but note deviations of up to 40% for the variance. Nevertheless, for the problems considered here, first order smoothing appears to be of practical value and is very efficient numerically in comparison with simulation

  14. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    Science.gov (United States)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
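
    The core estimator discussed above, a smoothed spectrum obtained by averaging the raw periodogram over a window of contiguous frequencies, can be sketched as follows. The AR(1) test signal and the rectangular (Daniell-type) window of 2m+1 ordinates are assumptions, and no bias correction or ogive computation is attempted; the quoted relative error is the usual rule of thumb for such an average.

      import numpy as np

      rng = np.random.default_rng(9)
      n, phi = 2 ** 14, 0.8
      x = np.zeros(n)                       # AR(1) test signal standing in for a turbulence record
      eps = rng.normal(size=n)
      for t in range(1, n):
          x[t] = phi * x[t - 1] + eps[t]

      # Raw periodogram at the positive Fourier frequencies.
      X = np.fft.rfft(x - x.mean())
      pgram = np.abs(X[1:]) ** 2 / n

      # Smoothed spectrum: average over a window of 2m+1 contiguous frequencies.
      m = 16
      kernel = np.ones(2 * m + 1) / (2 * m + 1)
      smoothed = np.convolve(pgram, kernel, mode="same")

      # Rule of thumb: relative random error of the smoothed estimate is about 1/sqrt(2m+1).
      print(f"approximate relative error of the smoothed spectrum: {1.0 / np.sqrt(2 * m + 1):.2f}")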

  15. Effects of Diversification of Assets on Mean and Variance | Jayeola ...

    African Journals Online (AJOL)

    Diversification is a means of minimizing risk and maximizing returns by investing in a variety of assets within the portfolio. This paper determines the effects of diversification on mean and variance for three types of assets: uncorrelated, perfectly correlated and perfectly negatively correlated assets. To go about this, ...
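
    A small worked example of the effect described above, for an equally weighted two-asset portfolio under the three correlation cases (uncorrelated, perfectly correlated, perfectly negatively correlated). The individual expected returns and standard deviations are assumed for illustration; note that the portfolio mean is unaffected by the correlation while the variance changes sharply.

      import numpy as np

      mu = np.array([0.08, 0.12])        # assumed expected returns of the two assets
      sigma = np.array([0.15, 0.25])     # assumed standard deviations
      w = np.array([0.5, 0.5])           # equal weights

      for rho in (0.0, 1.0, -1.0):       # uncorrelated, perfectly correlated, perfectly negatively correlated
          cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                          [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
          print(f"rho = {rho:+.0f}: portfolio mean = {w @ mu:.3f}, portfolio variance = {w @ cov @ w:.4f}")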

  16. Mean-variance portfolio optimization with state-dependent risk aversion

    DEFF Research Database (Denmark)

    Bjoerk, Tomas; Murgoci, Agatha; Zhou, Xun Yu

    2014-01-01

    The objective of this paper is to study the mean-variance portfolio optimization in continuous time. Since this problem is time inconsistent we attack it by placing the problem within a game theoretic framework and look for subgame perfect Nash equilibrium strategies. This particular problem has...

  17. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  18. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)]

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  19. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    OpenAIRE

    Hoejgaard, B.; Vigna, E.

    2007-01-01

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean-variance approach is then compared to other investment strategies adopted in DC pension schemes, namely the target-based approach and the lifestyle strategy. The comparison is done both in a theoretical...

  20. An adaptive segment method for smoothing lidar signal based on noise estimation

    Science.gov (United States)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead produces different end points for different signals, so the smoothing windows can be set adaptively. The window is always set to half of the segment length, and average smoothing is then applied within each segment. An iterative process is required to reduce the end-point aberration effect of average smoothing, and two or three iterations are enough. In the ASSM, the signal is smoothed in the spatial domain rather than the frequency domain, which means that frequency-domain disturbances are avoided. In the experimental work, a lidar echo was simulated as if produced by a space-borne lidar (e.g. CALIOP), and white Gaussian noise was added to the echo to represent the random noise arising from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise; in the test, N was set to 3 and two iterations were used. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations may need to be re-optimized when the ASSM is applied to a different lidar.
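
    A sketch of the segmentation logic described above, under stated assumptions: segment end points are placed wherever the jump between adjacent samples exceeds 3Nσ, σ is estimated from a far-range background region, each segment is average-smoothed with a window of half its length, and the smoothing is iterated twice. The synthetic echo, the background region and the window handling at segment edges are assumptions, not details from the paper.

      import numpy as np

      def assm(signal, sigma, N=3, iterations=2):
          """Split at jumps larger than 3*N*sigma, then average-smooth each segment separately."""
          jumps = np.where(np.abs(np.diff(signal)) > 3 * N * sigma)[0] + 1
          edges = np.concatenate(([0], jumps, [signal.size]))
          out = signal.astype(float).copy()
          for _ in range(iterations):
              for lo, hi in zip(edges[:-1], edges[1:]):
                  win = max(1, (hi - lo) // 2)                 # window = half the segment length
                  out[lo:hi] = np.convolve(out[lo:hi], np.ones(win) / win, mode="same")
          return out

      rng = np.random.default_rng(10)
      r = np.arange(1000)
      echo = np.exp(-r / 400.0) + 0.5 * (np.abs(r - 300) < 20)   # toy lidar echo with a sharp layer
      noisy = echo + rng.normal(0.0, 0.02, size=echo.size)
      sigma = noisy[-200:].std()                                 # noise estimated from the far-range background
      smoothed = assm(noisy, sigma)
      print(f"residual std after smoothing: {np.std(smoothed - echo):.4f}")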

  1. Investor preferences for oil spot and futures based on mean-variance and stochastic dominance

    NARCIS (Netherlands)

    H.H. Lean (Hooi Hooi); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    This paper examines investor preferences for oil spot and futures based on mean-variance (MV) and stochastic dominance (SD). The mean-variance criterion cannot distinguish the preferences for spot and futures, whereas the SD tests lead to the conclusion that spot dominates futures in the downside

  2. A smooth mixture of Tobits model for healthcare expenditure.

    Science.gov (United States)

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th, percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Complementary responses to mean and variance modulations in the perfect integrate-and-fire model.

    Science.gov (United States)

    Pressley, Joanna; Troyer, Todd W

    2009-07-01

    In the perfect integrate-and-fire model (PIF), the membrane voltage is proportional to the integral of the input current since the time of the previous spike. It has been shown that the firing rate within a noise free ensemble of PIF neurons responds instantaneously to dynamic changes in the input current, whereas in the presence of white noise, model neurons preferentially pass low frequency modulations of the mean current. Here, we prove that when the input variance is perturbed while holding the mean current constant, the PIF responds preferentially to high frequency modulations. Moreover, the linear filters for mean and variance modulations are complementary, adding exactly to one. Since changes in the rate of Poisson distributed inputs lead to proportional changes in the mean and variance, these results imply that an ensemble of PIF neurons transmits a perfect replica of the time-varying input rate for Poisson distributed input. A more general argument shows that this property holds for any signal leading to proportional changes in the mean and variance of the input current.

  4. Directly measuring mean and variance of infinite-spectrum observables such as the photon orbital angular momentum.

    Science.gov (United States)

    Piccirillo, Bruno; Slussarenko, Sergei; Marrucci, Lorenzo; Santamato, Enrico

    2015-10-19

    The standard method for experimentally determining the probability distribution of an observable in quantum mechanics is the measurement of the observable spectrum. However, for infinite-dimensional degrees of freedom, this approach would require ideally infinite or, more realistically, a very large number of measurements. Here we consider an alternative method which can yield the mean and variance of an observable of an infinite-dimensional system by measuring only a two-dimensional pointer weakly coupled with the system. In our demonstrative implementation, we determine both the mean and the variance of the orbital angular momentum of a light beam without acquiring the entire spectrum, but measuring the Stokes parameters of the optical polarization (acting as pointer), after the beam has suffered a suitable spin-orbit weak interaction. This example can provide a paradigm for a new class of useful weak quantum measurements.

  5. Optimal control of LQG problem with an explicit trade-off between mean and variance

    Science.gov (United States)

    Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang

    2011-12-01

    For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function on the expectation and the variance of the conventional performance index is considered. The utility function is viewed as an overall objective of the system and can perform the optimal trade-off between the mean and the variance of performance index. The nonlinear utility function is first converted into an auxiliary parameters optimisation problem about the expectation and the variance. Then an optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is designed by nonlinear mathematical programming. Finally, simulation results are given to verify the algorithm's effectiveness obtained in this article.

  6. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    Science.gov (United States)

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    Directory of Open Access Journals (Sweden)

    Neslihan Fidan Keçeci

    2016-10-01

    Full Text Available The paper compares portfolio optimization with Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for optimization with SSD constraints, mean-variance and minimum variance portfolio optimization. We have done in-sample and out-of-sample simulations for portfolios of stocks from the Dow Jones, S&P 100 and DAX indices. The considered portfolios SSD-dominate the Dow Jones, S&P 100 and DAX indices. Simulation demonstrated a superior performance of portfolios with SSD constraints versus mean-variance and minimum variance portfolios.

  8. The mean and variance of phylogenetic diversity under rarefaction

    OpenAIRE

    Nipperess, David A.; Matsen, Frederick A.

    2013-01-01

    Phylogenetic diversity (PD) depends on sampling intensity, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for t...

  9. Mean-downside risk versus mean-variance efficient asset class allocations in relation to the investment horizon

    NARCIS (Netherlands)

    Ruiter, de A.J.C.; Brouwer, F.

    1996-01-01

    In this paper we examine the difference between a Mean-Downside Risk (MDR) based asset allocation decision and a Mean-Variance (MV) based decision. Using a vector autoregressive specification, future return series, from 1 month up to 10 years, of several US stock and bond asset classes have been

  10. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
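
    The role of the Fano factor mentioned above can be made concrete with a short sketch. The following Python snippet is not taken from the paper; the rates and drifting trend are invented for illustration. It computes the variance-to-mean ratio of binned photocounts and shows how a slow drift of the mean inflates it above the Poisson value of 1, which is exactly the kind of artifact the proposed pre-processing is designed to remove.

```python
import numpy as np

def fano_factor(counts):
    """Fano factor: variance-to-mean ratio of a photocount series.

    For a stationary Poisson process the Fano factor is 1; a nonstationary
    trend inflates the variance and pushes the ratio above 1."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(0)
stationary = rng.poisson(lam=5.0, size=10_000)
trend = np.linspace(2.0, 8.0, 10_000)          # slowly drifting mean (illustrative)
nonstationary = rng.poisson(lam=trend)

print(fano_factor(stationary))      # close to 1
print(fano_factor(nonstationary))   # clearly above 1: the trend leaks into the variance
```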

  11. History based batch method preserving tally means

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Choi, Sung Hoon

    2012-01-01

    In the Monte Carlo (MC) eigenvalue calculations, the sample variance of a tally mean calculated from its cycle-wise estimates is biased because of the inter-cycle correlations of the fission source distribution (FSD). Recently, we proposed a new real variance estimation method, named the history-based batch method, in which an MC run is treated as multiple runs with a small number of histories per cycle to generate independent tally estimates. In this paper, the history-based batch method based on the weight correction is presented to preserve the tally mean from the original MC run. The effectiveness of the new method is examined for the weakly coupled fissile array problem as a function of the dominance ratio and the batch size, in comparison with other available schemes.

  12. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has been long challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
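
    As a concrete illustration of two of the estimators discussed above, the sketch below implements the DerSimonian-Laird moment estimator and a simple bisection version of the Paule-Mandel estimator for the between-study variance. The toy study effects and within-study variances are invented; this is a minimal sketch of the standard formulas, not the comparative machinery of the review.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Method-of-moments (DerSimonian-Laird) estimate of the between-study
    variance tau^2 from study effects y and within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / denom)

def paule_mandel(y, v, tol=1e-8, upper=1e3):
    """Paule-Mandel estimate: choose tau^2 so that the generalised Q statistic
    equals its expectation (k - 1). Solved here by bisection on [0, upper]."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def q_gen(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)

    if q_gen(0.0) <= k - 1:          # no excess heterogeneity detected
        return 0.0
    lo, hi = 0.0, upper
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if q_gen(mid) > k - 1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# toy example: five study means with their within-study variances (made up)
y = [0.30, 0.10, 0.45, 0.25, 0.60]
v = [0.04, 0.09, 0.05, 0.06, 0.08]
print(dersimonian_laird(y, v), paule_mandel(y, v))
```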

  13. Time-Consistent Strategies for a Multiperiod Mean-Variance Portfolio Selection Problem

    Directory of Open Access Journals (Sweden)

    Huiling Wu

    2013-01-01

    Full Text Available In past years it has been prevalent to derive precommitment strategies for Markowitz's mean-variance portfolio optimization problems, but much less is known about their time-consistent counterparts. This paper takes a step toward investigating the time-consistent Nash equilibrium strategies for a multiperiod mean-variance portfolio selection problem. Under the assumption that the risk aversion is, respectively, a constant and a function of the current wealth level, we obtain explicit expressions for the time-consistent Nash equilibrium strategy and the equilibrium value function. Many interesting properties of the time-consistent results are identified through numerical sensitivity analysis and by comparing them with the classical precommitment solutions.

  14. The mean and variance of environmental temperature interact to determine physiological tolerance and fitness.

    Science.gov (United States)

    Bozinovic, Francisco; Bastías, Daniel A; Boher, Francisca; Clavijo-Baquet, Sabrina; Estay, Sergio A; Angilletta, Michael J

    2011-01-01

    Global climate change poses one of the greatest threats to biodiversity. Most analyses of the potential biological impacts have focused on changes in mean temperature, but changes in thermal variance will also impact organisms and populations. We assessed the combined effects of the mean and variance of temperature on thermal tolerances, organismal survival, and population growth in Drosophila melanogaster. Because the performance of ectotherms relates nonlinearly to temperature, we predicted that responses to thermal variation (±0° or ±5°C) would depend on the mean temperature (17° or 24°C). Consistent with our prediction, thermal variation enhanced the rate of population growth (r_max) at a low mean temperature but depressed this rate at a high mean temperature. The interactive effect on fitness occurred despite the fact that flies improved their heat and cold tolerances through acclimation to thermal conditions. Flies exposed to a high mean and a high variance of temperature recovered from heat coma faster and survived heat exposure better than did flies that developed at other conditions. Relatively high survival following heat exposure was associated with low survival following cold exposure. Recovery from chill coma was affected primarily by the mean temperature; flies acclimated to a low mean temperature recovered much faster than did flies acclimated to a high mean temperature. To develop more realistic predictions about the biological impacts of climate change, one must consider the interactions between the mean environmental temperature and the variance of environmental temperature.

  15. The influence of mean climate trends and climate variance on beaver survival and recruitment dynamics.

    Science.gov (United States)

    Campbell, Ruairidh D; Nouvellet, Pierre; Newman, Chris; Macdonald, David W; Rosell, Frank

    2012-09-01

    Ecologists are increasingly aware of the importance of environmental variability in natural systems. Climate change is affecting both the mean and the variability in weather and, in particular, the effect of changes in variability is poorly understood. Organisms are subject to selection imposed by both the mean and the range of environmental variation experienced by their ancestors. Changes in the variability in a critical environmental factor may therefore have consequences for vital rates and population dynamics. Here, we examine ≥90-year trends in different components of climate (precipitation mean and coefficient of variation (CV); temperature mean, seasonal amplitude and residual variance) and consider the effects of these components on survival and recruitment in a population of Eurasian beavers (n = 242) over 13 recent years. Within climatic data, no trends in precipitation were detected, but trends in all components of temperature were observed, with mean and residual variance increasing and seasonal amplitude decreasing over time. A higher survival rate was linked (in order of influence based on Akaike weights) to lower precipitation CV (kits, juveniles and dominant adults), lower residual variance of temperature (dominant adults) and lower mean precipitation (kits and juveniles). No significant effects were found on the survival of nondominant adults, although the sample size for this category was low. Greater recruitment was linked (in order of influence) to higher seasonal amplitude of temperature, lower mean precipitation, lower residual variance in temperature and higher precipitation CV. Both climate means and variance, thus proved significant to population dynamics; although, overall, components describing variance were more influential than those describing mean values. That environmental variation proves significant to a generalist, wide-ranging species, at the slow end of the slow-fast continuum of life histories, has broad implications for

  16. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the sample variance and its bias are derived analytically for the case in which Gelbard's batch method is applied. The real variance estimated from this bias is then compared with the real variance calculated from replicas. When the batch method is used to calculate the sample variance, the covariance terms between tallies within the same batch are eliminated from the bias. For the 2 by 2 fission matrix problem, the real variance could be calculated regardless of whether the batch method was applied; however, as the batch size increased, the standard deviation of the real variance also increased. In a Monte Carlo estimation, the sample variance serves as the statistical uncertainty of the estimate, but it is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is now called Gelbard's batch method. It has been established that the sample variance approaches the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the Monte Carlo community, but so far no analytical interpretation of it has been given.
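
    The bias mechanism described above (positive inter-cycle correlation shrinking the naive sample variance of the mean) can be illustrated with generic batching, which is the idea underlying Gelbard's method. The sketch below is not the paper's analytical derivation; it merely batches an artificial AR(1) tally sequence and shows the variance estimate growing toward its true value as the batch size increases.

```python
import numpy as np

def variance_of_mean(x, batch_size=1):
    """Sample variance of the mean of cycle-wise tallies x, with optional
    batching: consecutive cycles are merged into batches before the usual
    independence-based formula is applied. With positively correlated cycles
    the unbatched estimate is biased low; batching reduces the bias."""
    x = np.asarray(x, float)
    n = (len(x) // batch_size) * batch_size
    batches = x[:n].reshape(-1, batch_size).mean(axis=1)
    return batches.var(ddof=1) / len(batches)

# AR(1) cycle-wise tallies mimic the inter-cycle correlation of the FSD
rng = np.random.default_rng(1)
rho, n_cycles = 0.8, 20_000
eps = rng.normal(size=n_cycles)
tally = np.empty(n_cycles)
tally[0] = eps[0]
for i in range(1, n_cycles):
    tally[i] = rho * tally[i - 1] + eps[i]

for b in (1, 10, 50):
    print(b, variance_of_mean(tally, batch_size=b))
# the estimate grows toward the real variance of the mean as the batch size increases
```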

  17. A Mean-Variance Explanation of FDI Flows to Developing Countries

    DEFF Research Database (Denmark)

    Sunesen, Eva Rytter

    country to another. This will have implications for the way investors evaluate the return and risk of investing abroad. This paper utilises a simple mean-variance optimisation framework where global and regonal factors capture the interdependence between countries. The model implies that FDI is driven...

  18. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    textabstractThis paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main

  19. Diffusion tensor smoothing through weighted Karcher means

    Science.gov (United States)

    Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie

    2014-01-01

    Diffusion tensor magnetic resonance imaging (MRI) quantifies the spatial distribution of water diffusion at each voxel on a regular grid of locations in a biological specimen by diffusion tensors, i.e., 3 × 3 positive definite matrices. Removal of noise from DTI is an important problem due to the high scientific relevance of DTI and the relatively low signal-to-noise ratio it provides. Leading approaches to this problem amount to estimation of weighted Karcher means of diffusion tensors within spatial neighborhoods, under various metrics imposed on the space of tensors. However, it is unclear how the behavior of these estimators varies with the magnitude of DTI sensor noise (the noise resulting from the thermal effects of MRI scanning) as well as the geometric structure of the underlying diffusion tensor neighborhoods. In this paper, we combine theoretical analysis, empirical analysis of simulated DTI data, and empirical analysis of real DTI scans to compare the noise removal performance of three kernel-based DTI smoothers that are based on Euclidean, log-Euclidean, and affine-invariant metrics. The results suggest, contrary to conventional wisdom, that imposing a simplistic Euclidean metric may in fact provide comparable or superior noise removal, especially in relatively unstructured regions and/or in the presence of moderate to high levels of sensor noise. On the contrary, log-Euclidean and affine-invariant metrics may lead to better noise removal in highly structured anatomical regions, especially when the sensor noise is of low magnitude. These findings emphasize the importance of considering the interplay of sensor noise magnitude and tensor field geometric structure when assessing diffusion tensor smoothing options. They also point to the necessity for continued development of smoothing methods that perform well across a large range of scenarios. PMID:25419264
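
    For readers unfamiliar with the metrics compared above, the following sketch contrasts the Euclidean (entry-wise) mean with the log-Euclidean mean of diffusion tensors. The two example tensors are made up, and the weighting is uniform rather than the kernel weighting used in the paper.

```python
import numpy as np
from scipy.linalg import expm, logm

def euclidean_mean(tensors, weights=None):
    """Weighted Euclidean mean, i.e. the ordinary entry-wise average."""
    tensors = np.asarray(tensors, float)
    if weights is None:
        weights = np.full(len(tensors), 1.0 / len(tensors))
    weights = np.asarray(weights, float) / np.sum(weights)
    return np.tensordot(weights, tensors, axes=1)

def log_euclidean_mean(tensors, weights=None):
    """Weighted log-Euclidean mean of SPD diffusion tensors: average the
    matrix logarithms, then map back with the matrix exponential."""
    tensors = [np.asarray(t, float) for t in tensors]
    if weights is None:
        weights = np.full(len(tensors), 1.0 / len(tensors))
    weights = np.asarray(weights, float) / np.sum(weights)
    log_avg = sum(w * logm(t) for w, t in zip(weights, tensors))
    return expm(log_avg)

# two anisotropic tensors with different principal directions (illustrative)
t1 = np.diag([3.0, 1.0, 1.0])
t2 = np.diag([1.0, 3.0, 1.0])
print(euclidean_mean([t1, t2]))
print(log_euclidean_mean([t1, t2]))
```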

  20. Mean-Variance portfolio optimization by using non constant mean and volatility based on the negative exponential utility function

    Science.gov (United States)

    Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat

    2017-03-01

    Stock investors also face risk, because daily stock prices fluctuate. To minimize this risk, investors usually form an investment portfolio. A portfolio consisting of several stocks is constructed to obtain the optimal composition of the investment. This paper discusses Mean-Variance optimization of a stock portfolio using a non-constant mean and volatility, based on the negative exponential utility function. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyze some stocks in Indonesia. The expected result is the proportion of investment in each stock analyzed.
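
    To make the Lagrangian step concrete, the sketch below solves the classical equality-constrained mean-variance problem in closed form. It uses a target-return formulation rather than the negative exponential utility of the paper, and the ARMA/GARCH-style inputs (mu, sigma) are invented numbers, so this is only an illustration of the multiplier technique, not the authors' procedure.

```python
import numpy as np

def mean_variance_weights(mu, sigma, target_return):
    """Closed-form Markowitz weights from the Lagrangian of
        min_w  w' Sigma w   s.t.  w' mu = target_return,  w' 1 = 1.
    Short positions are allowed (no inequality constraints)."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    ones = np.ones_like(mu)
    inv = np.linalg.inv(sigma)
    # 2x2 linear system in the two Lagrange multipliers
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    A = np.array([[a, b], [b, c]])
    lam = np.linalg.solve(A, np.array([1.0, target_return]))
    return inv @ (lam[0] * ones + lam[1] * mu)

# hypothetical mean vector and covariance matrix for three stocks
mu = np.array([0.0008, 0.0012, 0.0010])                 # expected daily returns
sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 6.0, 1.5],
                  [0.8, 1.5, 5.0]]) * 1e-4              # return covariance
w = mean_variance_weights(mu, sigma, target_return=0.0010)
print(w, w.sum(), w @ mu)   # weights sum to 1 and hit the target return
```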

  1. Heteroscedastic Tests Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  2. A three-dimensional cell-based smoothed finite element method for elasto-plasticity

    International Nuclear Information System (INIS)

    Lee, Kye Hyung; Im, Se Yong; Lim, Jae Hyuk; Sohn, Dong Woo

    2015-01-01

    This work is concerned with a three-dimensional cell-based smoothed finite element method for application to elastic-plastic analysis. The formulation of smoothed finite elements is extended to cover elastic-plastic deformations beyond the classical linear theory of elasticity, which has been the major application domain of smoothed finite elements. The finite strain deformations are treated with the aid of the formulation based on the hyperelastic constitutive equation. The volumetric locking originating from the nearly incompressible behavior of elastic-plastic deformations is remedied by relaxing the volumetric strain through the mean value. The comparison with the conventional finite elements demonstrates the effectiveness and accuracy of the present approach.

  3. A three-dimensional cell-based smoothed finite element method for elasto-plasticity

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kye Hyung; Im, Se Yong [KAIST, Daejeon (Korea, Republic of); Lim, Jae Hyuk [KARI, Daejeon (Korea, Republic of); Sohn, Dong Woo [Korea Maritime and Ocean University, Busan (Korea, Republic of)

    2015-02-15

    This work is concerned with a three-dimensional cell-based smoothed finite element method for application to elastic-plastic analysis. The formulation of smoothed finite elements is extended to cover elastic-plastic deformations beyond the classical linear theory of elasticity, which has been the major application domain of smoothed finite elements. The finite strain deformations are treated with the aid of the formulation based on the hyperelastic constitutive equation. The volumetric locking originating from the nearly incompressible behavior of elastic-plastic deformations is remedied by relaxing the volumetric strain through the mean value. The comparison with the conventional finite elements demonstrates the effectiveness and accuracy of the present approach.

  4. Feynman variance-to-mean in the context of passive neutron coincidence counting

    Energy Technology Data Exchange (ETDEWEB)

    Croft, S., E-mail: scroft@lanl.gov [Los Alamos National Laboratory, PO Box 1663, Los Alamos, NM 87545 (United States); Favalli, A.; Hauck, D.K.; Henzlova, D.; Santi, P.A. [Los Alamos National Laboratory, PO Box 1663, Los Alamos, NM 87545 (United States)

    2012-09-11

    Passive Neutron Coincidence Counting (PNCC) based on shift register autocorrelation time analysis of the detected neutron pulse train is an important Nondestructive Assay (NDA) method. It is used extensively in the quantification of plutonium and other spontaneously fissile materials for purposes of nuclear materials accountancy. In addition to the totals count rate, which is also referred to as the singles, gross or trigger rate, a quantity known as the reals coincidence rate, also called the pairs or doubles, is obtained from the difference between the measured neutron multiplicities in two measurement gates triggered by the incoming events on the pulse train. The reals rate is a measure of the number of time correlated pairs present on the pulse train and this can be related to the fission rates (and hence material mass) since fissions emit neutrons in bursts which are also detected in characteristic clusters. A closely related measurement objective is the determination of the reactivity of systems as they approach criticality. In this field an alternative autocorrelation signature is popular, the so-called Feynman variance-to-mean technique, which makes use of the multiplicity histogram formed by the periodic, or clock-triggered, opening of a coincidence gate. Workers in these two application areas share common challenges and improvement opportunities but are often separated by tradition, problem focus and technical language. The purpose of this paper is to recognize the close link between the Feynman variance-to-mean metric and traditional PNCC using shift register logic applied to correlated pulse trains. We show, using relationships for the late-gate (or accidentals) histogram recorded using a multiplicity shift register, how the Feynman Y-statistic, defined as the excess variance-to-mean ratio, can be expressed in terms of the singles and doubles rates familiar to the safeguards and waste assay communities. These two specialisms now have a direct bridge between
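
    A minimal numerical illustration of the clock-triggered gating described above is sketched below: neutron arrival times are binned into consecutive gates of fixed width and the excess variance-to-mean ratio Y is computed. The pulse train here is a synthetic Poisson stream, for which Y should be close to zero; the gate width and event rate are arbitrary choices, not values from the paper.

```python
import numpy as np

def feynman_y(event_times, gate_width, t_max=None):
    """Feynman statistic Y = Var(c)/Mean(c) - 1, where c are counts in
    consecutive, clock-triggered gates of fixed width. Y = 0 for a pure
    Poisson source; correlated (fission-chain) neutrons give Y > 0."""
    event_times = np.sort(np.asarray(event_times, float))
    if t_max is None:
        t_max = event_times[-1]
    n_gates = int(t_max // gate_width)
    edges = np.arange(n_gates + 1) * gate_width
    counts, _ = np.histogram(event_times, bins=edges)
    return counts.var(ddof=1) / counts.mean() - 1.0

# synthetic Poisson pulse train: Y should be near 0
rng = np.random.default_rng(2)
poisson_times = np.cumsum(rng.exponential(scale=1e-4, size=200_000))
print(feynman_y(poisson_times, gate_width=1e-2))
```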

  5. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    Science.gov (United States)

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
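
    The two valuation rules contrasted above can be written in a few lines. In the sketch below, the logarithmic utility and the risk-aversion coefficient are arbitrary illustrative choices, not quantities taken from the article.

```python
import numpy as np

def expected_utility(payoffs, probs, utility=np.log):
    """Expected utility: sum over states of p(state) * u(payoff in that state)."""
    return float(np.sum(np.asarray(probs) * utility(np.asarray(payoffs, float))))

def mean_variance_value(payoffs, probs, risk_aversion=0.5):
    """Mean-variance valuation: expected reward penalised by its variance."""
    payoffs, probs = np.asarray(payoffs, float), np.asarray(probs, float)
    mean = np.sum(probs * payoffs)
    var = np.sum(probs * (payoffs - mean) ** 2)
    return mean - risk_aversion * var

# a simple two-state gamble (values are illustrative)
payoffs, probs = [10.0, 2.0], [0.5, 0.5]
print(expected_utility(payoffs, probs))
print(mean_variance_value(payoffs, probs))
```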

  6. Mean-Variance portfolio optimization when each asset has individual uncertain exit-time

    Directory of Open Access Journals (Sweden)

    Reza Keykhaei

    2016-12-01

    Full Text Available The standard Markowitz Mean-Variance optimization model is a single-period portfolio selection approach in which the exit-time (or time-horizon) is deterministic. In this paper we study the Mean-Variance portfolio selection problem with uncertain exit-time, where each asset has an individual uncertain exit-time, which generalizes Markowitz's model. We provide some conditions under which the optimal portfolio of the generalized problem is independent of the exit-time distributions. It is also shown that, under some general circumstances, the sets of optimal portfolios in the generalized model and the standard model are the same.

  7. Model Optimisasi Portofolio Investasi Mean-Variance Tanpa dan Dengan Aset Bebas Risiko pada Saham Idx30

    Directory of Open Access Journals (Sweden)

    Basuki Basuki

    2017-07-01

    Full Text Available In this paper, the Mean-Variance investment portfolio optimization model without a risk-free asset, known as the basic Markowitz model, is studied to obtain the optimum portfolio. Building on the basic Markowitz model, the Mean-Variance model with a risk-free asset is then studied. Both models are used to analyze investment portfolio optimization for several IDX30 stocks. It is assumed that a proportion of 10% is invested in a risk-free asset, namely a deposit yielding a return of 7% per year. The analysis of the investment portfolio for the five selected stocks shows that the efficient frontier of Mean-Variance portfolio optimization with a risk-free asset lies above that of Mean-Variance optimization without a risk-free asset. This indicates that an investment portfolio combining a risk-free asset with risky assets is more profitable than a portfolio consisting of risky assets only.

  8. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...

  9. Non-Linear Transaction Costs Inclusion in Mean-Variance Optimization

    Directory of Open Access Journals (Sweden)

    Christian Johannes Zimmer

    2005-12-01

    Full Text Available In this article we propose a new way to include transaction costs into a mean-variance portfolio optimization. We consider brokerage fees, bid/ask spread and the market impact of the trade. A pragmatic algorithm is proposed, which approximates the optimal portfolio, and we can show that it converges in the absence of restrictions. Using Brazilian financial market data we compare our approximation algorithm with the results of a non-linear optimizer.

  10. Homogeneity tests for variances and mean test under heterogeneity conditions in a single way ANOVA method

    International Nuclear Information System (INIS)

    Morales P, J.R.; Avila P, P.

    1996-01-01

    If we consider the maximum permissible levels established for oysters, collecting oysters at the four stations of the El Chijol Channel (Veracruz, Mexico), as well as along the channel itself, should be prohibited, because the metal concentrations studied exceed these limits. In this case the application of Welch tests was not necessary. For the water hyacinth, the treatment means were unequal for Fe, Cu, Ni, and Zn. This case is more illustrative, since the conclusion was reached by applying Welch tests to treatments with heterogeneous variances. (Author)

  11. Time Consistent Strategies for Mean-Variance Asset-Liability Management Problems

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2013-01-01

    Full Text Available This paper studies the optimal time-consistent investment strategies in multiperiod asset-liability management problems under the mean-variance criterion. By applying the time-consistent model of Chen et al. (2013) and employing a dynamic programming technique, we derive two time-consistent policies for asset-liability management problems in a market with and without a riskless asset, respectively. We show that the presence of liability does affect the optimal strategy. More specifically, liability leads to a parallel shift of the optimal time-consistent investment policy. Moreover, for an arbitrarily risk-averse investor (under the variance criterion) with liability, the time-diversification effects can be ignored in a market with a riskless asset; however, they should be considered in a market without any riskless asset.

  12. Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump

    International Nuclear Information System (INIS)

    Kharroubi, Idris; Lim, Thomas; Ngoupeyou, Armand

    2013-01-01

    In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory

  13. Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump

    Energy Technology Data Exchange (ETDEWEB)

    Kharroubi, Idris, E-mail: kharroubi@ceremade.dauphine.fr [Université Paris Dauphine, CEREMADE, CNRS UMR 7534 (France); Lim, Thomas, E-mail: lim@ensiie.fr [Université d’Evry and ENSIIE, Laboratoire d’Analyse et Probabilités (France); Ngoupeyou, Armand, E-mail: armand.ngoupeyou@univ-paris-diderot.fr [Université Paris 7, Laboratoire de Probabilités et Modèles Aléatoires (France)

    2013-12-15

    In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory.

  14. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.

    Science.gov (United States)

    Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L

    2012-12-01

    The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least square (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied on the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).

  15. Risk-sensitivity and the mean-variance trade-off: decision making in sensorimotor control.

    Science.gov (United States)

    Nagengast, Arne J; Braun, Daniel A; Wolpert, Daniel M

    2011-08-07

    Numerous psychophysical studies suggest that the sensorimotor system chooses actions that optimize the average cost associated with a movement. Recently, however, violations of this hypothesis have been reported in line with economic theories of decision-making that not only consider the mean payoff, but are also sensitive to risk, that is the variability of the payoff. Here, we examine the hypothesis that risk-sensitivity in sensorimotor control arises as a mean-variance trade-off in movement costs. We designed a motor task in which participants could choose between a sure motor action that resulted in a fixed amount of effort and a risky motor action that resulted in a variable amount of effort that could be either lower or higher than the fixed effort. By changing the mean effort of the risky action while experimentally fixing its variance, we determined indifference points at which participants chose equiprobably between the sure, fixed amount of effort option and the risky, variable effort option. Depending on whether participants accepted a variable effort with a mean that was higher, lower or equal to the fixed effort, they could be classified as risk-seeking, risk-averse or risk-neutral. Most subjects were risk-sensitive in our task consistent with a mean-variance trade-off in effort, thereby, underlining the importance of risk-sensitivity in computational models of sensorimotor control.

  16. Smoothing-Norm Preconditioning for Regularizing Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Toke Koldborg

    2006-01-01

    take into account a smoothing norm for the solution. This technique is well established for CGLS, but it does not immediately carry over to minimum-residual methods when the smoothing norm is a seminorm or a Sobolev norm. We develop a new technique which works for any smoothing norm of the form $\\|L...

  17. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
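
    To make the variance-mean models concrete, the sketch below fits the quadratic variance-mean relation by naive least squares on per-gene sample means and variances. This plug-in fit is precisely the kind of naive estimator that the article shows to be biased by the errors-in-variables effect; it is included only to show what is being estimated, using simulated data and an assumed constant coefficient of variation.

```python
import numpy as np

def fit_quadratic_variance_mean(sample_means, sample_vars):
    """Naive least-squares fit of the quadratic variance-mean model
        Var = a + b * mean^2
    using per-gene sample means and variances. With few replicates the sample
    means are noisy, so this plug-in fit is biased (errors-in-variables)."""
    sample_means = np.asarray(sample_means, float)
    X = np.column_stack([np.ones_like(sample_means), sample_means ** 2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(sample_vars, float), rcond=None)
    return coef   # (a, b); b equals the squared coefficient of variation when a = 0

rng = np.random.default_rng(3)
true_means = rng.uniform(1.0, 50.0, size=2000)
cv = 0.2                                            # assumed constant coefficient of variation
data = rng.normal(true_means, cv * true_means, size=(3, 2000))   # 3 replicates per gene
a_hat, b_hat = fit_quadratic_variance_mean(data.mean(axis=0), data.var(axis=0, ddof=1))
print(a_hat, b_hat)   # compare b_hat with the true cv**2 = 0.04; the naive fit is biased
```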

  18. Variance stabilization for computing and comparing grand mean waveforms in MEG and EEG.

    Science.gov (United States)

    Matysiak, Artur; Kordecki, Wojciech; Sielużycki, Cezary; Zacharias, Norman; Heil, Peter; König, Reinhard

    2013-07-01

    Grand means of time-varying signals (waveforms) across subjects in magnetoencephalography (MEG) and electroencephalography (EEG) are commonly computed as arithmetic averages and compared between conditions, for example, by subtraction. However, the prerequisite for these operations, homogeneity of the variance of the waveforms in time, and for most common parametric statistical tests also between conditions, is rarely met. We suggest that the heteroscedasticity observed instead results because waveforms may differ by factors and additive terms and follow a mixed model. We propose to apply the asinh-transformation to stabilize the variance in such cases. We demonstrate the homogeneous variance and the normal distributions of data achieved by this transformation using simulated waveforms, and we apply it to real MEG data and show its benefits. The asinh-transformation is thus an essential and useful processing step prior to computing and comparing grand mean waveforms in MEG and EEG. Copyright © 2013 Society for Psychophysiological Research.
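
    A minimal sketch of the transformation step is given below: subject waveforms that differ by multiplicative factors are passed through numpy's arcsinh, and the spread of the per-timepoint variances shrinks. The simulated waveforms, the scale parameter and the lognormal gains are all assumptions made for illustration, not part of the published recipe.

```python
import numpy as np

def stabilise(waveforms, scale=1.0):
    """Apply the asinh transformation element-wise to an array of waveforms
    (subjects x time). Like a log, asinh compresses large amplitudes, but it
    stays defined at zero and for negative values. `scale` controls where the
    transform bends from nearly linear to nearly logarithmic (a tuning choice)."""
    return np.arcsinh(np.asarray(waveforms, float) / scale)

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)
gains = rng.lognormal(mean=0.0, sigma=1.0, size=(20, 1))       # subject-specific factors
raw = gains * np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.1, (20, 500))

# ratio of largest to smallest per-timepoint variance across subjects
print(raw.var(axis=0).max() / raw.var(axis=0).min())           # strongly heteroscedastic
z = stabilise(raw)
print(z.var(axis=0).max() / z.var(axis=0).min())               # the ratio drops after the transform
```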

  19. Computation of mean and variance of the radiotherapy dose for PCA-modeled random shape and position variations of the target.

    Science.gov (United States)

    Budiarto, E; Keijzer, M; Storchi, P R M; Heemink, A W; Breedveld, S; Heijmen, B J M

    2014-01-20

    Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements.

  20. Computation of mean and variance of the radiotherapy dose for PCA-modeled random shape and position variations of the target

    International Nuclear Information System (INIS)

    Budiarto, E; Keijzer, M; Heemink, A W; Storchi, P R M; Breedveld, S; Heijmen, B J M

    2014-01-01

    Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements. (paper)

  1. A new media optimizer based on the mean-variance model

    Directory of Open Access Journals (Sweden)

    Pedro Jesus Fernandez

    2007-01-01

    Full Text Available In the financial markets, there is a well-established portfolio optimization model called the generalized mean-variance model (or generalized Markowitz model). This model considers that a typical investor, while expecting returns to be high, also expects returns to be as certain as possible. In this paper we introduce a new media optimization system based on the mean-variance model, a novel approach in media planning. After presenting the model in its full generality, we discuss possible advantages of the mean-variance paradigm, such as its flexibility in modeling the optimization problem, its ability to deal with many media performance indices (satisfying most media plan needs) and, most important, the property of diversifying the media portfolios in a natural way, without the need to set up ad hoc constraints to enforce diversification.

  2. A Method for Low-Delay Pitch Tracking and Smoothing

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    ... In the second step, a Kalman filter is used to smooth the estimates and separate the pitch into a slowly varying component and a rapidly varying component. The former represents the mean pitch while the latter represents vibrato, slides and other fast changes. The method is intended for use in applications ... that require fast and sample-by-sample estimates, like tuners for musical instruments, transcription tasks requiring details like vibrato, and real-time tracking of voiced speech.

  3. Mean-variance model for portfolio optimization with background risk based on uncertainty theory

    Science.gov (United States)

    Zhai, Jia; Bai, Manying

    2018-04-01

    The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction cost, based on uncertainty theory. In the portfolio selection problem, security returns and asset liquidity are modeled as uncertain variables because of unexpected incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the characteristics of the portfolio frontier under independently additive background risk. In addition, we discuss some effects of background risk and the liquidity constraint on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.

  4. A family-based joint test for mean and variance heterogeneity for quantitative traits.

    Science.gov (United States)

    Cao, Ying; Maxwell, Taylor J; Wei, Peng

    2015-01-01

    Traditional quantitative trait locus (QTL) analysis focuses on identifying loci associated with mean heterogeneity. Recent research has discovered loci associated with phenotype variance heterogeneity (vQTL), which is important in studying genetic association with complex traits, especially for identifying gene-gene and gene-environment interactions. While several tests have been proposed to detect vQTL for unrelated individuals, there are no tests for related individuals, commonly seen in family-based genetic studies. Here we introduce a likelihood ratio test (LRT) for identifying mean and variance heterogeneity simultaneously or for either effect alone, adjusting for covariates and family relatedness using a linear mixed effect model approach. The LRT test statistic for normally distributed quantitative traits approximately follows χ²-distributions. To correct for inflated Type I error for non-normally distributed quantitative traits, we propose a parametric bootstrap-based LRT that removes the best linear unbiased prediction (BLUP) of family random effect. Simulation studies show that our family-based test controls Type I error and has good power, while Type I error inflation is observed when family relatedness is ignored. We demonstrate the utility and efficiency gains of the proposed method using data from the Framingham Heart Study to detect loci associated with body mass index (BMI) variability. © 2014 John Wiley & Sons Ltd/University College London.

  5. Mean-Variance Portfolio Selection with a Fixed Flow of Investment in ...

    African Journals Online (AJOL)

    We consider a mean-variance portfolio selection problem for a fixed flow of investment in a continuous time framework. We consider a market structure that is characterized by a cash account, an indexed bond and a stock. We obtain the expected optimal terminal wealth for the investor. We also obtain a closed-form ...

  6. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    Science.gov (United States)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Stock investors also face risk, because daily stock prices fluctuate. To minimize this risk, investors usually form an investment portfolio. A portfolio consisting of several stocks is constructed to obtain the optimal composition of the investment. This paper discusses Mean-Variance optimization of a stock portfolio using a non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse some Islamic stocks in Indonesia. The expected result is the proportion of investment in each Islamic stock analysed.

  7. The effect of sex on the mean and variance of fitness in facultatively sexual rotifers.

    Science.gov (United States)

    Becks, L; Agrawal, A F

    2011-03-01

    The evolution of sex is a classic problem in evolutionary biology. While this topic has been the focus of much theoretical work, there is a serious dearth of empirical data. A simple yet fundamental question is how sex affects the mean and variance in fitness. Despite its importance to the theory, this type of data is available for only a handful of taxa. Here, we report two experiments in which we measure the effect of sex on the mean and variance in fitness in the monogonont rotifer, Brachionus calyciflorus. Compared to asexually derived offspring, we find that sexual offspring have lower mean fitness and less genetic variance in fitness. These results indicate that, at least in the laboratory, there are both short- and long-term disadvantages associated with sexual reproduction. We briefly review the other available data and highlight the need for future work. © 2010 The Authors. Journal of Evolutionary Biology © 2010 European Society For Evolutionary Biology.

  8. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  9. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
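
    As a rough illustration of smoothing-based sensitivity analysis, the sketch below fits a LOESS curve of the model output on each input separately and reports the fraction of output variance captured by the fit. This one-variable-at-a-time score is only in the spirit of the stepwise procedures described above, not the authors' algorithm; the test function and the `frac` smoothing parameter are invented.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def loess_sensitivity(X, y, frac=0.4):
    """Crude smoothing-based sensitivity measure: for each input x_j, fit a
    LOESS curve of y on x_j and report Var(fitted) / Var(y). Inputs with no
    effect on y give scores near zero."""
    y = np.asarray(y, float)
    scores = []
    for j in range(X.shape[1]):
        fit = lowess(y, X[:, j], frac=frac, return_sorted=False)
        scores.append(np.var(fit) / np.var(y))
    return np.array(scores)

rng = np.random.default_rng(5)
n = 2000
X = rng.uniform(-np.pi, np.pi, size=(n, 3))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)   # x3 is inert
print(loess_sensitivity(X, y))   # large for x1 and x2, near zero for x3
```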

  10. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

    Full Text Available Ionospheric delay is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach agree well with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²), with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the
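
    The following sketch shows ordinary Kriging with a fixed exponential covariance model and a nugget term standing in for the observational-error variance. In the proposed method these variance components are estimated rather than assumed, so the sill, range and nugget values here (and the toy TEC data) are purely illustrative.

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, sill=1.0, rng_len=5.0, nugget=0.1):
    """Ordinary Kriging with the exponential covariance C(h) = sill * exp(-h/rng_len)
    and a nugget added on the diagonal (standing in for observational noise)."""
    xy, xy_new = np.atleast_2d(xy), np.atleast_2d(xy_new)
    z = np.asarray(z, float)
    n = len(z)

    def cov(a, b):
        h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-h / rng_len)

    # Kriging system with a Lagrange multiplier enforcing unbiasedness
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(xy, xy) + nugget * np.eye(n)
    K[n, :n] = K[:n, n] = 1.0
    K[n, n] = 0.0

    k = np.empty((n + 1, len(xy_new)))
    k[:n] = cov(xy, xy_new)
    k[n] = 1.0
    w = np.linalg.solve(K, k)
    return w[:n].T @ z                      # predicted values at xy_new

# toy TEC field sampled at scattered stations (values in TECU, made up)
rng = np.random.default_rng(6)
stations = rng.uniform(0, 10, size=(30, 2))
tec = 25 + 3 * np.sin(stations[:, 0] / 3) + rng.normal(0, 0.5, 30)
grid = np.array([[2.0, 5.0], [7.5, 1.0]])
print(ordinary_kriging(stations, tec, grid))
```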

  11. Robust Markowitz mean-variance portfolio selection under ambiguous covariance matrix *

    OpenAIRE

    Ismail, Amine; Pham, Huyên

    2016-01-01

    This paper studies a robust continuous-time Markowitz portfolio selection problem where the model uncertainty carries on the covariance matrix of multiple risky assets. This problem is formulated into a min-max mean-variance problem over a set of non-dominated probability measures that is solved by a McKean-Vlasov dynamic programming approach, which allows us to characterize the solution in terms of a Bellman-Isaacs equation in the Wasserstein space of probability measures. We provide expli...

  12. Regularization by fractional filter methods and data smoothing

    International Nuclear Information System (INIS)

    Klann, E; Ramlau, R

    2008-01-01

    This paper is concerned with the regularization of linear ill-posed problems by a combination of data smoothing and fractional filter methods. For the data smoothing, a wavelet shrinkage denoising is applied to the noisy data with known error level δ. For the reconstruction, an approximation to the solution of the operator equation is computed from the data estimate by fractional filter methods. These fractional methods are based on the classical Tikhonov and Landweber method, but avoid, at least partially, the well-known drawback of oversmoothing. Convergence rates as well as numerical examples are presented

  13. Identification of melanoma cells: a method based in mean variance of signatures via spectral densities.

    Science.gov (United States)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Angulo-Molina, Aracely

    2017-04-01

    In this paper, a new methodology to detect and differentiate melanoma cells from normal cells through the averaged variances of 1D-signatures calculated with a binary mask is presented. The sample images were obtained from histological sections of mouse melanoma tumor, 4 [Formula: see text] in thickness, and contrasted with normal cells. The results show that melanoma cells present a well-defined range of averaged variance values obtained from the signatures under the four conditions used.

  14. Is There a Common Summary Statistical Process for Representing the Mean and Variance? A Study Using Illustrations of Familiar Items.

    Science.gov (United States)

    Yang, Yi; Tokita, Midori; Ishiguchi, Akira

    2018-01-01

    A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed.

  15. Portfolios dominating indices: Optimization with second-order stochastic dominance constraints vs. minimum and mean variance portfolios

    OpenAIRE

    Keçeci, Neslihan Fidan; Kuzmenko, Viktor; Uryasev, Stan

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  16. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    OpenAIRE

    Neslihan Fidan Keçeci; Viktor Kuzmenko; Stan Uryasev

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  17. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    Noack, K.

    1982-01-01

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we have formulated this method in inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and have derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles and draw conclusions to improve this method

  18. A New Feature Selection Algorithm Based on the Mean Impact Variance

    Directory of Open Access Journals (Sweden)

    Weidong Cheng

    2014-01-01

    Full Text Available The selection of fewer or more representative features from multidimensional features is important when the artificial neural network (ANN) algorithm is used as a classifier. In this paper, a new feature selection method called the mean impact variance (MIVAR) method is proposed to determine the feature that is more suitable for classification. Moreover, this method is constructed on the basis of the training process of the ANN algorithm. To verify the effectiveness of the proposed method, the MIVAR value is used to rank the multidimensional features of the bearing fault diagnosis. In detail, (1) 70-dimensional all waveform features are extracted from a rolling bearing vibration signal with four different operating states, (2) the corresponding MIVAR values of all 70-dimensional features are calculated to rank all features, (3) 14 groups of 10-dimensional features are separately generated according to the ranking results and the principal component analysis (PCA) algorithm and a back propagation (BP) network is constructed, and (4) the validity of the ranking result is proven by training this BP network with these seven groups of 10-dimensional features and by comparing the corresponding recognition rates. The results prove that the features with larger MIVAR value can lead to higher recognition rates.
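
    The ranking step described above can be illustrated with a small sketch. The following Python fragment is not the authors' implementation; it assumes a generic trained model exposed as a `predict`-style callable and ranks features by the variance of the output impacts observed when each input feature is perturbed by ±10%, which is one common reading of mean-impact-style criteria. All names and numbers are hypothetical.

```python
import numpy as np

def mivar_ranking(model_predict, X, delta=0.1):
    """Rank features by the variance of their impact on a trained model's output.

    model_predict : callable mapping an (n_samples, n_features) array to predictions.
    X             : inputs used to probe the model.
    delta         : relative perturbation applied to each feature (here +/-10%).
    Illustrative sketch only, not the exact MIVAR formulation of the paper.
    """
    n_samples, n_features = X.shape
    scores = np.empty(n_features)
    for j in range(n_features):
        X_up, X_down = X.copy(), X.copy()
        X_up[:, j] *= (1.0 + delta)      # increase feature j by 10%
        X_down[:, j] *= (1.0 - delta)    # decrease feature j by 10%
        impact = model_predict(X_up) - model_predict(X_down)  # impact value per sample
        scores[j] = np.var(impact)       # variance of the impact values
    return np.argsort(scores)[::-1], scores  # indices sorted from most to least influential

# Toy usage with a hypothetical linear "network" standing in for a trained ANN.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w = np.array([3.0, 0.0, 1.0, 0.0, 0.5])
order, scores = mivar_ranking(lambda A: A @ w, X)
print(order)   # features with larger weights should rank first
```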

  19. Linear–Quadratic Mean-Field-Type Games: A Direct Method

    Directory of Open Access Journals (Sweden)

    Tyrone E. Duncan

    2018-02-01

    Full Text Available In this work, a multi-person mean-field-type game is formulated and solved that is described by a linear jump-diffusion system of mean-field type and a quadratic cost functional involving the second moments, the square of the expected value of the state, and the control actions of all decision-makers. We propose a direct method to solve the game, team, and bargaining problems. This solution approach does not require solving the Bellman–Kolmogorov equations or backward–forward stochastic differential equations of Pontryagin’s type. The proposed method can be easily implemented by beginners and engineers who are new to the emerging field of mean-field-type game theory. The optimal strategies for decision-makers are shown to be in a state-and-mean-field feedback form. The optimal strategies are given explicitly as a sum of the well-known linear state-feedback strategy for the associated deterministic linear–quadratic game problem and a mean-field feedback term. The equilibrium costs of the decision-makers are explicitly derived using a simple direct method. Moreover, the equilibrium cost is a weighted sum of the initial variance and an integral of a weighted variance of the diffusion and the jump process. Finally, the method is used to compute global optimum strategies as well as saddle point strategies and a Nash bargaining solution in state-and-mean-field feedback form.

  20. A Note on the Kinks at the Mean Variance Frontier

    OpenAIRE

    Vörös, J.; Kriens, J.; Strijbosch, L.W.G.

    1997-01-01

    In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return and the converse of this statement is false. For the existence of kinks at the efficient frontier the sufficient condition is given here and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean variance frontier.

  1. Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2013-01-01

    Roč. 7, č. 3 (2013), s. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others:AVČR a CONACyT(CZ) 171396 Institutional support: RVO:67985556 Keywords : Discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf

  2. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  3. Neutron flux calculation by means of Monte Carlo methods

    International Nuclear Information System (INIS)

    Barz, H.U.; Eichhorn, M.

    1988-01-01

    In this report a survey of modern neutron flux calculation procedures by means of Monte Carlo methods is given. Due to the progress in the development of variance reduction techniques and the improvements of computational techniques this method is of increasing importance. The basic ideas in application of Monte Carlo methods are briefly outlined. In more detail, various possibilities of non-analog games and estimation procedures are presented, and problems in the field of optimizing the variance reduction techniques are discussed. In the last part some important international Monte Carlo codes and the authors' own codes are listed and special applications are described. (author)

  4. A Fourier transform method for the selection of a smoothing interval

    International Nuclear Information System (INIS)

    Kekre, H.B.; Madan, V.K.; Bairi, B.R.

    1989-01-01

    A novel method for the selection of a smoothing interval for the widely used Savitzky and Golay smoothing filter is proposed. Complementary bandwidths for the nuclear spectral data and the smoothing filter are defined. The criterion for the selection of the smoothing interval is based on matching the bandwidth of the spectral data to that of the filter. Using the above method, five real observed spectral peaks of different full widths at half maximum, viz. 23.5, 19.5, 17, 8.5 and 6.5 channels, were smoothed and the results are presented. (orig.)
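
    A rough numerical illustration of the bandwidth-matching idea is sketched below. It is not the authors' procedure; it simply compares half-power bandwidths (estimated via the FFT) of a synthetic noisy peak and of Savitzky-Golay filters with different window lengths, and it assumes scipy is available. The peak shape, polynomial order, and the half-power criterion are all arbitrary choices for the example.

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

# Synthetic "spectral peak": Gaussian of FWHM ~ 17 channels on a flat background, Poisson noise.
rng = np.random.default_rng(1)
x = np.arange(256)
sigma = 17.0 / 2.355
data = rng.poisson(1000.0 * np.exp(-0.5 * ((x - 128) / sigma) ** 2) + 20.0).astype(float)

def half_power_bandwidth(signal, n_fft=1024):
    """Frequency (cycles/channel) at which the magnitude spectrum falls to half its peak."""
    mag = np.abs(np.fft.rfft(signal, n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0)
    above = np.nonzero(mag >= 0.5 * mag.max())[0]
    return freqs[above[-1]]

data_bw = half_power_bandwidth(data)

# Pick the smoothing interval whose filter bandwidth best matches the data bandwidth.
best = None
for window in range(5, 61, 2):                      # odd window lengths only
    h = savgol_coeffs(window, polyorder=2)          # impulse response of the smoothing filter
    filt_bw = half_power_bandwidth(h)
    if best is None or abs(filt_bw - data_bw) < abs(best[1] - data_bw):
        best = (window, filt_bw)

window = best[0]
smoothed = savgol_filter(data, window_length=window, polyorder=2)
print("chosen smoothing interval:", window)
```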

  5. Multilevel covariance regression with correlated random effects in the mean and variance structure.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2017-09-01

    Multivariate regression methods generally assume a constant covariance matrix for the observations. In case a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches can be restrictive in the literature. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can be different across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewedly distributed responses. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  7. Determinations of dose mean of specific energy for conventional x-rays by variance-measurements

    International Nuclear Information System (INIS)

    Forsberg, B.; Jensen, M.; Lindborg, L.; Samuelson, G.

    1978-05-01

    The dose mean value (zeta) of specific energy of a single event distribution is related to the variance of a multiple event distribution in a simple way. It is thus possible to determine zeta from measurements in high dose rates through observations of the variations in the ionization current from, for instance, an ionization chamber, if other parameters contribute negligibly to the total variance. With this method it has earlier been possible to obtain results down to about 10 nm in a beam of 60Co γ rays, which is one order of magnitude smaller than the sizes obtainable with the traditional technique. This advantage, together with the suggestion that zeta could be an important parameter in radiobiology, makes further studies of the applications of the technique worthwhile. So far only data from measurements in beams of a radioactive nuclide have been reported. This paper contains results from measurements in a highly stabilized X-ray beam. The preliminary analysis shows that the variance technique has given reasonable results for object sizes in the region of 0.08 μm to 20 μm (100 kV, 1.6 Al, HVL 0.14 mm Cu). The results were obtained with a proportional counter except for the larger object sizes, where an ionization chamber was used. The measurements were performed at dose rates between 1 Gy/h and 40 Gy/h. (author)
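
    The single relation underlying the variance technique can be stated compactly. The block below is a summary of the standard microdosimetric relation for Poisson-distributed event numbers, not a quotation from the report; the symbols follow common usage.

```latex
% \bar{z} and \sigma^2 are the mean and variance of the multi-event specific
% energy measured at high dose rate; \bar{z}_D is the dose-mean specific
% energy of the single-event distribution with moments \overline{z_1}, \overline{z_1^2}.
\[
  \bar{z}_D \;=\; \frac{\overline{z_1^{\,2}}}{\overline{z_1}} \;=\; \frac{\sigma^{2}}{\bar{z}} ,
\]
% so repeated current (or charge) readings give \sigma^2 and \bar{z} directly,
% and hence \bar{z}_D, provided other noise sources contribute negligibly
% to the observed variance.
```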

  8. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  9. Age-dependent changes in mean and variance of gene expression across tissues in a twin cohort.

    Science.gov (United States)

    Viñuela, Ana; Brown, Andrew A; Buil, Alfonso; Tsai, Pei-Chien; Davies, Matthew N; Bell, Jordana T; Dermitzakis, Emmanouil T; Spector, Timothy D; Small, Kerrin S

    2018-02-15

    Changes in the mean and variance of gene expression with age have consequences for healthy aging and disease development. Age-dependent changes in phenotypic variance have been associated with a decline in regulatory functions leading to increase in disease risk. Here, we investigate age-related mean and variance changes in gene expression measured by RNA-seq of fat, skin, whole blood and derived lymphoblastoid cell lines (LCLs) expression from 855 adult female twins. We see evidence of up to 60% of age effects on transcription levels shared across tissues, and 47% of those on splicing. Using gene expression variance and discordance between genetically identical MZ twin pairs, we identify 137 genes with age-related changes in variance and 42 genes with age-related discordance between co-twins; implying the latter are driven by environmental effects. We identify four eQTLs whose effect on expression is age-dependent (FDR 5%). Combined, these results show a complicated mix of environmental and genetically driven changes in expression with age. Using the twin structure in our data, we show that additive genetic effects explain considerably more of the variance in gene expression than aging, but less that other environmental factors, potentially explaining why reliable expression-derived biomarkers for healthy-aging have proved elusive compared with those derived from methylation. © The Author(s) 2017. Published by Oxford University Press.

  10. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course

  11. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Full Text Available Smoothing noisy data is commonly encountered in the engineering domain, and currently robust penalized regression spline models are perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing nonlinear trends in the data and effectively alleviating the disturbance from outliers. Against such a background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are reelaborated starting from their origins, with their derivation process reformulated and the corresponding algorithms reorganized under a unified framework. Performances of these two estimators are thoroughly evaluated from the aspects of fitting accuracy, robustness, and execution time on the MATLAB platform. Elaborate comparative experiments demonstrate that robust penalized spline smoothing methods possess the capability of resistance to the noise effect compared with the nonrobust penalized LS spline regression method. Furthermore, the M-estimator exerts stable performance only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, but consumes more execution time. These findings can serve as guidance for the selection of an appropriate approach for smoothing noisy data.
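
    To make the comparison concrete, here is a minimal numpy sketch of an M-type penalized spline fit: a quadratic truncated-power spline basis with a ridge penalty on the knot coefficients, fitted by iteratively reweighted least squares with Huber weights. It is a simplified stand-in for the estimators compared in the paper, not their MATLAB implementation; the basis, penalty, and tuning constants are generic choices.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber psi(r)/r weights; residuals r are assumed pre-scaled by a robust scale."""
    a = np.abs(r)
    w = np.ones_like(r)
    w[a > c] = c / a[a > c]
    return w

def robust_pspline(x, y, n_knots=20, lam=1.0, degree=2, n_iter=20):
    """M-type penalized spline smoother (truncated-power basis + ridge penalty + Huber IRLS)."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    # Design matrix: polynomial part plus truncated power functions at the knots.
    B = np.column_stack([x**d for d in range(degree + 1)] +
                        [np.clip(x - k, 0, None)**degree for k in knots])
    # Penalize only the truncated-power (knot) coefficients.
    D = np.diag([0.0] * (degree + 1) + [1.0] * n_knots)
    w = np.ones_like(y)
    for _ in range(n_iter):
        W = np.diag(w)
        beta = np.linalg.solve(B.T @ W @ B + lam * D, B.T @ W @ y)
        r = y - B @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust (MAD) scale
        w = huber_weights(r / max(scale, 1e-12))
    return B @ beta

# Noisy sine with a few gross outliers.
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)
y[::25] += 3.0                      # contaminate every 25th observation
fit = robust_pspline(x, y)
```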

  12. An adaptive method for γ spectra smoothing

    International Nuclear Information System (INIS)

    Xiao Gang; Zhou Chunlin; Li Tiantuo; Han Feng; Di Yuming

    2001-01-01

    An adaptive wavelet method and a multinomial fitting gliding method are each used for smoothing γ spectra, and then the FWHM of the 1332 keV peak of 60Co and the activities of a 238U standard specimen are calculated. The calculated results show that the adaptive wavelet method is better than the other method
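
    A generic wavelet-shrinkage smoother of the kind referred to above can be sketched with PyWavelets. The threshold rule here (universal VisuShrink threshold with soft thresholding and a db4 wavelet) is a common default and only an assumption, not the adaptive rule of the paper; the spectrum is synthetic.

```python
import numpy as np
import pywt

def wavelet_smooth(spectrum, wavelet="db4", level=4):
    """Smooth a 1-D spectrum by soft-thresholding its detail wavelet coefficients."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(spectrum)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(spectrum)]

# Toy gamma-ray-like spectrum: two Gaussian peaks on a decaying background, Poisson noise.
rng = np.random.default_rng(3)
ch = np.arange(1024)
truth = (500 * np.exp(-0.5 * ((ch - 300) / 6) ** 2)
         + 200 * np.exp(-0.5 * ((ch - 700) / 8) ** 2)
         + 50 * np.exp(-ch / 400))
noisy = rng.poisson(truth).astype(float)
smooth = wavelet_smooth(noisy)
```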

  13. The dynamics of integrate-and-fire: mean versus variance modulations and dependence on baseline parameters.

    Science.gov (United States)

    Pressley, Joanna; Troyer, Todd W

    2011-05-01

    The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
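
    For readers who want to reproduce the flavour of these response curves, the following is a bare-bones Euler-Maruyama simulation of an LIF neuron driven by white noise whose mean (or variance) is sinusoidally modulated. All parameter values are arbitrary illustrations, not those used in the study.

```python
import numpy as np

def lif_spikes(mod_mean=True, f_mod=10.0, T=20.0, dt=1e-4, seed=4):
    """Simulate a leaky integrate-and-fire neuron with sinusoidally modulated input.

    mod_mean=True modulates the mean input current; False modulates the noise variance.
    Returns spike times in seconds. Parameters are illustrative only.
    """
    rng = np.random.default_rng(seed)
    tau, v_th, v_reset = 0.02, 1.0, 0.0          # membrane time constant (s), threshold, reset
    mu0, sigma0, depth = 0.8, 0.5, 0.3           # baseline mean, noise amplitude, modulation depth
    n = int(T / dt)
    t = np.arange(n) * dt
    mod = 1.0 + depth * np.sin(2 * np.pi * f_mod * t)
    v, spikes = 0.0, []
    for i in range(n):
        mu = mu0 * (mod[i] if mod_mean else 1.0)
        sigma = sigma0 * (1.0 if mod_mean else np.sqrt(mod[i]))
        v += (-v + mu) * dt / tau + sigma * np.sqrt(dt / tau) * rng.normal()
        if v >= v_th:                            # threshold crossing: emit spike, reset
            spikes.append(t[i])
            v = v_reset
    return np.array(spikes)

spikes = lif_spikes(mod_mean=True)
print("mean firing rate (Hz):", len(spikes) / 20.0)
```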

  14. Comparison of Global Distributions of Zonal-Mean Gravity Wave Variance Inferred from Different Satellite Instruments

    Science.gov (United States)

    Preusse, Peter; Eckermann, Stephen D.; Offermann, Dirk; Jackman, Charles H. (Technical Monitor)

    2000-01-01

    Gravity wave temperature fluctuations acquired by the CRISTA instrument are compared to previous estimates of zonal-mean gravity wave temperature variance inferred from the LIMS, MLS and GPS/MET satellite instruments during northern winter. Careful attention is paid to the range of vertical wavelengths resolved by each instrument. Good agreement between CRISTA data and previously published results from LIMS, MLS and GPS/MET are found. Key latitudinal features in these variances are consistent with previous findings from ground-based measurements and some simple models. We conclude that all four satellite instruments provide reliable global data on zonal-mean gravity wave temperature fluctuations throughout the middle atmosphere.

  15. Relative variance of the mean-squared pressure in multimode media: rehabilitating former approaches.

    Science.gov (United States)

    Monsef, Florian; Cozza, Andrea; Rodrigues, Dominique; Cellard, Patrick; Durocher, Jean-Noel

    2014-11-01

    The commonly accepted model for the relative variance of transmission functions in room acoustics, derived by Weaver, aims at including the effects of correlation between eigenfrequencies. This model is based on an analytical expression of the relative variance derived by means of an approximated correlation function. The relevance of the approximation used for modeling such correlation is questioned here. Weaver's model was motivated by the fact that earlier models derived by Davy and Lyon assumed independent eigenfrequencies and led to an overestimation with respect to relative variances found in practice. It is shown here that this overestimation is due to an inadequate truncation of the modal expansion, and to an improper choice of the frequency range over which ensemble averages of the eigenfrequencies are defined. An alternative definition is proposed, settling the inconsistency; predicted relative variances are found to be in good agreement with experimental data. These results rehabilitate former approaches that were based on independence assumptions between eigenfrequencies. Some former studies showed that simpler correlation models could be used to predict the statistics of some field-related physical quantity at low modal overlap. The present work confirms that this is also the case when dealing with transmission functions.

  16. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.
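
    One standard way to study this question numerically is the conjugate gamma-Poisson setup sketched below, where the prior mean and variance fix the gamma hyperparameters and the posterior mean is compared with the maximum likelihood estimate. This is only a generic illustration of the sensitivity question, not the simulation design of the report; the rates, test durations, and priors are hypothetical.

```python
import numpy as np

def bayes_vs_mle(true_rate, prior_mean, prior_var, hours, n_trials=2000, seed=9):
    """Compare posterior-mean and maximum-likelihood failure-rate estimates.

    Gamma(alpha, beta) prior on the failure rate (per hour), Poisson failure counts.
    Prior mean = alpha/beta and prior variance = alpha/beta**2 fix (alpha, beta).
    """
    beta = prior_mean / prior_var
    alpha = prior_mean * beta
    rng = np.random.default_rng(seed)
    failures = rng.poisson(true_rate * hours, size=n_trials)
    mle = failures / hours                              # maximum likelihood estimate
    posterior_mean = (alpha + failures) / (beta + hours)  # conjugate posterior mean
    return np.mean((mle - true_rate) ** 2), np.mean((posterior_mean - true_rate) ** 2)

true_rate = 1e-4                                  # failures per device-hour (hypothetical)
for prior_mean in (1e-4, 5e-4):                   # accurate vs pessimistic prior mean
    for prior_var in (1e-9, 1e-7):                # tight vs diffuse prior
        mse_mle, mse_bayes = bayes_vs_mle(true_rate, prior_mean, prior_var, hours=2e4)
        print(f"prior mean={prior_mean:g}, var={prior_var:g}: "
              f"MSE(MLE)={mse_mle:.2e}, MSE(Bayes)={mse_bayes:.2e}")
```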

  17. Saddlepoint approximations to the mean and variance of the extended hypergeometric distribution

    NARCIS (Netherlands)

    Eisinga, R.; Pelzer, B.

    2010-01-01

    Conditional inference on 2 x 2 tables with fixed margins and unequal probabilities is based on the extended hypergeometric distribution. If the support of the distribution is large, exact calculation of the conditional mean and variance of the table entry may be computationally demanding. This paper

  18. On robust multi-period pre-commitment and time-consistent mean-variance portfolio optimization

    NARCIS (Netherlands)

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2017-01-01

    textabstractWe consider robust pre-commitment and time-consistent mean-variance optimal asset allocation strategies, that are required to perform well also in a worst-case scenario regarding the development of the asset price. We show that worst-case scenarios for both strategies can be found by

  19. Allometric scaling of population variance with mean body size is predicted from Taylor's law and density-mass allometry.

    Science.gov (United States)

    Cohen, Joel E; Xu, Meng; Schuster, William S F

    2012-09-25

    Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor's law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density-mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship "variance-mass allometry" (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals.
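
    The algebra behind the predicted power law is short enough to state here, with generic constants a, b, c, d rather than the values fitted to the Black Rock Forest data:

```latex
% Taylor's law (TL):            \sigma^{2} = a\,\mu^{b}
% Density-mass allometry (DMA): \mu = c\,M^{d}
% Substituting DMA into TL gives variance-mass allometry (VMA):
\[
  \sigma^{2} \;=\; a\,(c\,M^{d})^{b} \;=\; a\,c^{\,b}\,M^{\,bd},
\]
% i.e. the variance of population density is itself a power law in mean
% individual body mass M, with exponent equal to the product of the TL
% and DMA exponents.
```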

  20. A Non-smooth Newton Method for Multibody Dynamics

    International Nuclear Information System (INIS)

    Erleben, K.; Ortiz, R.

    2008-01-01

    In this paper we deal with the simulation of rigid bodies. Rigid body dynamics have become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contribution of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.

  1. A generalized Fellner-Schall method for smoothing parameter optimization with application to Tweedie location, scale and shape models.

    Science.gov (United States)

    Wood, Simon N; Fasiolo, Matteo

    2017-12-01

    We consider the optimization of smoothing parameters and variance components in models with a regular log likelihood subject to quadratic penalization of the model coefficients, via a generalization of the method of Fellner (1986) and Schall (1991). In particular: (i) we generalize the original method to the case of penalties that are linear in several smoothing parameters, thereby covering the important cases of tensor product and adaptive smoothers; (ii) we show why the method's steps increase the restricted marginal likelihood of the model, that it tends to converge faster than the EM algorithm, or obvious accelerations of this, and investigate its relation to Newton optimization; (iii) we generalize the method to any Fisher regular likelihood. The method represents a considerable simplification over existing methods of estimating smoothing parameters in the context of regular likelihoods, without sacrificing generality: for example, it is only necessary to compute with the same first and second derivatives of the log-likelihood required for coefficient estimation, and not with the third or fourth order derivatives required by alternative approaches. Examples are provided which would have been impossible or impractical with pre-existing Fellner-Schall methods, along with an example of a Tweedie location, scale and shape model which would be a challenge for alternative methods, and a sparse additive modeling example where the method facilitates computational efficiency gains of several orders of magnitude. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2017, The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  2. Asymmetries in conditional mean variance: modelling stock returns by asMA-asQGARCH

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.

    2004-01-01

    We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing financial time series where it has been noted that there is an asymmetric impact of good news and bad

  3. NEpiC: a network-assisted algorithm for epigenetic studies using mean and variance combined signals.

    Science.gov (United States)

    Ruan, Peifeng; Shen, Jing; Santella, Regina M; Zhou, Shuigeng; Wang, Shuang

    2016-09-19

    DNA methylation plays an important role in many biological processes. Existing epigenome-wide association studies (EWAS) have successfully identified aberrantly methylated genes in many diseases and disorders with most studies focusing on analysing methylation sites one at a time. Incorporating prior biological information such as biological networks has been proven to be powerful in identifying disease-associated genes in both gene expression studies and genome-wide association studies (GWAS) but has been under studied in EWAS. Although recent studies have noticed that there are differences in methylation variation in different groups, only a few existing methods consider variance signals in DNA methylation studies. Here, we present a network-assisted algorithm, NEpiC, that combines both mean and variance signals in searching for differentially methylated sub-networks using the protein-protein interaction (PPI) network. In simulation studies, we demonstrate the power gain from using both the prior biological information and variance signals compared to using either of the two or neither information. Applications to several DNA methylation datasets from the Cancer Genome Atlas (TCGA) project and DNA methylation data on hepatocellular carcinoma (HCC) from the Columbia University Medical Center (CUMC) suggest that the proposed NEpiC algorithm identifies more cancer-related genes and generates better replication results. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. The problem of low variance voxels in statistical parametric mapping; a new hat avoids a 'haircut'.

    Science.gov (United States)

    Ridgway, Gerard R; Litvak, Vladimir; Flandin, Guillaume; Friston, Karl J; Penny, Will D

    2012-02-01

    Statistical parametric mapping (SPM) locates significant clusters based on a ratio of signal to noise (a 'contrast' of the parameters divided by its standard error) meaning that very low noise regions, for example outside the brain, can attain artefactually high statistical values. Similarly, the commonly applied preprocessing step of Gaussian spatial smoothing can shift the peak statistical significance away from the peak of the contrast and towards regions of lower variance. These problems have previously been identified in positron emission tomography (PET) (Reimold et al., 2006) and voxel-based morphometry (VBM) (Acosta-Cabronero et al., 2008), but can also appear in functional magnetic resonance imaging (fMRI) studies. Additionally, for source-reconstructed magneto- and electro-encephalography (M/EEG), the problems are particularly severe because sparsity-favouring priors constrain meaningfully large signal and variance to a small set of compactly supported regions within the brain. Acosta-Cabronero et al. (2008) suggested adding noise to background voxels (the 'haircut'), effectively increasing their noise variance, but at the cost of contaminating neighbouring regions with the added noise once smoothed. Following theory and simulations, we propose to modify--directly and solely--the noise variance estimate, and investigate this solution on real imaging data from a range of modalities. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Mean-variance portfolio selection for defined-contribution pension funds with stochastic salary.

    Science.gov (United States)

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier.

  6. Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary

    OpenAIRE

    Chubing Zhang

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier.

  7. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the

  8. Analysis of force variance for a continuous miner drum using the Design of Experiments method

    Energy Technology Data Exchange (ETDEWEB)

    S. Somanchi; V.J. Kecojevic; C.J. Bise [Pennsylvania State University, University Park, PA (United States)

    2006-06-15

    Continuous miners (CMs) are excavating machines designed to extract a variety of minerals by underground mining. The variance in force experienced by the cutting drum is a very important aspect that must be considered during drum design. A uniform variance essentially means that an equal load is applied on the individual cutting bits and this, in turn, enables better cutting action, greater efficiency, and longer bit and machine life. There are certain input parameters used in the drum design whose exact relationships with force variance are not clearly understood. This paper determines (1) the factors that have a significant effect on the force variance of the drum and (2) the values that can be assigned to these factors to minimize the force variance. A computer program, Continuous Miner Drum (CMD), was developed in collaboration with Kennametal, Inc. to facilitate the mechanical design of CM drums. CMD also facilitated data collection for determining significant factors affecting force variance. Six input parameters, namely centre pitch, outer pitch, balance angle, shift angle, set angle and relative angle, were tested at two levels. Trials were configured using the Design of Experiments (DoE) method, where a 2⁶ full-factorial experimental design was selected to investigate the effect of these factors on force variance. Results from the analysis show that all parameters except balance angle, as well as their interactions, significantly affect the force variance.
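
    The effect-screening step of a two-level full-factorial design can be illustrated generically as below. The response values are synthetic and the "true effects" are placeholders standing in for the CMD program output; only the factor names come from the abstract.

```python
import itertools
import numpy as np

factors = ["centre_pitch", "outer_pitch", "balance_angle",
           "shift_angle", "set_angle", "relative_angle"]
# Coded design matrix for a 2^6 full factorial: every combination of -1/+1 levels.
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

# Synthetic force-variance response: a hypothetical linear model plus noise.
rng = np.random.default_rng(5)
true_effects = np.array([4.0, 2.5, 0.1, 1.5, 3.0, 2.0])
response = 10.0 + design @ true_effects + rng.normal(scale=0.5, size=len(design))

# Main effect of each factor = mean response at +1 minus mean response at -1.
for name, col in zip(factors, design.T):
    effect = response[col > 0].mean() - response[col < 0].mean()
    print(f"{name:15s} main effect: {effect:+.2f}")
```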

  9. Multi-Period Mean-Variance Portfolio Selection with Uncertain Time Horizon When Returns Are Serially Correlated

    Directory of Open Access Journals (Sweden)

    Ling Zhang

    2012-01-01

    Full Text Available We study a multi-period mean-variance portfolio selection problem with an uncertain time horizon and serial correlations. Firstly, we embed the nonseparable multi-period optimization problem into a separable quadratic optimization problem with uncertain exit time by employing the embedding technique of Li and Ng (2000). Then we convert the latter into an optimization problem with deterministic exit time. Finally, using the dynamic programming approach, we explicitly derive the optimal strategy and the efficient frontier for the dynamic mean-variance optimization problem. A numerical example with an AR(1) return process is also presented, which shows that both the uncertainty of exit time and the serial correlations of returns have significant impacts on the optimal strategy and the efficient frontier.

  10. Smooth quantile normalization.

    Science.gov (United States)

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
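
    As a concrete reference point, plain quantile normalization applied separately within each biological group, which is the limiting case that qsmooth generalizes, can be written in a few lines of numpy. This sketch is not the qsmooth algorithm itself (qsmooth additionally shrinks group-specific quantiles toward the overall quantiles); the data are made up.

```python
import numpy as np

def quantile_normalize(X):
    """Force every column (sample) of X to share the same empirical distribution."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)       # rank of each value within its column
    mean_quantiles = np.sort(X, axis=0).mean(axis=1)        # average distribution across columns
    return mean_quantiles[ranks]

def groupwise_quantile_normalize(X, groups):
    """Quantile-normalize samples separately within each biological group."""
    Xn = np.empty_like(X, dtype=float)
    for g in np.unique(groups):
        cols = np.flatnonzero(groups == g)
        Xn[:, cols] = quantile_normalize(X[:, cols])
    return Xn

# Toy data: 1000 features x 6 samples, two conditions with a genuine global shift.
rng = np.random.default_rng(6)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(1000, 6))
X[:, 3:] *= 1.5                                   # condition B globally up-shifted
groups = np.array(["A", "A", "A", "B", "B", "B"])
Xn = groupwise_quantile_normalize(X, groups)      # preserves the A-vs-B global difference
```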

  11. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems

  12. Analysis of elastic-plastic problems using edge-based smoothed finite element method

    International Nuclear Information System (INIS)

    Cui, X.Y.; Liu, G.R.; Li, G.Y.; Zhang, G.Y.; Sun, G.Y.

    2009-01-01

    In this paper, an edge-based smoothed finite element method (ES-FEM) is formulated for stress field determination of elastic-plastic problems using triangular meshes, in which smoothing domains associated with the edges of the triangles are used for smoothing operations to improve the accuracy and the convergence rate of the method. The smoothed Galerkin weak form is adopted to obtain the discretized system equations, and the numerical integration becomes a simple summation over the edge-based smoothing domains. The pseudo-elastic method is employed for the determination of stress field and Hencky's total deformation theory is used to define effective elastic material parameters, which are treated as field variables and considered as functions of the final state of stress fields. The effective elastic material parameters are then obtained in an iterative manner based on the strain controlled projection method from the uniaxial material curve. Some numerical examples are investigated and excellent results have been obtained demonstrating the effectivity of the present method.

  13. Stable Control of Firing Rate Mean and Variance by Dual Homeostatic Mechanisms.

    Science.gov (United States)

    Cannon, Jonathan; Miller, Paul

    2017-12-01

    Homeostatic processes that provide negative feedback to regulate neuronal firing rates are essential for normal brain function. Indeed, multiple parameters of individual neurons, including the scale of afferent synapse strengths and the densities of specific ion channels, have been observed to change on homeostatic time scales to oppose the effects of chronic changes in synaptic input. This raises the question of whether these processes are controlled by a single slow feedback variable or multiple slow variables. A single homeostatic process providing negative feedback to a neuron's firing rate naturally maintains a stable homeostatic equilibrium with a characteristic mean firing rate; but the conditions under which multiple slow feedbacks produce a stable homeostatic equilibrium have not yet been explored. Here we study a highly general model of homeostatic firing rate control in which two slow variables provide negative feedback to drive a firing rate toward two different target rates. Using dynamical systems techniques, we show that such a control system can be used to stably maintain a neuron's characteristic firing rate mean and variance in the face of perturbations, and we derive conditions under which this happens. We also derive expressions that clarify the relationship between the homeostatic firing rate targets and the resulting stable firing rate mean and variance. We provide specific examples of neuronal systems that can be effectively regulated by dual homeostasis. One of these examples is a recurrent excitatory network, which a dual feedback system can robustly tune to serve as an integrator.

  14. Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary

    Directory of Open Access Journals (Sweden)

    Chubing Zhang

    2014-01-01

    Full Text Available This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier.

  15. Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary

    Science.gov (United States)

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier. PMID:24782667

  16. Reduction of delayed-neutron contribution to variance-to-mean ratio by application of difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Mouri, Tomoaki; Ohtani, Nobuo

    1999-01-01

    The difference-filtering correlation analysis was applied to time-sequence neutron count data measured in a slightly subcritical assembly, where the Feynman-α analysis suffered from large contribution of delayed neutron to the variance-to-mean ratio of counts. The prompt-neutron decay constant inferred from the present filtering analysis agreed very closely with that by pulsed neutron experiment, and no dependence on the gate-time range specified could be observed. The 1st-order filtering was sufficient for the reduction of the delayed-neutron contribution. While the conventional method requires a choice of analysis formula appropriate to a gate-time range, the present method is applicable to a wide variety of gate-time ranges. (author)
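
    The flavour of the filtering idea can be conveyed with a small numerical sketch: the ordinary Feynman-Y statistic (variance-to-mean ratio of gated counts minus one) is computed once on raw counts and once on first-differenced counts, which suppresses slowly varying components such as a power drift. The count data below are synthetic and the normalization used for the differenced statistic is an illustrative choice, not the published formulae.

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y statistic: variance-to-mean ratio of gated counts minus one."""
    return np.var(counts, ddof=1) / np.mean(counts) - 1.0

def feynman_y_diff(counts, order=1):
    """Same statistic applied to difference-filtered counts.

    A k-th order difference removes polynomial trends up to degree k-1; the 2**order
    factor restores the Poisson (uncorrelated) baseline for a first-order filter and
    is an illustrative normalization only.
    """
    d = np.diff(counts, n=order)
    return np.var(d, ddof=1) / (2.0**order * np.mean(counts)) - 1.0

# Synthetic gated counts: a correlated (chain-like) component plus a slow linear drift.
rng = np.random.default_rng(7)
n = 5000
base = rng.poisson(lam=100.0, size=n) + rng.poisson(lam=5.0, size=n // 5).repeat(5)
drift = np.linspace(0.0, 60.0, n)                 # slowly rising background
counts = base + rng.poisson(drift)

print("Y without filtering   :", feynman_y(counts))        # inflated by the drift
print("Y with 1st-order filter:", feynman_y_diff(counts))  # drift largely removed
```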

  17. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    Science.gov (United States)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work presents Ant Colony Optimization (ACO), which was initially developed to be a meta-heuristic for combinatorial optimization, for solving the cardinality constraints Markowitz mean-variance portfolio model (nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now. Using heuristic algorithms in this case is imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March, 1992 to September, 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in UK, S&P 100 in USA and Nikkei 225 in Japan. The test results indicate that the ACO is much more robust and effective than Particle swarm optimization (PSO), especially for low-risk investment portfolios.

  18. Genetic selection for increased mean and reduced variance of twinning rate in Belclare ewes.

    Science.gov (United States)

    Cottle, D J; Gilmour, A R; Pabiou, T; Amer, P R; Fahey, A G

    2016-04-01

    It is sometimes possible to breed for more uniform individuals by selecting animals with a greater tendency to be less variable, that is, those with a smaller environmental variance. This approach has been applied to reproduction traits in various animal species. We have evaluated fecundity in the Irish Belclare sheep breed by analyses of flocks with differing average litter size (number of lambs per ewe per year, NLB) and have estimated the genetic variance in environmental variance of lambing traits using double hierarchical generalized linear models (DHGLM). The data set comprised 9470 litter size records from 4407 ewes collected in 56 flocks. The percentage of pedigreed lambing ewes with singles, twins and triplets was 30, 54 and 14%, respectively, in 2013 and has been relatively constant for the last 15 years. The variance of NLB increases with the mean in this data; the correlation of mean and standard deviation across sires is 0.50. The breeding goal is to increase the mean NLB without unduly increasing the incidence of triplets and higher litter sizes. The heritability estimates for lambing traits were NLB, 0.09; triplet occurrence (TRI) 0.07; and twin occurrence (TWN), 0.02. The highest and lowest twinning flocks differed by 23% (75% versus 52%) in the proportion of ewes lambing twins. Fitting bivariate sire models to NLB and the residual from the NLB model using a double hierarchical generalized linear model (DHGLM) found a strong genetic correlation (0.88 ± 0.07) between the sire effect for the magnitude of the residual (VE) and sire effects for NLB, confirming the general observation that increased average litter size is associated with increased variability in litter size. We propose a threshold model that may help breeders with low litter size increase the percentage of twin bearers without unduly increasing the percentage of ewes bearing triplets in Belclare sheep. © 2015 Blackwell Verlag GmbH.

  19. On the Computation of Optimal Monotone Mean-Variance Portfolios via Truncated Quadratic Utility

    OpenAIRE

    Ales Cerný; Fabio Maccheroni; Massimo Marinacci; Aldo Rustichini

    2008-01-01

    We report a surprising link between optimal portfolios generated by a special type of variational preferences called divergence preferences (cf. [8]) and optimal portfolios generated by classical expected utility. As a special case we connect optimization of truncated quadratic utility (cf. [2]) to the optimal monotone mean-variance portfolios (cf. [9]), thus simplifying the computation of the latter.

  20. Genomic selection of crossing partners on basis of the expected mean and variance of their derived lines.

    Science.gov (United States)

    Osthushenrich, Tanja; Frisch, Matthias; Herzog, Eva

    2017-01-01

    In a line or a hybrid breeding program superior lines are selected from a breeding pool as parental lines for the next breeding cycle. From a cross of two parental lines, new lines are derived by single-seed descent (SSD) or doubled haploid (DH) technology. However, not all possible crosses between the parental lines can be carried out due to limited resources. Our objectives were to present formulas to characterize a cross by the mean and variance of the genotypic values of the lines derived from the cross, and to apply the formulas to predict means and variances of flowering time traits in recombinant inbred line families of a publicly available data set in maize. We derived formulas which are based on the expected linkage disequilibrium (LD) between two loci and which can be used for arbitrary mating systems. Results were worked out for SSD and DH lines derived from a cross after an arbitrary number of intermating generations. The means and variances were highly correlated with results obtained by the simulation software PopVar. Compared with these simulations, computation time for our closed formulas was about ten times faster. The means and variances for flowering time traits observed in the recombinant inbred line families of the investigated data set showed correlations of around 0.9 for the means and of 0.46 and 0.65 for the standard deviations with the estimated values. We conclude that our results provide a framework that can be exploited to increase the efficiency of hybrid and line breeding programs by extending genomic selection approaches to the selection of crossing partners.

  1. Genomic selection of crossing partners on basis of the expected mean and variance of their derived lines

    Science.gov (United States)

    Osthushenrich, Tanja; Frisch, Matthias

    2017-01-01

    In a line or a hybrid breeding program superior lines are selected from a breeding pool as parental lines for the next breeding cycle. From a cross of two parental lines, new lines are derived by single-seed descent (SSD) or doubled haploid (DH) technology. However, not all possible crosses between the parental lines can be carried out due to limited resources. Our objectives were to present formulas to characterize a cross by the mean and variance of the genotypic values of the lines derived from the cross, and to apply the formulas to predict means and variances of flowering time traits in recombinant inbred line families of a publicly available data set in maize. We derived formulas which are based on the expected linkage disequilibrium (LD) between two loci and which can be used for arbitrary mating systems. Results were worked out for SSD and DH lines derived from a cross after an arbitrary number of intermating generations. The means and variances were highly correlated with results obtained by the simulation software PopVar. Compared with these simulations, computation time for our closed formulas was about ten times faster. The means and variances for flowering time traits observed in the recombinant inbred line families of the investigated data set showed correlations of around 0.9 for the means and of 0.46 and 0.65 for the standard deviations with the estimated values. We conclude that our results provide a framework that can be exploited to increase the efficiency of hybrid and line breeding programs by extending genomic selection approaches to the selection of crossing partners. PMID:29200436

  2. Managing risk and expected financial return from selective expansion of operating room capacity: mean-variance analysis of a hospital's portfolio of surgeons.

    Science.gov (United States)

    Dexter, Franklin; Ledolter, Johannes

    2003-07-01

    Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk. Surgeon and subspecialty specific hospital financial data are uncertain, a fact that should be taken into account when making decisions about expanding operating room capacity. We show that mean-variance portfolio analysis can incorporate this uncertainty, thereby guiding operating room management decision-making and reducing the chance of a strategic decision being made based on spurious information.
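
    The quadratic-programming formulation mentioned above reduces, in its simplest fully invested form, to a closed-form calculation. The sketch below treats each surgeon's contribution margin per OR hour as an "asset return" with an estimated mean vector and covariance matrix (all numbers hypothetical) and traces a small mean-variance frontier; it is an illustration of the general technique, not the authors' analysis.

```python
import numpy as np

def mean_variance_weights(mu, Sigma, risk_aversion):
    """Fully invested mean-variance weights: maximize mu'w - (risk_aversion/2) w'Sigma w, 1'w = 1.

    Closed-form solution of the equality-constrained quadratic program (KKT conditions).
    Negative weights are allowed in this simple sketch.
    """
    n = len(mu)
    inv = np.linalg.inv(Sigma)
    ones = np.ones(n)
    w_unc = inv @ mu / risk_aversion                    # unconstrained optimum
    # Shift along Sigma^{-1} 1 so that the weights sum to one.
    return w_unc + inv @ ones * (1.0 - ones @ w_unc) / (ones @ inv @ ones)

# Hypothetical estimates: contribution margin per OR hour for four surgeons
# (means in $/h; covariance capturing estimation uncertainty and variability).
mu = np.array([1500.0, 1200.0, 900.0, 2000.0])
Sigma = np.array([[9.0e4, 1.0e4, 0.0,   2.0e4],
                  [1.0e4, 4.0e4, 5.0e3, 0.0],
                  [0.0,   5.0e3, 2.5e4, 0.0],
                  [2.0e4, 0.0,   0.0,   2.5e5]])

for gamma in (1e-3, 5e-3, 2e-2):                        # low to high risk aversion
    w = mean_variance_weights(mu, Sigma, gamma)
    ret, risk = mu @ w, np.sqrt(w @ Sigma @ w)
    print(f"gamma={gamma:g}: weights={np.round(w, 2)}, expected margin={ret:.0f}, sd={risk:.0f}")
```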

  3. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to solve increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform simultaneously estimations of diverse quantities. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propound here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero-variance in the estimation of a mean of these quantities. Just like the zero-variance one, the method we propound is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)

  4. Visualizing measurement for 3D smooth density distributions by means of linear programming

    International Nuclear Information System (INIS)

    Tayama, Norio; Yang, Xue-dong

    1994-01-01

    This paper is concerned with the theoretical possibility of a new visualizing measurement method based on an optimum 3D reconstruction from a few selected projections. A theory of optimum 3D reconstruction by linear programming is discussed, utilizing a few projections of a sampled 3D smooth-density-distribution model which satisfies the condition of the 3D sampling theorem. First, by use of the sampling theorem, it is shown that we can set up simultaneous simple equations which correspond to the case of parallel beams. Then we solve these simultaneous equations by means of a linear programming algorithm and obtain an optimum 3D density distribution image with minimum reconstruction error. The results of computer simulation with the algorithm are presented. (author)
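
    A small sketch of reconstructing a sampled density vector x from a few projections b = A x by linear programming, minimizing the L1 reconstruction error. The projection matrix, image size and data below are hypothetical stand-ins, not the paper's parallel-beam setup.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(1)
      n_pix, n_rays = 16, 12
      A = rng.integers(0, 2, size=(n_rays, n_pix)).astype(float)   # which pixels each ray crosses
      x_true = rng.random(n_pix)
      b = A @ x_true

      # Variables: [x (n_pix), t (n_rays)]; minimize sum(t) subject to -t <= A x - b <= t, x >= 0
      c = np.concatenate([np.zeros(n_pix), np.ones(n_rays)])
      A_ub = np.block([[A, -np.eye(n_rays)], [-A, -np.eye(n_rays)]])
      b_ub = np.concatenate([b, -b])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_pix + n_rays))
      x_rec = res.x[:n_pix]

      print("L1 data residual:", np.abs(A @ x_rec - b).sum().round(6))
      print("max pixel error :", np.abs(x_rec - x_true).max().round(3))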

  5. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    Directory of Open Access Journals (Sweden)

    Zhu Xiao

    2016-05-01

    Full Text Available In this paper, a novel nonlinear framework of smoothing method, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy, taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated using real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS significantly improves the vehicle state accuracy and outperforms the existing filtering and smoothing methods.

  6. The Variance-covariance Method using IOWGA Operator for Tourism Forecast Combination

    Directory of Open Access Journals (Sweden)

    Liangping Wu

    2014-08-01

    Full Text Available Three combination methods commonly used in tourism forecasting are the simple average method, the variance-covariance method and the discounted MSFE method. These methods assign each individual forecasting model a weight that does not change over time. In this study, we introduce into tourism forecasting the IOWGA operator combination method, which can overcome this defect of the previous three combination methods. Moreover, we further investigate the performance of the four combination methods through a theoretical evaluation and a forecasting evaluation. The results of the theoretical evaluation show that the IOWGA operator combination method performs extremely well and outperforms the other forecast combination methods. Furthermore, in the forecasting evaluation the IOWGA operator combination method performs well and is almost identical to the variance-covariance combination method. The IOWGA operator combination method mainly reflects the maximization of forecasting accuracy, whereas the variance-covariance combination method mainly reflects the reduction of forecast error. For future research, it may be worthwhile to introduce and examine other new combination methods that may improve forecasting accuracy, or to employ other techniques to control the timing for updating the weights in combined forecasts.
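
    A minimal sketch of the variance-covariance combination referred to above: each model receives a weight proportional to the inverse of its historical forecast-error variance. The two forecast series and the actual values are invented numbers, used only for illustration.

      import numpy as np

      actual = np.array([100., 108., 115., 121., 130., 138.])
      f1 = np.array([ 98., 110., 113., 124., 128., 141.])   # model 1 forecasts (hypothetical)
      f2 = np.array([105., 104., 118., 118., 134., 133.])   # model 2 forecasts (hypothetical)

      var1 = np.var(actual - f1, ddof=1)
      var2 = np.var(actual - f2, ddof=1)
      w1 = (1 / var1) / (1 / var1 + 1 / var2)                # inverse-error-variance weights
      w2 = (1 / var2) / (1 / var1 + 1 / var2)

      combined = w1 * f1 + w2 * f2
      print("weights:", round(w1, 3), round(w2, 3))
      print("combined-forecast error variance:", np.var(actual - combined, ddof=1))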

  7. A convolution method for predicting mean treatment dose including organ motion at imaging

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2000-01-01

    Full text: The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. Mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of treatment delivery variance and organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ± 1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
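
    A sketch of the convolution idea above: a static dose distribution is blurred with a Gaussian kernel whose variance is the sum of the organ-motion and set-up error variances. The 2D dose array, the sigmas and the isotropic kernel are hypothetical simplifications of the paper's method.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      static_dose = np.zeros((64, 64))
      static_dose[24:40, 24:40] = 60.0        # idealized 60 Gy square field

      sigma_motion_mm, sigma_setup_mm, pixel_mm = 4.0, 3.0, 2.0
      sigma_total_px = np.sqrt(sigma_motion_mm**2 + sigma_setup_mm**2) / pixel_mm   # summed variances

      mean_dose = gaussian_filter(static_dose, sigma=sigma_total_px)
      print("max static dose:", static_dose.max(), "Gy; max mean treatment dose:", round(float(mean_dose.max()), 1), "Gy")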

  8. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers should strictly adhere to the rules of extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.

  9. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated in a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using incorrect dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and to two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)

  10. Diffusion-Based Trajectory Observers with Variance Constraints

    DEFF Research Database (Denmark)

    Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo

    Diffusion-based trajectory observers have been recently proposed as a simple and efficient framework to solve diverse smoothing problems in underwater navigation. For instance, to obtain estimates of the trajectories of an underwater vehicle given position fixes from an acoustic positioning system...... of smoothing and is determined by resorting to trial and error. This paper presents a methodology to choose the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented...

  11. k-Means has polynomial smoothed complexity

    NARCIS (Netherlands)

    Arthur, David; Manthey, Bodo; Röglin, Heiko; Spielman, D.A.

    2009-01-01

    The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method is studied here in the model of smoothed analysis, where it is shown to have polynomial smoothed complexity.

  12. Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods

    International Nuclear Information System (INIS)

    Hora, H.; Aydin, M.

    1992-01-01

    The control of the very complex behavior of a plasma interacting with a laser by smoothing with induced spatial incoherence or other methods has been related to improving the lateral uniformity of the irradiation. While this is important, numerical hydrodynamic studies show that the very strong temporal pulsation (stuttering) is also largely suppressed by these smoothing methods.

  13. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.

  14. Feynman variance-to-mean method

    International Nuclear Information System (INIS)

    Dowdy, E.J.; Hansen, G.E.; Robba, A.A.

    1985-01-01

    The Feynman and other fluctuation techniques have been shown to be useful for determining the multiplication of subcritical systems. The moments of the counting distribution from neutron detectors are analyzed to yield the multiplication value. The authors present the methodology, some selected applications and results, and comparisons with Monte Carlo calculations
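
    A minimal sketch of the Feynman variance-to-mean (Feynman-Y) statistic: detector counts are binned into gates of width T and the excess of the count variance over the count mean is computed. The event list below is a synthetic Poisson stream, so Y stays near zero; correlated fission chains in a multiplying system would give Y > 0.

      import numpy as np

      rng = np.random.default_rng(3)
      event_times = np.cumsum(rng.exponential(scale=1e-4, size=200_000))   # synthetic arrival times (s)

      def feynman_y(times, gate_width):
          edges = np.arange(0.0, times[-1], gate_width)
          counts, _ = np.histogram(times, bins=edges)
          return counts.var(ddof=1) / counts.mean() - 1.0

      for T in (1e-4, 1e-3, 1e-2):
          print(f"gate {T:.0e} s : Y = {feynman_y(event_times, T):+.4f}")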

  15. A nonlinear wavelet method for data smoothing of low-level gamma-ray spectra

    International Nuclear Information System (INIS)

    Gang Xiao; Li Deng; Benai Zhang; Jianshi Zhu

    2004-01-01

    A nonlinear wavelet method was designed for smoothing low-level gamma-ray spectra. The spectra of a 60Co graduated radioactive source and a mixed soil sample were smoothed with this method and with a 5-point smoothing method for comparison. The FWHM of the 1,332 keV peak of the 60Co source and the absolute activities of 238U in the soil sample were calculated. The results show that the nonlinear wavelet method is better than the traditional method, with less loss of spectral peak and a more complete reduction of statistical fluctuation. (author)
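
    A minimal sketch of nonlinear wavelet smoothing by soft thresholding of detail coefficients, in the spirit of the method above but not the authors' exact threshold rule. It assumes the PyWavelets package; the single-peak spectrum, wavelet choice and universal threshold are illustrative.

      import numpy as np
      import pywt

      rng = np.random.default_rng(4)
      channels = np.arange(1024)
      spectrum = 500 * np.exp(-0.5 * ((channels - 660) / 4.0) ** 2) + 20   # one peak + flat background
      noisy = rng.poisson(spectrum).astype(float)

      coeffs = pywt.wavedec(noisy, "sym8", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745                        # robust noise estimate
      uthresh = sigma * np.sqrt(2 * np.log(noisy.size))                     # universal threshold
      coeffs[1:] = [pywt.threshold(c, uthresh, mode="soft") for c in coeffs[1:]]
      smoothed = pywt.waverec(coeffs, "sym8")[: noisy.size]

      print("residual std near the peak:", np.std((smoothed - spectrum)[640:680]).round(2))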

  16. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  17. The mean-variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI.

    Science.gov (United States)

    Thompson, William H; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections that should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity and the problem of deciding which brain connections that are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed.
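
    A sketch of the mean-versus-variance view of dynamic connectivity discussed above: sliding-window correlations between two regional time-series are computed, then the mean and the variance of that correlation time-series. The signals are synthetic surrogates, not fMRI data, and the window length is an arbitrary illustrative choice.

      import numpy as np

      rng = np.random.default_rng(5)
      n_t, win = 600, 60
      shared = rng.standard_normal(n_t)
      roi_a = shared + 0.8 * rng.standard_normal(n_t)
      roi_b = shared + 0.8 * rng.standard_normal(n_t)

      corr_series = np.array([
          np.corrcoef(roi_a[t:t + win], roi_b[t:t + win])[0, 1]
          for t in range(0, n_t - win)
      ])
      print("mean connectivity:", corr_series.mean().round(3),
            "| variance of connectivity:", corr_series.var(ddof=1).round(4))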

  18. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.

  19. An impact analysis of forecasting methods and forecasting parameters on bullwhip effect

    Science.gov (United States)

    Silitonga, R. Y. H.; Jelly, N.

    2018-04-01

    The bullwhip effect is the amplification of demand variance from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, simulations can be developed. Previous studies have simulated the bullwhip effect in several ways, such as mathematical modelling, information control modelling, and computer programs. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio caused by differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving-average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving-average period, increasing the smoothing parameter, and increasing the signalling factor produce a larger bullwhip effect ratio. Meanwhile, the safety stock factor had no impact on the bullwhip effect.
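
    A sketch of one bullwhip-ratio computation: demand is forecast with exponential smoothing and orders follow a simple order-up-to rule, so that Var(orders)/Var(demand) > 1 indicates amplification. The demand process, smoothing parameter, lead time, safety factor and the assumption of a known demand standard deviation are all illustrative, not the Bullwhip Explorer model.

      import numpy as np

      rng = np.random.default_rng(6)
      demand = 100 + rng.normal(0, 10, size=2000)
      alpha, lead_time, z = 0.4, 2, 1.65          # smoothing parameter, lead time, safety factor

      forecast = demand[0]
      orders, prev_target = [], None
      for d in demand:
          forecast = alpha * d + (1 - alpha) * forecast                   # exponential smoothing
          target = lead_time * forecast + z * 10 * np.sqrt(lead_time)     # order-up-to level (std assumed known)
          if prev_target is not None:
              orders.append(d + target - prev_target)                     # replenish demand + adjust target
          prev_target = target

      bullwhip_ratio = np.var(orders, ddof=1) / np.var(demand, ddof=1)
      print("bullwhip effect ratio:", round(bullwhip_ratio, 2))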

  20. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    International Nuclear Information System (INIS)

    Zhang Guiyong; Liu Guirong

    2010-01-01

    In the framework of a weakened weak (W^2) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W^2 formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H^1 space, but in a G^1 space. By introducing the generalized gradient smoothing operation properly, the requirement on the functions is weakened beyond the already weakened requirement for functions in an H^1 space, and a G^1 space can be viewed as a space of functions with a weakened weak (W^2) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W^2 formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W^2 formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) the method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model and

  1. Application of effective variance method for contamination monitor calibration

    International Nuclear Information System (INIS)

    Goncalez, O.L.; Freitas, I.S.M. de.

    1990-01-01

    In this report, the calibration of a thin-window Geiger-Muller type monitor for surface alpha contamination is presented. The calibration curve is obtained by least-squares fitting with effective variance. The method and the approach to the calculation are briefly discussed. (author)
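
    A sketch of least-squares fitting with effective variance for a straight-line calibration y = a + b*x when both coordinates carry uncertainties: the x-uncertainty is folded into an effective weight 1/(sigma_y^2 + (b*sigma_x)^2) and the fit is iterated. The calibration points and uncertainties below are invented for illustration.

      import numpy as np

      x  = np.array([10., 50., 100., 200., 400.])     # reference surface emission rate (hypothetical)
      sx = 0.03 * x                                   # its uncertainty
      y  = np.array([8.1, 41.5, 80.2, 165.0, 331.0])  # monitor reading (hypothetical)
      sy = np.array([0.9, 2.1, 4.0, 8.0, 16.0])

      b = 0.8                                          # initial slope guess
      for _ in range(10):                              # iterate until the slope stabilizes
          w = 1.0 / (sy**2 + (b * sx)**2)              # effective-variance weights
          W, Wx, Wy = w.sum(), (w * x).sum(), (w * y).sum()
          Wxx, Wxy = (w * x * x).sum(), (w * x * y).sum()
          b = (W * Wxy - Wx * Wy) / (W * Wxx - Wx**2)  # weighted least-squares slope
          a = (Wy - b * Wx) / W                        # weighted least-squares intercept

      print(f"calibration line: y = {a:.3f} + {b:.4f} x")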

  2. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this more faithful representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, each consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned compared to the mean-variance approach.

  3. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  4. Smoothing of the Time Structure of Slowly Extracted Beam From Synchrotron by RF-Knock-out Method

    International Nuclear Information System (INIS)

    Voloshnyuk, A.V.; Bezshyjko, O.A.; Dolinskiy, A.V.; Dolinskij, A.V.

    2005-01-01

    Results of a study on smoothing the time structure of a bunch slowly extracted from a synchrotron are presented. A numerical algorithm has been designed to study the influence of the radio-frequency field of the resonator on the time structure of the bunch. The algorithm is based on the Monte Carlo method, in which particles in the beam are extracted by slowly moving them towards the third-order resonance conditions. First experiments showed that the characteristics of the time structure are considerably smoothed when synchrotron oscillations are used. A theoretical explanation of the factors influencing the time structure of the slowly extracted beam is given in this work

  5. Advanced methods of analysis variance on scenarios of nuclear prospective

    International Nuclear Information System (INIS)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-01-01

    Traditional techniques for the propagation of variance are not very reliable when relative uncertainties reach 100%; for this reason, less conventional methods are used instead, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
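
    A toy sketch contrasting first-order (linear) variance propagation with Monte Carlo propagation for a product of two quantities whose relative uncertainties are of the order of 100%, the situation mentioned above. The lognormal input distributions and their parameters are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 200_000

      # Two inputs with roughly 100% relative uncertainty, modelled here as lognormal draws
      a = rng.lognormal(mean=0.0, sigma=0.8, size=n)
      b = rng.lognormal(mean=0.0, sigma=0.8, size=n)
      y = a * b

      # First-order propagation for y = a*b: (sy/y)^2 ~ (sa/a)^2 + (sb/b)^2
      rel_a, rel_b = a.std() / a.mean(), b.std() / b.mean()
      linear_rel = np.sqrt(rel_a**2 + rel_b**2)

      print("Monte Carlo relative uncertainty:", round(y.std() / y.mean(), 2))
      print("linear-propagation estimate     :", round(linear_rel, 2))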

  6. A method for smoothing segmented lung boundary in chest CT images

    Science.gov (United States)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use differences in the gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using a scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and efficiently find rapidly changing curvature. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated with respect to visual inspection, accuracy and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.

  7. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic under the actual sample design to the variance of that statistic under a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which were set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tille, 2000) is a well-established method for obtaining variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
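
    A minimal sketch of Taylor linearization for a ratio R = Y/X under simple random sampling: the linearized variable z_i = (y_i - R*x_i)/X_hat is formed and its sampling variance is used as the variance of R. The simulated data are illustrative, and design weights and finite-population corrections are omitted; this is not the generalized (influence-function) linearization used for the Laeken indicators.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 400
      x = rng.gamma(shape=4.0, scale=5.0, size=n)          # e.g. a household-size proxy (hypothetical)
      y = 2.5 * x + rng.normal(0, 4, size=n)               # e.g. a household-income proxy (hypothetical)

      X_hat, Y_hat = x.mean(), y.mean()
      R = Y_hat / X_hat

      z = (y - R * x) / X_hat                              # linearized variable
      var_R = z.var(ddof=1) / n                            # SRS variance of the linearized mean

      print("ratio estimate:", round(R, 3), "| linearized std error:", round(float(np.sqrt(var_R)), 4))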

  8. An evaluation of how downscaled climate data represents historical precipitation characteristics beyond the means and variances

    CSIR Research Space (South Africa)

    Kusangaya, S

    2016-09-01

    Full Text Available represented the underlying historical precipitation characteristics beyond the means and variances. Using the uMngeni Catchment in KwaZulu-Natal, South Africa as a case study, the occurrence of rainfall, rainfall threshold events and wet dry sequence...

  9. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic

  10. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
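
    A sketch of a smoothing-based sensitivity measure in the spirit of the techniques above: a LOWESS smooth of the model output against each input is fitted separately, and the fraction of output variance explained by the smooth is used to rank the inputs. The analytic test model is illustrative, not the WIPP performance-assessment model, and the stepwise multiple-predictor construction of the paper is not reproduced.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      n = 500
      x1, x2, x3 = rng.uniform(-1, 1, (3, n))
      y = np.sin(np.pi * x1) + 0.3 * x2**2 + 0.05 * rng.standard_normal(n)   # x3 is inactive

      for name, x in (("x1", x1), ("x2", x2), ("x3", x3)):
          smooth = sm.nonparametric.lowess(y, x, frac=0.4, return_sorted=False)
          r2 = 1.0 - np.var(y - smooth) / np.var(y)
          print(f"{name}: variance explained by LOWESS fit = {r2:.2f}")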

  11. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  12. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  13. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in

  14. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

    Variance-based sensitivity analysis has been widely studied and has asserted itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single set of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method in the estimation of variance-based sensitivity indices, and its convergence and other performances are investigated. Since the method heavily depends on the partition scheme, the influence of the partition scheme is discussed and the optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one
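
    A sketch of the scatter-plot/space-partition estimate of a first-order sensitivity index: samples are binned on one input and the variance of the per-bin output means approximates Var(E[Y|Xi]), so a single set of model runs serves all inputs. The Ishigami-type test function and the equal-probability binning are illustrative choices, not the paper's optimal partition scheme.

      import numpy as np

      rng = np.random.default_rng(10)
      n, n_bins = 20_000, 40
      x1, x2, x3 = rng.uniform(-np.pi, np.pi, (3, n))
      y = np.sin(x1) + 7.0 * np.sin(x2)**2 + 0.1 * x3**4 * np.sin(x1)

      def first_order_index(xi, y, n_bins):
          edges = np.quantile(xi, np.linspace(0, 1, n_bins + 1))           # equal-probability partition
          bins = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, n_bins - 1)
          bin_means = np.array([y[bins == b].mean() for b in range(n_bins)])
          return bin_means.var() / y.var()                                  # Var(E[Y|Xi]) / Var(Y)

      for name, xi in (("x1", x1), ("x2", x2), ("x3", x3)):
          print(f"S_{name} ~ {first_order_index(xi, y, n_bins):.2f}")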

  15. Capacity limitations to extract the mean emotion from multiple facial expressions depend on emotion variance.

    Science.gov (United States)

    Ji, Luyan; Pourtois, Gilles

    2018-04-20

    We examined the processing capacity and the role of emotion variance in ensemble representation for multiple facial expressions shown concurrently. A standard set size manipulation was used, whereby the sets consisted of 4, 8, or 16 morphed faces each uniquely varying along a happy-angry continuum (Experiment 1) or a neutral-happy/angry continuum (Experiments 2 & 3). Across the three experiments, we reduced the amount of emotion variance in the sets to explore the boundaries of this process. Participants judged the perceived average emotion from each set on a continuous scale. We computed and compared objective and subjective difference scores, using the morph units and post-experiment ratings, respectively. Results of the subjective scores were more consistent than the objective ones across the first two experiments where the variance was relatively large, and revealed each time that increasing set size led to a poorer averaging ability, suggesting capacity limitations in establishing ensemble representations for multiple facial expressions. However, when the emotion variance in the sets was reduced in Experiment 3, both subjective and objective scores remained unaffected by set size, suggesting that the emotion averaging process was unlimited in these conditions. Collectively, these results suggest that extracting mean emotion from a set composed of multiple faces depends on both structural (attentional) and stimulus-related effects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Isolating Trait and Method Variance in the Measurement of Callous and Unemotional Traits.

    Science.gov (United States)

    Paiva-Salisbury, Melissa L; Gill, Andrew D; Stickle, Timothy R

    2017-09-01

    To examine hypothesized influence of method variance from negatively keyed items in measurement of callous-unemotional (CU) traits, nine a priori confirmatory factor analysis model comparisons of the Inventory of Callous-Unemotional Traits were evaluated on multiple fit indices and theoretical coherence. Tested models included a unidimensional model, a three-factor model, a three-bifactor model, an item response theory-shortened model, two item-parceled models, and three correlated trait-correlated method minus one models (unidimensional, correlated three-factor, and bifactor). Data were self-reports of 234 adolescents (191 juvenile offenders, 43 high school students; 63% male; ages 11-17 years). Consistent with hypotheses, models accounting for method variance substantially improved fit to the data. Additionally, bifactor models with a general CU factor better fit the data compared with correlated factor models, suggesting a general CU factor is important to understanding the construct of CU traits. Future Inventory of Callous-Unemotional Traits analyses should account for method variance from item keying and response bias to isolate trait variance.

  17. Towards explaining the speed of k-means

    NARCIS (Netherlands)

    Manthey, Bodo; van de Pol, Jan Cornelis; Raamsdonk, F.; Stoelinga, Mariëlle Ida Antoinette

    2011-01-01

    The $k$-means method is a popular algorithm for clustering, known for its speed in practice. This stands in contrast to its exponential worst-case running-time. To explain the speed of the $k$-means method, a smoothed analysis has been conducted. We sketch this smoothed analysis and a generalization

  18. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a coefficient of variation squared (CV^2). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr^2) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt^2). We found that CVt^2 was only 5.4% higher than CVr^2. Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CVt^2 was 0.10 (range: 0.03-0.30), while the mean CV^2 including noise was 0.24 (range: 0.10-0.59). CVt^2 was on average 41.5% of the CV^2 measured including noise (range: 17.8-71.2%). The reproducibility of CVt^2 was evaluated using three repeated PET scans from five subjects. Individual CVt^2 were within 16% of each subject's mean and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt^2 in PET scans, and may be useful for similar statistical problems in experimental data.
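
    A sketch of the extrapolation described above: the measured relative variance (CV^2) of data formed from n averaged measurements is fitted as a linear function of 1/n, and the intercept at 1/n -> 0 is the noise-free heterogeneity estimate. The data pairs below are invented for illustration.

      import numpy as np

      n_avg   = np.array([1, 2, 4, 8, 16])                 # number of averaged measurements
      cv2_obs = np.array([0.52, 0.33, 0.21, 0.16, 0.13])   # observed CV^2 at each n (hypothetical)

      slope, intercept = np.polyfit(1.0 / n_avg, cv2_obs, deg=1)
      print("noise-free CV^2 estimate (intercept):", round(intercept, 3))
      print("noise contribution per single measurement (slope):", round(slope, 3))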

  19. Assessment of smoothed spectra using autocorrelation function

    International Nuclear Information System (INIS)

    Urbanski, P.; Kowalska, E.

    2006-01-01

    Recently, data and signal smoothing have become almost standard procedures in spectrometric and chromatographic methods. In radiometry, the main purpose of applying smoothing is to minimise statistical fluctuations and avoid distortion. The aim of this work was to find a qualitative parameter which could be used as a figure of merit for detecting distortion of smoothed spectra, based on the linear model. It is assumed that as long as the part of the raw spectrum removed by the smoothing procedure (v_s) is of a random nature, the smoothed spectrum can be considered undistorted. Thanks to this feature of the autocorrelation function, drifts of the mean value in the removed noise v_s, as well as its periodicity, can be detected more easily from the autocorrelogram than from the original data.

  20. Gradient approach to quantify the gradation smoothness for output media

    Science.gov (United States)

    Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun

    2010-01-01

    We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method determined as a function of the 95th percentile of the second derivative for the tone-jump estimator and the 5th percentile of the first derivative for the tone-clipping estimator. The performance of the model and of a previously suggested method was evaluated psychophysically, and their prediction accuracies were compared to each other. Our model showed a stronger Pearson correlation with the corresponding visual data, and the magnitude of the Pearson correlation reached up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of the representative memory colors (blue sky, green grass and Caucasian skin) were rendered as gradational scales and utilized as the test stimuli.

  1. Comparing transformation methods for DNA microarray data

    Directory of Open Access Journals (Sweden)

    Zwinderman Aeilko H

    2004-06-01

    Full Text Available Abstract Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification, the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method.

  2. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    Science.gov (United States)

    Jurczyk, Jan; Eckrot, Alexander; Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis peaking in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.

  3. Estimating the mean and variance of measurements from serial radioactive decay schemes with emphasis on 222Rn and its short-lived progeny

    International Nuclear Information System (INIS)

    Inkret, W.C.; Borak, T.B.; Boes, D.C.

    1990-01-01

    Classically, the mean and variance of radioactivity measurements are estimated from Poisson distributions. However, the random distribution of observed events is not Poisson when the half-life is short compared with the interval of observation or when more than one event can be associated with a single initial atom. Procedures were developed to estimate the mean and variance of single measurements of serial radioactive processes. Results revealed that observations from the three consecutive alpha emissions beginning with 222Rn are positively correlated. Since the Poisson estimator ignores covariance terms, it underestimates the true variance of the measurement. The reverse is true for mixtures of radon daughters only. (author)

  4. Local smoothness for global optical flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    2012-01-01

    by this technique and work on local-global optical flow we propose a simple method for fusing optical flow estimates of different smoothness by evaluating interpolation quality locally by means of L1 block match on the corresponding set of gradient images. We illustrate the method in a setting where optical flows...

  5. A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy

    Science.gov (United States)

    Bennun, Leonardo

    2017-07-01

    A new smoothing method for improving the identification and quantification of spectral functions, based on prior knowledge of the signals that are expected to be quantified, is presented. These signals are used as weighting coefficients in the smoothing algorithm. This smoothing method was conceived to be applied in atomic and nuclear spectroscopies, preferably in techniques where net counts are proportional to acquisition time, such as particle-induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. This algorithm, when properly applied, distorts neither the form nor the intensity of the signal, so it is well suited for all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, much more so than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in the accuracy of the results. We still have to evaluate the improvement in the quality of the results when this method is applied to real experimental data. We expect better characterization of the net peak areas and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In the general case, when this algorithm is applied to experimental results, it is also required that the sought characteristic functions, needed for this weighted smoothing method, be obtained from a system with strong stability. If the sought signals are not perfectly clean, this method should be applied with care.

  6. Application of Data Smoothing Method in Signal Processing for Vortex Flow Meters

    Directory of Open Access Journals (Sweden)

    Zhang Jun

    2017-01-01

    Full Text Available The vortex flow meter is a typical piece of flow measurement equipment. Its output signals can easily be impaired by environmental conditions. In order to obtain an improved estimate of the time-averaged velocity from the vortex flow meter, a signal filtering method is applied in this paper. The method is based on a simple Savitzky-Golay smoothing filter algorithm. Following this algorithm, a numerical program is developed in Python with the scientific library NumPy. Two sample data sets are processed through the program. The results demonstrate that the processed data are acceptable compared with the original data, and an improved estimate of the time-averaged velocity is obtained from the smoothed curves. Finally, the simple data smoothing program proves usable and stable for this filter.
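
    A sketch of Savitzky-Golay smoothing of a noisy velocity signal, as discussed above. SciPy's ready-made savgol_filter is used here instead of the paper's own NumPy implementation, and the synthetic signal, window length and polynomial order are illustrative choices.

      import numpy as np
      from scipy.signal import savgol_filter

      rng = np.random.default_rng(11)
      t = np.linspace(0.0, 10.0, 1000)
      velocity = 2.0 + 0.3 * np.sin(0.5 * t) + rng.normal(0.0, 0.15, t.size)   # noisy velocity reading

      smoothed = savgol_filter(velocity, window_length=51, polyorder=3)

      print("raw time-averaged velocity     :", round(float(velocity.mean()), 3))
      print("smoothed time-averaged velocity:", round(float(smoothed.mean()), 3))
      print("residual noise std after filter:", round(float((velocity - smoothed).std()), 3))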

  7. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  8. The Threat of Common Method Variance Bias to Theory Building

    Science.gov (United States)

    Reio, Thomas G., Jr.

    2010-01-01

    The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…

  9. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.) [de]

  10. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

    The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear v_V, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform...... probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear v_N, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution...... to the larger malignant nuclei. Finally, the variance in the volume distribution of nuclear volume is estimated by shape-independent estimates of the volume-weighted second moment of the nuclear volume, v_V2, using both a manual and a computer-assisted approach. The working procedure for the description of 3-D

  11. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    Directory of Open Access Journals (Sweden)

    Jan Jurczyk

    Full Text Available The world is still recovering from the financial crisis peaking in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.

  12. Mean-Variance stochastic goal programming for sustainable mutual funds' portfolio selection.

    Directory of Open Access Journals (Sweden)

    García-Bernabeu, Ana

    2015-11-01

    Full Text Available Mean-Variance Stochastic Goal Programming (MV-SGP) models provide satisficing investment solutions in uncertain contexts. In this work, an MV-SGP model is proposed for portfolio selection which includes goals with regard to traditional and sustainable assets. The proposed approach is based on a two-step procedure. In the first step, sustainability and/or financial screens are applied to a set of assets (mutual funds) previously evaluated with TOPSIS to determine the opportunity set. In the second step, satisficing portfolios of assets are obtained using a Goal Programming approach. Two different goals are considered: the first goal reflects only the purely financial side of the target, while the second goal refers to the sustainable side. Absolute Risk Aversion (ARA) coefficients are estimated and incorporated into the investment decision-making approach using two different approaches.

  13. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters of arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60%, 100%) were applied before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). Analysis of variance of the EDV, ESV and LVEF values was performed with SPSS software. Result: There was no statistical difference between the three groups. Conclusion: When arrhythmia patients undergo gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)

  14. On mean reward variance in semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2005-01-01

    Vol. 62, No. 3 (2005), pp. 387-397, ISSN 1432-2994. R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294. Institutional research plan: CEZ:AV0Z10750506. Keywords: Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.259, year: 2005

  15. Simultaneous estimation of the in-mean and in-variance causal connectomes of the human brain.

    Science.gov (United States)

    Duggento, A; Passamonti, L; Guerrisi, M; Toschi, N

    2017-07-01

    In recent years, the study of the human connectome (i.e. of statistical relationships between non-spatially contiguous neurophysiological events in the human brain) has been enormously fuelled by technological advances in high-field functional magnetic resonance imaging (fMRI) as well as by coordinated worldwide data-collection efforts like the Human Connectome Project (HCP). In this context, Granger Causality (GC) approaches have recently been employed to incorporate information about the directionality of the influence exerted by one brain region on another. However, while fluctuations in the Blood Oxygenation Level Dependent (BOLD) signal at rest also contain important information about the physiological processes that underlie neurovascular coupling and associations between disjoint brain regions, so far all connectivity estimation frameworks have focused on central tendencies, hence completely disregarding so-called in-variance causality (i.e. the directed influence of the volatility of one signal on the volatility of another). In this paper, we develop a framework for simultaneous estimation of both in-mean and in-variance causality in complex networks. We validate our approach using synthetic data from complex ensembles of coupled nonlinear oscillators, and successively employ HCP data to provide the very first estimate of the in-variance connectome of the human brain.

  16. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    Science.gov (United States)

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.

  17. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  18. Gender Differences in Variance and Means on the Naglieri Non-Verbal Ability Test: Data from the Philippines

    Science.gov (United States)

    Vista, Alvin; Care, Esther

    2011-01-01

    Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…

  19. The boundaries of golden-mean Siegel disks in the complex quadratic Hénon family are not smooth

    OpenAIRE

    Yampolsky, Michael; Yang, Jonguk

    2016-01-01

    As was recently shown by the first author and others, golden-mean Siegel disks of sufficiently dissipative complex quadratic Hénon maps are bounded by topological circles. In this paper we investigate the geometric properties of such curves, and demonstrate that they cannot be $C^1$-smooth.

  20. Power and Sample Size Calculations for Testing Linear Combinations of Group Means under Variance Heterogeneity with Applications to Meta and Moderation Analyses

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2015-01-01

    The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…

  1. Power generation mixes evaluation applying the mean-variance theory. Analysis of the choices for Japanese energy policy

    International Nuclear Information System (INIS)

    Tabaru, Yasuhiko; Nonaka, Yuzuru; Nonaka, Shunsuke; Endou, Misao

    2013-01-01

    Optimal Japanese power generation mixes in 2030, for both economic efficiency and energy security (less cost variance risk), are evaluated by applying the mean-variance portfolio theory. Technical assumptions, including remaining generation capacity out of the present generation mix, future load duration curve, and Research and Development risks for some renewable energy technologies in 2030, are taken into consideration as either the constraints or parameters for the evaluation. Efficiency frontiers, which consist of the optimal generation mixes for several future scenarios, are identified, taking not only power balance but also capacity balance into account, and are compared with three power generation mixes submitted by the Japanese government as 'the choices for energy and environment'. (author)

  2. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  3. Modelling Conditional and Unconditional Heteroskedasticity with Smoothly Time-Varying Structure

    DEFF Research Database (Denmark)

    Amado, Christina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the conditional variance to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterizations describe both nonlinearity and structural change in the conditional and unconditional variances where the transition between regimes over time is smooth. A modelling strategy for these new time-varying parameter GARCH models is developed. It relies on a sequence of Lagrange multiplier tests, and the adequacy of the estimated models is investigated by Lagrange multiplier type misspecification tests. Finite-sample properties of these procedures and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns illustrate the functioning and properties of our modelling strategy in practice.

  4. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    Science.gov (United States)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes obvious and results in some non-positive signals in the raw measurements. Non-positive signals must be converted to positive values so that they can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert non-positive signals to positive ones mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function which replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
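
    As a rough illustration of the second step described above (the first, PWLS-based restoration step is not reproduced), the following Python sketch replaces non-positive raw measurements with a local mean before the log transform. The window size, noise levels and sinogram values are illustrative assumptions, not parameters from the paper.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def make_log_safe(sinogram, window=5):
            """Replace non-positive raw measurements with a local mean so the
            sinogram can be log-transformed (conversion step only)."""
            positive = sinogram > 0
            # local mean computed over the positive samples in the window
            local_sum = uniform_filter(np.where(positive, sinogram, 0.0), size=window)
            local_count = uniform_filter(positive.astype(float), size=window)
            local_mean = np.where(local_count > 0,
                                  local_sum / np.maximum(local_count, 1e-12),
                                  np.finfo(float).tiny)
            out = np.where(positive, sinogram, local_mean)
            return np.maximum(out, np.finfo(float).tiny)   # strictly positive for the log

        # toy example: noisy low-count measurements with a few non-positive samples
        rng = np.random.default_rng(0)
        raw = rng.normal(loc=20.0, scale=15.0, size=(64, 64))  # electronic noise pushes some counts below zero
        safe = make_log_safe(raw)
        post_log = -np.log(safe / safe.max())                  # attenuation line integrals
        print("non-positive before:", np.sum(raw <= 0), "after:", np.sum(safe <= 0))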

  5. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Directory of Open Access Journals (Sweden)

    Zdravko Kačič

    2009-01-01

    Full Text Available This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE. The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.

  6. Online Speech/Music Segmentation Based on the Variance Mean of Filter Bank Energy

    Science.gov (United States)

    Kos, Marko; Grašič, Matej; Kačič, Zdravko

    2009-12-01

    This paper presents a novel feature for online speech/music segmentation based on the variance mean of filter bank energy (VMFBE). The idea that encouraged the feature's construction is energy variation in a narrow frequency sub-band. The energy varies more rapidly, and to a greater extent for speech than for music. Therefore, an energy variance in such a sub-band is greater for speech than for music. The radio broadcast database and the BNSI broadcast news database were used for feature discrimination and segmentation ability evaluation. The calculation procedure of the VMFBE feature has 4 out of 6 steps in common with the MFCC feature calculation procedure. Therefore, it is a very convenient speech/music discriminator for use in real-time automatic speech recognition systems based on MFCC features, because valuable processing time can be saved, and computation load is only slightly increased. Analysis of the feature's speech/music discriminative ability shows an average error rate below 10% for radio broadcast material and it outperforms other features used for comparison, by more than 8%. The proposed feature as a stand-alone speech/music discriminator in a segmentation system achieves an overall accuracy of over 94% on radio broadcast material.
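
    The VMFBE computation described in the two records above can be sketched as follows; this minimal Python version uses a crude rectangular split of the FFT spectrum instead of the paper's filter bank, and the frame length, hop size and context window are illustrative assumptions.

        import numpy as np

        def vmfbe(signal, fs, n_bands=20, frame_len=0.025, hop=0.010, context=50):
            """Variance Mean of Filter Bank Energy (sketch): per-frame band energies,
            variance of each band's energy over a sliding context of frames, then
            the mean of those variances across bands."""
            n, h = int(frame_len * fs), int(hop * fs)
            frames = [signal[i:i + n] for i in range(0, len(signal) - n, h)]
            spec = np.abs(np.fft.rfft(np.asarray(frames) * np.hanning(n), axis=1)) ** 2
            # crude rectangular "filter bank": split the spectrum into equal-width bands
            bands = np.array_split(spec, n_bands, axis=1)
            energy = np.stack([b.sum(axis=1) for b in bands], axis=1)   # (frames, bands)
            log_e = np.log(energy + 1e-12)
            feat = np.array([log_e[max(0, t - context):t + 1].var(axis=0).mean()
                             for t in range(len(log_e))])
            return feat   # higher values are expected for speech than for music

        # usage with a synthetic signal
        fs = 16000
        t = np.arange(fs * 2) / fs
        x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(len(t))
        print(vmfbe(x, fs)[-5:])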

  7. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
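
    As a generic illustration of the kind of variance reduction discussed (not the corrector-problem estimators of the paper), the following sketch compares a plain Monte Carlo average with an antithetic-variates estimator for an arbitrary smooth integrand; the integrand and sample sizes are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        f = lambda u: np.exp(u)            # arbitrary smooth integrand on [0, 1]

        N = 100_000
        # plain Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)
        u = rng.random(N)
        plain = f(u)

        # antithetic variates: pair each draw u with 1 - u and average the pair
        u2 = rng.random(N // 2)
        anti = 0.5 * (f(u2) + f(1.0 - u2))

        print("exact      :", np.e - 1)
        print("plain MC   : mean %.5f, estimator variance %.2e" % (plain.mean(), plain.var() / N))
        print("antithetic : mean %.5f, estimator variance %.2e" % (anti.mean(), anti.var() / (N // 2)))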

  8. VAR Portfolio Optimal: Perbandingan Antara Metode Markowitz dan Mean Absolute Deviation

    Directory of Open Access Journals (Sweden)

    R. Agus Sartono

    2009-05-01

    Full Text Available The portfolio selection method introduced by Harry Markowitz (1952) used variance or standard deviation as a measure of risk. Konno and Yamazaki (1991) introduced another method that used mean absolute deviation as a measure of risk instead of variance. Value-at-Risk (VaR) is a relatively new method to capture risk that has been used by financial institutions. The aim of this research is to compare the mean-variance and mean absolute deviation approaches for two portfolios. Next, we attempt to assess the VaR of the two portfolios using the delta-normal method and historical simulation. We use secondary data from the Jakarta Stock Exchange – LQ45 during 2003. We find a weak positive correlation between standard deviation and return in both portfolios. The delta-normal VaR based on the mean absolute deviation method is higher than the delta-normal VaR based on the mean-variance method. However, based on historical simulation, the difference between the VaR of the two methods is statistically insignificant. Thus, standard deviation is a sufficient measure of portfolio risk. Keywords: portfolio optimization, mean-variance, mean absolute deviation, value-at-risk, delta-normal method, historical simulation method
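
    A minimal Python sketch of the risk measures compared above, using simulated daily returns rather than the LQ45 data: portfolio standard deviation (Markowitz), mean absolute deviation (Konno-Yamazaki), and 95% VaR by both the delta-normal method and historical simulation. The portfolio value, weights and return-generating process are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(42)
        T, n_assets = 250, 4                       # one year of daily returns, 4 stocks
        returns = rng.multivariate_normal(mean=[0.0005] * n_assets,
                                          cov=0.0001 * (0.3 + 0.7 * np.eye(n_assets)),
                                          size=T)
        w = np.full(n_assets, 1.0 / n_assets)      # equally weighted portfolio
        port = returns @ w

        value = 1_000_000.0                        # portfolio value
        alpha = 0.95
        z = 1.645                                  # one-sided 95% normal quantile

        sigma = port.std(ddof=1)                   # Markowitz-style risk measure
        mad = np.mean(np.abs(port - port.mean()))  # Konno-Yamazaki risk measure

        var_delta_normal = z * sigma * value
        var_historical = -np.quantile(port, 1 - alpha) * value

        print(f"std dev {sigma:.5f}, MAD {mad:.5f}")
        print(f"delta-normal 95% VaR : {var_delta_normal:,.0f}")
        print(f"historical   95% VaR : {var_historical:,.0f}")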

  9. Note on an Identity Between Two Unbiased Variance Estimators for the Grand Mean in a Simple Random Effects Model.

    Science.gov (United States)

    Levin, Bruce; Leu, Cheng-Shiun

    2013-01-01

    We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.

  10. Development of phased mission analysis program with Monte Carlo method. Improvement of the variance reduction technique with biasing towards top event

    International Nuclear Information System (INIS)

    Yang Jinan; Mihara, Takatsugu

    1998-12-01

    This report presents a variance reduction technique to estimate the reliability and availability of highly complex systems during phased mission time using Monte Carlo simulation. In this study, we introduced a variance reduction technique based on the concept of distance between the present system state and the cut set configurations. Using this technique, it becomes possible to bias the transition from the operating states to the failed states of components towards the closest cut set. Therefore a component failure can drive the system towards a cut set configuration more effectively. JNC developed the PHAMMON (Phased Mission Analysis Program with Monte Carlo Method) code, which involved two kinds of variance reduction techniques: (1) forced transition, and (2) failure biasing. However, these techniques did not guarantee an effective reduction in variance. For further improvement, a variance reduction technique incorporating the distance concept was introduced into the PHAMMON code and numerical calculations were carried out for different design cases of the decay heat removal system in a large fast breeder reactor. Our results indicate that the addition of this technique incorporating the distance concept is an effective means of further reducing the variance. (author)

  11. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  12. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an "analysis of variance with components of variance estimation". This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, "Studentizing" or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean
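
    A minimal sketch of the components-of-variance idea for a balanced nested design (laboratories, assays within laboratories, replicates within assays), using simulated data rather than RIA results. The method-of-moments estimates below follow the standard expected-mean-squares identities and do not reproduce the transformations or variance-homogeneity tests discussed in the abstract; all sizes and standard deviations are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        L, A, R = 8, 4, 3                 # labs, assays per lab, replicates per assay
        sd_lab, sd_assay, sd_resid = 2.0, 1.0, 0.5

        # simulate a balanced nested design: y[l, a, r] = mu + lab_l + assay_la + error
        y = (100.0
             + sd_lab * rng.standard_normal((L, 1, 1))
             + sd_assay * rng.standard_normal((L, A, 1))
             + sd_resid * rng.standard_normal((L, A, R)))

        grand = y.mean()
        lab_means = y.mean(axis=(1, 2))
        assay_means = y.mean(axis=2)

        # mean squares for the balanced nested ANOVA
        ms_lab = A * R * np.sum((lab_means - grand) ** 2) / (L - 1)
        ms_assay = R * np.sum((assay_means - lab_means[:, None]) ** 2) / (L * (A - 1))
        ms_resid = np.sum((y - assay_means[:, :, None]) ** 2) / (L * A * (R - 1))

        # method-of-moments (expected mean squares) estimates of the components
        var_resid = ms_resid
        var_assay = (ms_assay - ms_resid) / R
        var_lab = (ms_lab - ms_assay) / (A * R)
        print("between-lab %.2f  between-assay %.2f  within-assay %.2f"
              % (var_lab, var_assay, var_resid))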

  13. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z-bar and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z-bar, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z-bar = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z-bar = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic

  14. Scaling of the mean and variance of population dynamics under fluctuating regimes.

    Science.gov (United States)

    Pertoldi, Cino; Faurby, S; Reed, D H; Knape, J; Björklund, M; Lundberg, P; Kaitala, V; Loeschcke, V; Bach, L A

    2014-12-01

    Theoretical ecologists have long sought to understand how the persistence of populations depends on the interactions between exogenous (biotic and abiotic) and endogenous (e.g., demographic and genetic) drivers of population dynamics. Recent work focuses on the autocorrelation structure of environmental perturbations and its effects on the persistence of populations. Accurate estimation of extinction times and especially determination of the mechanisms affecting extinction times is important for biodiversity conservation. Here we examine the interaction between environmental fluctuations and the scaling effect of the mean population size with its variance. We investigate how interactions between environmental and demographic stochasticity can affect the mean time to extinction, change optimal patch size dynamics, and how it can alter the often-assumed linear relationship between the census size and the effective population size. The importance of the correlation between environmental and demographic variation depends on the relative importance of the two types of variation. We found the correlation to be important when the two types of variation were approximately equal; however, the importance of the correlation diminishes as one source of variation dominates. The implications of these findings are discussed from a conservation and eco-evolutionary point of view.

  15. Research on regularized mean-variance portfolio selection strategy with modified Roy safety-first principle.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan

    2016-01-01

    We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first principle for financial portfolio selection is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal stable and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, for the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in the literature and control the downside risk of the portfolio returns.
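
    The following Python sketch illustrates the general idea of norm-regularized mean-variance selection with a smoothed ℓ1 penalty on simulated returns. It does not implement the paper's consolidated variance/safety-first risk measure, and the penalty weights, bounds and return model are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n, T = 10, 500
        R = rng.normal(0.0005, 0.01, size=(T, n))       # simulated daily returns
        mu, Sigma = R.mean(axis=0), np.cov(R.T)

        lam, gamma, eps = 2.0, 1e-3, 1e-8               # return weight, l1 penalty, smoothing

        def objective(w):
            # variance - lam * expected return + smoothed l1 penalty (shrinks gross exposure)
            return w @ Sigma @ w - lam * (mu @ w) + gamma * np.sum(np.sqrt(w ** 2 + eps))

        cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
        bounds = [(-1.0, 1.0)] * n                      # allow shorting so the l1 term is not trivially constant
        w0 = np.full(n, 1.0 / n)
        res = minimize(objective, w0, method='SLSQP', bounds=bounds, constraints=cons)

        w = res.x
        print("weights:", np.round(w, 3))
        print("portfolio mean %.5f, std %.5f" % (mu @ w, np.sqrt(w @ Sigma @ w)))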

  16. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    Science.gov (United States)

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  17. Going beyond the Mean: Using Variances to Enhance Understanding of the Impact of Educational Interventions for Multilevel Models

    Science.gov (United States)

    Peralta, Yadira; Moreno, Mario; Harwell, Michael; Guzey, S. Selcen; Moore, Tamara J.

    2018-01-01

    Variance heterogeneity is a common feature of educational data when treatment differences expressed through means are present, and often reflects a treatment by subject interaction with respect to an outcome variable. Identifying variables that account for this interaction can enhance understanding of whom a treatment does and does not benefit in…

  18. Optimization Stock Portfolio With Mean-Variance and Linear Programming: Case In Indonesia Stock Market

    Directory of Open Access Journals (Sweden)

    Yen Sun

    2010-05-01

    Full Text Available It is observed that the number of Indonesian domestic investors involved in the stock exchange is very small compared to the total population (only about 0.1%). As a result, the Indonesia Stock Exchange (IDX) is highly affected by foreign investors, which can threaten the economy. Domestic investors tend to invest in risk-free assets such as bank deposits since they are not yet familiar with the stock market and are anxious about risk (risk-averse investors). Therefore, it is important to educate domestic investors to participate in the stock exchange. Investing in a portfolio of stocks is one of the best choices for risk-averse investors (such as Indonesian domestic investors) since it offers lower risk for a given level of return. This paper studies the optimization of an Indonesian stock portfolio. The data are the historical returns of 10 LQ 45 stocks over five years (January 2004 – December 2008). The focus is on selecting stocks into a portfolio and constructing 10 stock portfolios using the mean-variance method combined with linear programming (solver). Furthermore, based on the efficient frontier concept and the Sharpe measure, one stock portfolio is picked as the optimum portfolio (namely Portfolio G). The performance of Portfolio G is then evaluated using the Sharpe, Treynor and Jensen measures to show whether its return exceeds the market return. This paper also illustrates how the stock composition of the optimum portfolio (G) succeeds in predicting the portfolio return in a subsequent period (5 January – 3 April 2009). The results show that portfolio optimization using mean-variance (consistent with Markowitz theory) combined with linear programming can be applied to Indonesian stock portfolios. All the measures (Sharpe, Jensen, and Treynor) show that Portfolio G is a superior portfolio. It was also found that the stock composition (weights) of the optimum portfolio (G) can be used to
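
    A short sketch of the three performance measures mentioned above, computed on simulated portfolio and market returns; the risk-free rate, beta and return series are illustrative assumptions, not the LQ 45 data.

        import numpy as np

        rng = np.random.default_rng(11)
        T = 60                                   # monthly observations
        rf = 0.004                               # monthly risk-free rate (illustrative)
        rm = rng.normal(0.010, 0.05, T)          # market (index) returns
        rp = 0.002 + 1.2 * rm + rng.normal(0, 0.02, T)   # portfolio returns

        beta = np.cov(rp, rm)[0, 1] / np.var(rm, ddof=1)
        sharpe = (rp.mean() - rf) / rp.std(ddof=1)       # excess return per unit of total risk
        treynor = (rp.mean() - rf) / beta                # excess return per unit of market risk
        jensen_alpha = rp.mean() - (rf + beta * (rm.mean() - rf))  # return above the CAPM benchmark

        print(f"beta {beta:.2f}  Sharpe {sharpe:.3f}  Treynor {treynor:.4f}  Jensen alpha {jensen_alpha:.4f}")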

  19. A new smoothing modified three-term conjugate gradient method for [Formula: see text]-norm minimization problem.

    Science.gov (United States)

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a kind of nonsmooth optimization problem with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The smoothing modified three-term conjugate gradient method is based on the Polak-Ribière-Polyak conjugate gradient method. Since the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line searches, and it is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
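
    To make the smoothing idea concrete, the sketch below replaces |x_i| with sqrt(x_i^2 + mu^2) and minimizes the resulting smooth objective with SciPy's standard nonlinear conjugate gradient routine. It is not the authors' smoothing modified three-term PRP method, and the problem sizes and parameters are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        m, n, k = 40, 100, 5                       # compressed-sensing style sizes
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        b = A @ x_true

        lam, mu = 0.01, 1e-4                       # l1 weight and smoothing parameter

        def f(x):    # smoothed l1-regularized least squares
            r = A @ x - b
            return 0.5 * r @ r + lam * np.sum(np.sqrt(x ** 2 + mu ** 2))

        def grad(x):
            return A.T @ (A @ x - b) + lam * x / np.sqrt(x ** 2 + mu ** 2)

        res = minimize(f, np.zeros(n), jac=grad, method='CG')
        x_hat = res.x
        print("recovered support:", np.sort(np.where(np.abs(x_hat) > 0.05)[0]))
        print("true support     :", np.sort(np.where(x_true != 0)[0]))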

  20. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
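
    A minimal sketch of the suggested plot on simulated data. The slope-2 reference line shown here (variance proportional to the squared mean, i.e. constant coefficient of variation) is one natural choice of reference and is an assumption rather than a prescription from the paper.

        import numpy as np
        import matplotlib.pyplot as plt

        # trait means and variances measured across an environmental gradient (simulated)
        rng = np.random.default_rng(2)
        env = np.linspace(0, 1, 8)
        means = 10 + 15 * env
        variances = 0.05 * means ** 2.2 * np.exp(rng.normal(0, 0.1, env.size))  # scales with the mean

        x = np.log(means)
        plt.plot(x, np.log(variances), 'o-', label='observed')
        # reference line with slope 2: variance proportional to mean^2 (constant CV)
        plt.plot(x, 2 * x + (np.log(variances[0]) - 2 * x[0]), '--', label='slope 2 reference')
        plt.xlabel('log(mean)'); plt.ylabel('log(variance)'); plt.legend()
        plt.title('Phenotypic variance gradient (sketch)')
        plt.show()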

  1. σ-SCF: A direct energy-targeting method to mean-field excited states.

    Science.gov (United States)

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D; Van Voorhis, Troy

    2017-12-07

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry, a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states, ground or excited, are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.

  2. σ-SCF: A direct energy-targeting method to mean-field excited states

    Science.gov (United States)

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D.; Van Voorhis, Troy

    2017-12-01

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry—a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states—ground or excited—are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.
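
    A toy matrix analogue of the energy-targeting idea: for a Hermitian matrix H and a target energy ω, minimizing ⟨ψ|(H−ω)²|ψ⟩ over normalized ψ selects the eigenvector whose eigenvalue lies closest to ω. This only illustrates the variance-based targeting principle; it is not the mean-field σ-SCF procedure itself, and the matrix and target energy are arbitrary.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 8
        M = rng.standard_normal((n, n))
        H = (M + M.T) / 2                      # toy "Hamiltonian"
        evals, evecs = np.linalg.eigh(H)

        omega = 0.7                            # guess of the energy of the state we want
        W = (H - omega * np.eye(n)) @ (H - omega * np.eye(n))

        # minimizing <psi|(H - omega)^2|psi> over normalized psi = lowest eigenvector of W
        w_evals, w_evecs = np.linalg.eigh(W)
        psi = w_evecs[:, 0]
        energy = psi @ H @ psi                 # energy of the targeted state

        print("spectrum       :", np.round(evals, 3))
        print("targeted energy:", round(float(energy), 3), "(eigenvalue closest to omega =", omega, ")")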

  3. Stochastic Funding of a Defined Contribution Pension Plan with Proportional Administrative Costs and Taxation under Mean-Variance Optimization Approach

    Directory of Open Access Journals (Sweden)

    Charles I Nkeki

    2014-11-01

    Full Text Available This paper aims at studying a mean-variance portfolio selection problem with stochastic salary, proportional administrative costs and taxation in the accumulation phase of a defined contribution (DC) pension scheme. The fund process is subject to taxation while the contribution of the pension plan member (PPM) is tax exempt. It is assumed that the flow of contributions of a PPM is invested into a market that is characterized by a cash account and a stock. The optimal portfolio processes and expected wealth for the PPM are established. The efficient and parabolic frontiers of the PPM's portfolios in mean-variance space are obtained. It was found that the capital market line can be attained when the initial fund and the contribution rate are zero. It was also found that the optimal portfolio process involves an inter-temporal hedging term that offsets shocks to the stochastic salary of the PPM.

  4. Life history traits and exploitation affect the spatial mean-variance relationship in fish abundance.

    Science.gov (United States)

    Kuo, Ting-chun; Mandal, Sandip; Yamauchi, Atsushi; Hsieh, Chih-hao

    2016-05-01

    Fishing is expected to alter the spatial heterogeneity of fishes. As an effective index to quantify spatial heterogeneity, the exponent b in Taylor's power law (V = aM^b) measures how spatial variance (V) varies with changes in mean abundance (M) of a population, with larger b indicating higher spatial aggregation potential (i.e., more heterogeneity). Theory predicts b is related to life history traits, but empirical evidence is lacking. Using 50-yr spatiotemporal data from the California Current Ecosystem, we examined fishing and life history effects on Taylor's exponent by comparing spatial distributions of exploited and unexploited fishes living in the same environment. We found that unexploited species with smaller size and generation time exhibit larger b, supporting theoretical prediction. In contrast, this relationship in exploited species is much weaker, as the exponents of large exploited species were higher than unexploited species with similar traits. Our results suggest that fishing may increase spatial aggregation potential of a species, likely through degrading their size/age structure. Results of moving-window cross-correlation analyses on b vs. age structure indices (mean age and age evenness) for some exploited species corroborate our findings. Furthermore, through linking our findings to other fundamental ecological patterns (occupancy-abundance and size-abundance relationships), we provide theoretical arguments for the usefulness of monitoring the exponent b for management purposes. We propose that age/size-truncated species might have a lower recovery rate in spatial occupancy, and the spatial variance-mass relationship of a species might be non-linear. Our findings provide a theoretical basis explaining why fishery management strategy should be concerned with changes to the age and spatial structure of exploited fishes.
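
    Estimating Taylor's exponent b amounts to an ordinary least-squares fit of log V against log M; a minimal sketch on simulated mean-variance pairs follows (the 'true' a and b and the scatter are illustrative assumptions).

        import numpy as np

        # mean abundance M and spatial variance V of a population observed over many periods (simulated)
        rng = np.random.default_rng(9)
        true_a, true_b = 1.5, 1.8
        M = rng.uniform(1, 100, size=40)
        V = true_a * M ** true_b * np.exp(rng.normal(0, 0.2, M.size))   # lognormal scatter

        # Taylor's power law V = a * M^b  =>  log V = log a + b log M (ordinary least squares)
        b, log_a = np.polyfit(np.log(M), np.log(V), 1)
        print(f"estimated b = {b:.2f} (aggregation exponent), a = {np.exp(log_a):.2f}")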

  5. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean-variance approach

    DEFF Research Database (Denmark)

    Kitzing, Lena

    2014-01-01

    Using cash flow analysis, Monte Carlo simulations and mean-variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same...

  6. VAR Portfolio Optimal: Perbandingan Antara Metode Markowitz Dan Mean Absolute Deviation

    OpenAIRE

    Sartono, R. Agus; Setiawan, Arie Andika

    2006-01-01

    The portfolio selection method introduced by Harry Markowitz (1952) used variance or standard deviation as a measure of risk. Konno and Yamazaki (1991) introduced another method that used mean absolute deviation as a measure of risk instead of variance. Value-at-Risk (VaR) is a relatively new method to capture risk that has been used by financial institutions. The aim of this research is to compare the mean-variance and mean absolute deviation approaches for two portfolios. Next, we attem...

  7. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
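
    As a concrete illustration of first-order variance propagation (one common approximation of the kind examined above, though not necessarily either of the two techniques studied in the paper), the sketch below propagates input variances through a simple two-component AND gate and compares the result with a Monte Carlo estimate. The input means, variances and lognormal input distributions are illustrative assumptions.

        import numpy as np

        # top event: both independent components fail, P_top = p1 * p2
        mean = np.array([0.02, 0.05])          # mean failure probabilities
        var = np.array([1e-5, 4e-5])           # variances of the inputs

        # first-order approximation: Var(p1*p2) ~ p2^2 Var(p1) + p1^2 Var(p2)
        grad = np.array([mean[1], mean[0]])
        var_top_approx = np.sum(grad ** 2 * var)

        # Monte Carlo reference with lognormal inputs (illustrative choice)
        rng = np.random.default_rng(6)
        def lognormal_from_mean_var(m, v, size):
            s2 = np.log(1 + v / m ** 2)
            return rng.lognormal(np.log(m) - s2 / 2, np.sqrt(s2), size)

        N = 200_000
        p1 = lognormal_from_mean_var(mean[0], var[0], N)
        p2 = lognormal_from_mean_var(mean[1], var[1], N)
        var_top_mc = np.var(p1 * p2)

        print(f"first-order variance {var_top_approx:.3e}   Monte Carlo {var_top_mc:.3e}")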

  8. Analysis of inconsistent source sampling in monte carlo weight-window variance reduction methods

    Directory of Open Access Journals (Sweden)

    David P. Griesheimer

    2017-09-01

    Full Text Available The application of Monte Carlo (MC to large-scale fixed-source problems has recently become possible with new hybrid methods that automate generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.

  9. A numerical study of the Regge calculus and smooth lattice methods on a Kasner cosmology

    International Nuclear Information System (INIS)

    Brewin, Leo

    2015-01-01

    Two lattice based methods for numerical relativity, the Regge calculus and the smooth lattice relativity, will be compared with respect to accuracy and computational speed in a full 3+1 evolution of initial data representing a standard Kasner cosmology. It will be shown that both methods provide convergent approximations to the exact Kasner cosmology. It will also be shown that the Regge calculus is of the order of 110 times slower than the smooth lattice method. (paper)

  10. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    Science.gov (United States)

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
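
    A minimal sketch of the residual approach on simulated data: regress an episodic memory score on demographic and brain-volume predictors and keep the residual as the reserve proxy. The predictors, coefficients and sample values are invented for illustration and do not come from the cohort described above.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 244
        # predictors: age, education, whole-brain, hippocampal and WMH volumes (all simulated)
        X = np.column_stack([rng.normal(75, 6, n),          # age (years)
                             rng.normal(12, 4, n),          # education (years)
                             rng.normal(1100, 90, n),       # total brain volume (cm^3)
                             rng.normal(6.5, 0.8, n),       # hippocampal volume (cm^3)
                             rng.normal(5, 3, n)])          # white matter hyperintensity volume
        beta = np.array([-0.05, 0.15, 0.01, 1.2, -0.2])
        memory = X @ beta + rng.normal(0, 2.5, n)           # episodic memory composite

        # regress memory on demographics + brain measures; the residual is the "reserve" proxy
        X1 = np.column_stack([np.ones(n), X])
        coef, *_ = np.linalg.lstsq(X1, memory, rcond=None)
        residual_reserve = memory - X1 @ coef

        print("residual mean %.3f (should be ~0), SD %.3f"
              % (residual_reserve.mean(), residual_reserve.std(ddof=1)))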

  11. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
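
    A small worked example consistent with the note's claim, using the simple mean-variance score $E[X] - k\,\mathrm{Var}(X)$ (an assumed special case; the paper's indifference-curve argument is more general). For a prospect paying a gain $G > 0$ with probability $p$ and nothing otherwise,

        $$E[X] = pG, \qquad \mathrm{Var}(X) = p(1-p)G^{2},$$

    so a decision maker who ranks prospects by $E[X] - k\,\mathrm{Var}(X)$ with $k > 0$ assigns this prospect a negative score whenever

        $$pG - k\,p(1-p)G^{2} < 0 \iff G > \frac{1}{k(1-p)},$$

    that is, a sufficiently large gain $G$ is rejected even though the prospect offers a positive probability of gain and zero probability of loss.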

  12. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  13. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  14. Variance to mean ratio, R(t), for poisson processes on phylogenetic trees.

    Science.gov (United States)

    Goldman, N

    1994-09-01

    The ratio of expected variance to mean, R(t), of numbers of DNA base substitutions for contemporary sequences related by a "star" phylogeny is widely seen as a measure of the adherence of the sequences' evolution to a Poisson process with a molecular clock, as predicted by the "neutral theory" of molecular evolution under certain conditions. A number of estimators of R(t) have been proposed, all predicted to have mean 1 and distributions based on the chi 2. Various genes have previously been analyzed and found to have values of R(t) far in excess of 1, calling into question important aspects of the neutral theory. In this paper, I use Monte Carlo simulation to show that the previously suggested means and distributions of estimators of R(t) are highly inaccurate. The analysis is applied to star phylogenies and to general phylogenetic trees, and well-known gene sequences are reanalyzed. For star phylogenies the results show that Kimura's estimators ("The Neutral Theory of Molecular Evolution," Cambridge Univ. Press, Cambridge, 1983) are unsatisfactory for statistical testing of R(t), but confirm the accuracy of Bulmer's correction factor (Genetics 123: 615-619, 1989). For all three nonstar phylogenies studied, attained values of all three estimators of R(t), although larger than 1, are within their true confidence limits under simple Poisson process models. This shows that lineage effects can be responsible for high estimates of R(t), restoring some limited confidence in the molecular clock and showing that the distinction between lineage and molecular clock effects is vital.(ABSTRACT TRUNCATED AT 250 WORDS)
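
    A minimal simulation sketch of the naive variance-to-mean estimator on a star phylogeny under a strict Poisson clock; the lineage count, expected number of substitutions and replicate count are illustrative, and Kimura's and Bulmer's corrected estimators are not implemented.

        import numpy as np

        rng = np.random.default_rng(13)
        n_lineages, expected_subs, n_reps = 20, 10.0, 5000

        # under a Poisson molecular clock each lineage accumulates Poisson(expected_subs) substitutions
        counts = rng.poisson(expected_subs, size=(n_reps, n_lineages))
        r_hat = counts.var(axis=1, ddof=1) / counts.mean(axis=1)   # naive variance-to-mean estimator

        print("mean of R(t) estimates: %.3f (Poisson expectation ~1)" % r_hat.mean())
        print("95%% range across reps : %.2f - %.2f"
              % tuple(np.quantile(r_hat, [0.025, 0.975])))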

  15. Investigation on filter method for smoothing spiral phase plate

    Science.gov (United States)

    Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian

    2018-03-01

    Spiral phase plates (SPPs) for generating vortex hollow beams are highly efficient in various applications. However, it is difficult to obtain an ideal spiral phase plate because of its continuously varying helical phase and discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate using filter methods. The numerical simulations indicate that different filter methods, including spatial-domain and frequency-domain filters, have distinct impacts on the surface topography of the SPP and on the optical vortex characteristics. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is suitable for the Computer Controlled Optical Surfacing (CCOS) technique and obtains good optical properties.
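
    A rough sketch of spatial Gaussian filtering applied to an ideal spiral phase plate profile. The grid size, topological charge and filter width are illustrative assumptions, and naively filtering across the 2π step is meant only to show the smoothing trade-off, not a fabrication-ready recipe.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # ideal spiral phase plate: height proportional to the azimuthal angle (topological charge l)
        N, l = 512, 1
        y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
        theta = np.arctan2(y, x)
        height = (theta % (2 * np.pi)) * l          # one helical turn with a discontinuous step

        # spatial-domain Gaussian filtering, as in the CCOS-oriented smoothing discussed above
        smoothed = gaussian_filter(height, sigma=6)

        # the vertical cut at x ~ 1 crosses the 2*pi step of the ideal profile
        step_before = np.abs(np.diff(height[:, -1])).max()
        step_after = np.abs(np.diff(smoothed[:, -1])).max()
        print(f"largest jump along the cut: {step_before:.2f} -> {step_after:.2f} rad")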

  16. Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging

    OpenAIRE

    Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfying due to its high level of sidelobes and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV) is proposed to enhance the image quality compared to...

  17. Fuel mix diversification incentives in liberalized electricity markets: A Mean-Variance Portfolio theory approach

    International Nuclear Information System (INIS)

    Roques, Fabien A.; Newbery, David M.; Nuttall, William J.

    2008-01-01

    Monte Carlo simulations of gas, coal and nuclear plant investment returns are used as inputs of a Mean-Variance Portfolio optimization to identify optimal base load generation portfolios for large electricity generators in liberalized electricity markets. We study the impact of fuel, electricity, and CO 2 price risks and their degree of correlation on optimal plant portfolios. High degrees of correlation between gas and electricity prices - as observed in most European markets - reduce gas plant risks and make portfolios dominated by gas plant more attractive. Long-term power purchase contracts and/or a lower cost of capital can rebalance optimal portfolios towards more diversified portfolios with larger shares of nuclear and coal plants

  18. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, and, Tinbergen Institute (Netherlands); Wong, Wing-Keung, E-mail: awong@hkbu.edu.h [Department of Economics, Hong Kong Baptist University (Hong Kong)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This infers that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification.

  19. Market efficiency of oil spot and futures. A mean-variance and stochastic dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam (Netherlands); Wong, Wing-Keung [Department of Economics, Hong Kong Baptist University (China); Tinbergen Institute (Netherlands)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent between investing in spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification. (author)

  20. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    International Nuclear Information System (INIS)

    Lean, Hooi Hooi; McAleer, Michael; Wong, Wing-Keung

    2010-01-01

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent between investing in spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification.

  1. Fuel mix diversification incentives in liberalized electricity markets: A Mean-Variance Portfolio theory approach

    Energy Technology Data Exchange (ETDEWEB)

    Roques, F.A.; Newbery, D.M.; Nuttall, W.J. [University of Cambridge, Cambridge (United Kingdom). Faculty of Economics

    2008-07-15

    Monte Carlo simulations of gas, coal and nuclear plant investment returns are used as inputs of a Mean-Variance Portfolio optimization to identify optimal base load generation portfolios for large electricity generators in liberalized electricity markets. We study the impact of fuel, electricity, and CO2 price risks and their degree of correlation on optimal plant portfolios. High degrees of correlation between gas and electricity prices - as observed in most European markets - reduce gas plant risks and make portfolios dominated by gas plant more attractive. Long-term power purchase contracts and/or a lower cost of capital can rebalance optimal portfolios towards more diversified portfolios with larger shares of nuclear and coal plants.

  2. Fast mean and variance computation of the diffuse sound transmission through finite-sized thick and layered wall and floor systems

    Science.gov (United States)

    Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.

    2018-05-01

    A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.

  3. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    Science.gov (United States)

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.

  4. Smooth Phase Interpolated Keying

    Science.gov (United States)

    Borah, Deva K.

    2007-01-01

    Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. In this step, the constellation is divided into two groups by assigning, to information symbols, phase values

  5. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    Full Text Available To solve the problem that dynamic Allan variance (DAVAR) with fixed length of window cannot meet the identification accuracy requirement of the fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristic of the FOG signal, a fuzzy controller with the inputs of the first and second derivatives of the FOG signal is designed to estimate the window length of the DAVAR. Then the Allan variances of the signals during the time-variant window are simulated to obtain the DAVAR of the FOG signal to describe the dynamic characteristic of the time-varying FOG signal. Additionally, a performance evaluation index of the algorithm based on radar chart is proposed. Experiment results show that, compared with DAVAR methods using different fixed window lengths, the change of the FOG signal with time can be identified effectively and the evaluation index of performance can be enhanced by at least 30% by the DAVAR method with time-variant window length based on fuzzy control.
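
    The fuzzy-control DAVAR described above is not reproduced here, but the quantity it windows is the ordinary Allan variance. The following Python sketch computes a plain (non-overlapping) Allan variance and re-evaluates it over fixed-length sliding windows; the sampling rate, window size and synthetic gyro-like signal are illustrative assumptions only.

```python
import numpy as np

def allan_variance(y, fs, m_list):
    """Non-overlapping Allan variance of rate samples y taken at rate fs (Hz).

    m_list holds cluster sizes; the averaging time for cluster size m is m / fs.
    """
    y = np.asarray(y, dtype=float)
    taus, avars = [], []
    for m in m_list:
        n_clusters = len(y) // m
        if n_clusters < 2:
            break
        means = y[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avars.append(0.5 * np.mean(np.diff(means) ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)

def dynamic_allan_variance(y, fs, window, step, m_list):
    """Allan variance recomputed in sliding windows (fixed length here;
    the record above makes the window length time-variant via fuzzy control)."""
    results = []
    for start in range(0, len(y) - window + 1, step):
        taus, avars = allan_variance(y[start:start + window], fs, m_list)
        results.append((start / fs, taus, avars))
    return results

# Example with a synthetic gyro-like signal: white noise plus a slow drift.
rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0, 200, 1 / fs)
signal = 0.02 * rng.standard_normal(t.size) + 1e-4 * t
for t0, taus, avars in dynamic_allan_variance(signal, fs, window=5000, step=5000,
                                              m_list=[1, 2, 4, 8, 16, 32, 64]):
    print(f"t = {t0:6.1f} s   ADEV at tau={taus[0]:.2f}s: {np.sqrt(avars[0]):.4f}")
```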

  6. A Smooth Newton Method for Nonlinear Programming Problems with Inequality Constraints

    Directory of Open Access Journals (Sweden)

    Vasile Moraru

    2012-02-01

    Full Text Available The paper presents a reformulation of the Karush-Kuhn-Tucker (KKT) system associated with a nonlinear programming problem into an equivalent system of smooth equations. The classical Newton method is applied to solve the system of equations. The superlinear convergence of the primal sequence generated by the proposed method is proved. Preliminary numerical results on a test set of problems are presented.

  7. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.

  8. International Diversification Versus Domestic Diversification: Mean-Variance Portfolio Optimization and Stochastic Dominance Approaches

    Directory of Open Access Journals (Sweden)

    Fathi Abid

    2014-05-01

    Full Text Available This paper applies the mean-variance portfolio optimization (PO) approach and the stochastic dominance (SD) test to examine preferences for international diversification versus domestic diversification from American investors’ viewpoints. Our PO results imply that the domestic diversification strategy dominates the international diversification strategy at a lower risk level and the reverse is true at a higher risk level. Our SD analysis shows that there is no arbitrage opportunity between international and domestic stock markets; domestically diversified portfolios with smaller risk dominate internationally diversified portfolios with larger risk and vice versa; and at the same risk level, there is no difference between the domestically and internationally diversified portfolios. Nonetheless, we cannot find any domestically diversified portfolios that stochastically dominate all internationally diversified portfolios, but we find some internationally diversified portfolios with small risk that dominate all the domestically diversified portfolios.

  9. A proxy for variance in dense matching over homogeneous terrain

    Science.gov (United States)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    variance in intensity, the topography was reconstructed entirely. This indicates that interpolation was applied to a large extent. To assess this amount of interpolation, processing is done with imagery that is gradually downgraded. Linking these products with the variance indicator (SNR) yields a quantitative relation between the influence of interpolation on the topography estimate and image contrast. Our proposed method is capable of providing a clear indication of variance in reconstructions from UAV photogrammetry. This indicator has a practical advantage, as it can be computed before the computationally intensive matching phase, so an acquired dataset can be tested in the field. If an area with too little contrast is identified, camera settings can be adjusted for a new flight, or additional measurements can be made by traditional means.

  10. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic...

  11. Non-local means denoising of dynamic PET images.

    Directory of Open Access Journals (Sweden)

    Joyita Dutta

    Full Text Available Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high

  12. Non-local means denoising of dynamic PET images.

    Science.gov (United States)

    Dutta, Joyita; Leahy, Richard M; Li, Quanzheng

    2013-01-01

    Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high intensity details while
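
    For readers unfamiliar with the baseline technique, the sketch below is a minimal spatial non-local means filter in Python. It implements only conventional NLM with spatial patches (one of the comparison methods above), not the spatiotemporal, late-frame-weighted variant proposed for dynamic PET; the patch size, search window and smoothing parameter h are arbitrary illustrative choices.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Minimal non-local means for a 2D image (spatial patches only).

    For each pixel, a weighted average over a search window is taken, with
    weights based on the similarity of the surrounding patches.
    """
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ic, jc = i + pr + sr, j + pr + sr
            ref = padded[ic - pr:ic + pr + 1, jc - pr:jc + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = padded[ic + di - pr:ic + di + pr + 1,
                                  jc + dj - pr:jc + dj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    weights.append(np.exp(-d2 / (h ** 2)))  # similarity weight
                    values.append(padded[ic + di, jc + dj])
            out[i, j] = np.average(values, weights=weights)
    return out

# Tiny demo on a noisy step image.
rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = nlm_denoise(noisy, patch=3, search=7, h=0.3)
print("error std before:", np.std(noisy - clean).round(3),
      "after:", np.std(denoised - clean).round(3))
```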

  13. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
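
    A minimal sketch of the variance-of-moving-window-averages computation described above is given below, assuming a 2D lattice of counts; how the resulting VMWA values are related to cluster size in the paper is not reproduced, and the example fields are synthetic.

```python
import numpy as np

def vmwa(field, window):
    """Variance of moving window averages (VMWA) for a 2D lattice of counts.

    A window of size `window` x `window` is slid across the field, the mean of
    each window is recorded, and the variance of those means is returned.
    """
    field = np.asarray(field, dtype=float)
    H, W = field.shape
    means = [
        field[i:i + window, j:j + window].mean()
        for i in range(H - window + 1)
        for j in range(W - window + 1)
    ]
    return np.var(means)

# Clustered vs. random artificial data: VMWA falls off more slowly with
# window size for the clustered pattern.
rng = np.random.default_rng(2)
random_field = rng.poisson(2.0, (40, 40))
clustered = np.zeros((40, 40)); clustered[10:20, 10:20] = rng.poisson(8.0, (10, 10))
for w in (2, 4, 8):
    print(f"window {w}: random {vmwa(random_field, w):.3f}  "
          f"clustered {vmwa(clustered, w):.3f}")
```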

  14. Using LMS Method in Smoothing Reference Centile Curves for Lipid Profile of Iranian Children and Adolescents: A CASPIAN Study

    Directory of Open Access Journals (Sweden)

    M Hoseini

    2012-05-01

    Full Text Available

    Background and Objectives: LMS is a general method for fitting smooth reference centile curves in the medical sciences. Such curves provide the distribution of a measurement as it changes according to a covariate such as age or time. The method describes the distribution by three parameters: the mean, the coefficient of variation, and the Box-Cox power (skewness). Applying maximum penalized likelihood and spline functions, the three curves are estimated and fitted with optimum smoothness. This study was conducted to provide the percentiles of the lipid profile of Iranian children and adolescents by the LMS method.

     

    Methods: Smoothed reference centile curves of four lipid measures (triglycerides, total-, LDL- and HDL-cholesterol) were developed from the data of 4824 Iranian school students, aged 6-18 years, living in six cities (Tabriz, Rasht, Gorgan, Mashad, Yazd and Tehran-Firouzkouh) in Iran. Demographic and laboratory data were taken from the national study of the surveillance and prevention of non-communicable diseases from childhood (CASPIAN Study). After data management, the data of 4824 students were included in the statistical analysis, which was conducted with the modified LMS method proposed by Cole. The curves were developed with degrees of freedom from four to ten, and tools such as deviance, Q tests, and detrended Q-Q plots were used to assess the goodness of fit of the models.

     

    Results: All tools confirmed the model, and the LMS method proved appropriate for smoothing reference centiles. The method revealed the distributional features of the variables, serving as an objective tool for determining their relative importance.

     

    Conclusion: This study showed that the triglycerides level is higher and
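
    To make the role of the three LMS parameters concrete, the sketch below converts L (Box-Cox power), M (the central curve, the median in Cole's formulation) and S (coefficient of variation) at a single age point into centile values and z-scores, following Cole's standard LMS formulas. The parameter values are invented for illustration and are not the CASPIAN estimates.

```python
import numpy as np
from scipy.stats import norm

def lms_centile(L, M, S, alpha):
    """Centile value from the LMS parameters at tail probability alpha."""
    z = norm.ppf(alpha)
    if abs(L) < 1e-8:
        return M * np.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

def lms_zscore(y, L, M, S):
    """Z-score of a measurement y given the LMS parameters."""
    if abs(L) < 1e-8:
        return np.log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

# Hypothetical smoothed parameters for triglycerides (mg/dL) at one age point;
# these numbers are illustrative only, not the CASPIAN estimates.
L, M, S = -0.8, 85.0, 0.35
for p in (0.05, 0.50, 0.95):
    print(f"{int(p * 100)}th centile: {lms_centile(L, M, S, p):6.1f} mg/dL")
print("z-score of 140 mg/dL:", round(lms_zscore(140.0, L, M, S), 2))
```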

  15. The summation of the matrix elements of Hamiltonian and transition operators. The variance of the emission spectrum

    International Nuclear Information System (INIS)

    Karaziya, R.I.; Rudzikajte, L.S.

    1988-01-01

    The general method to obtain explicit expressions for sums of the matrix elements of Hamiltonian and transition operators has been extended. It can be used for determining the main characteristics of atomic spectra, such as the mean energy, the variance, the asymmetry coefficient, etc., as well as for the average quantities which describe configuration mixing. By means of this method the formula for the variance of the emission spectrum has been derived. It has been shown that the variance of the emission spectrum can be expressed by the variances of the energy spectra of the initial and final configurations and by additional terms caused by the distribution of the intensity in the spectrum.

  16. Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints

    Science.gov (United States)

    Yan, Wei

    2012-01-01

    An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes, in line with the actual prices of stocks and the normality and stability of the financial market. The short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of the value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example illustrating M-V portfolio selection under discontinuous prices is presented.

  17. Global Distributions of Temperature Variances At Different Stratospheric Altitudes From Gps/met Data

    Science.gov (United States)

    Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.

    The GPS/MET measurements at altitudes 5 - 35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second order polynomial approximations in 5 - 7 km thick layers centered at 10, 20 and 30 km. Temperature inclinations from the averaged values and their variances obtained for each profile are averaged for each month of year during the GPS/MET experiment. Global distributions of temperature variances have inhomogeneous structure. Locations and latitude distributions of the maxima and minima of the variances depend on altitudes and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of GPS/MET data analysis.

  18. Research on industrialization of electric vehicles with its demand forecast using exponential smoothing method

    Directory of Open Access Journals (Sweden)

    Zhanglin Peng

    2015-04-01

    Full Text Available Purpose: The electric vehicle (EV) industry has developed rapidly worldwide, especially in developed countries, but gaps remain among countries and regions. The industrialization experience of EVs in developed countries can greatly help EV industrialization in developing countries. This paper studies the industrialization path and prospects of American EVs by forecasting EV demand and its share of total car sales, based on 37 months of EV and car sales spanning Dec. 2010 to Dec. 2013, and identifies key measures to help the Chinese government and automobile enterprises promote Chinese EV industrialization. Design/methodology: Compared with the single and double exponential smoothing methods, an improved triple exponential smoothing method is applied in this study. Findings: The results show that the American EV industry will sustain its growth over the next 3 months, and that EV prices, fossil fuel prices, the number of charging stations, EV technology, and government market and taxation policies each influence EV sales differently. EV manufacturers and policy-makers can therefore adjust technology tactics and market measures according to the forecast results, and China can learn from American EV policies and measures to develop its own EV industry. Originality/value: The main contribution of this paper is the use of the triple exponential smoothing method to forecast EV demand and its share of total automobile sales, and to analyze the industrial development of Chinese EVs in light of the American EV industry.
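
    As a point of reference for the method named in this record, the sketch below implements plain textbook triple (Holt-Winters) exponential smoothing with additive seasonality, not the improved variant developed in the paper. The monthly sales series and the smoothing constants are synthetic assumptions.

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.4, beta=0.1, gamma=0.3, horizon=3):
    """Triple (Holt-Winters) exponential smoothing with additive seasonality.

    y: observed series, m: season length, horizon: steps to forecast ahead.
    """
    y = np.asarray(y, dtype=float)
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - y[:m].mean())

    for t in range(len(y)):
        s = season[t % m]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s

    n = len(y)
    return [level + (h + 1) * trend + season[(n + h) % m] for h in range(horizon)]

# Illustrative monthly EV sales with an upward trend and a 12-month season;
# the figures are synthetic, not the 2010-2013 data used in the paper.
months = np.arange(37)
sales = 2000 + 120 * months + 300 * np.sin(2 * np.pi * months / 12)
print("next 3 months:", [round(v) for v in holt_winters_additive(sales, m=12)])
```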

  19. A Mean-Variance Diagnosis of the Financial Crisis: International Diversification and Safe Havens

    Directory of Open Access Journals (Sweden)

    Alexander Eptas

    2010-12-01

    Full Text Available We use mean-variance analysis with short selling constraints to diagnose the effects of the recent global financial crisis by evaluating the potential benefits of international diversification in the search for ‘safe havens’. We use stock index data for a sample of developed, advanced-emerging and emerging countries. ‘Text-book’ results are obtained for the pre-crisis analysis with the optimal portfolio for any risk-averse investor being obtained as the tangency portfolio of the All-Country portfolio frontier. During the crisis there is a disjunction between bank lending and stock markets revealed by negative average returns and an absence of any empirical Capital Market Line. Israel and Colombia emerge as the safest havens for any investor during the crisis. For Israel this may reflect the protection afforded by special trade links and diaspora support, while for Colombia we speculate that this reveals the impact on world financial markets of the demand for cocaine.

  20. Bayesian Exponential Smoothing.

    OpenAIRE

    Forbes, C.S.; Snyder, R.D.; Shami, R.S.

    2000-01-01

    In this paper, a Bayesian version of the exponential smoothing method of forecasting is proposed. The approach is based on a state space model containing only a single source of error for each time interval. This model allows us to improve current practices surrounding exponential smoothing by providing both point predictions and measures of the uncertainty surrounding them.

  1. On the Computation of the RMSEA and CFI from the Mean-And-Variance Corrected Test Statistic with Nonnormal Data in SEM.

    Science.gov (United States)

    Savalei, Victoria

    2018-01-01

    A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality, while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed. Follow up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and c) to caution that the logic of the new nonnormality corrections to RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator, and cannot easily be generalized to the most commonly used categorical data estimators.

  2. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  3. Increased Wear Resistance of Surfaces of Rotation Bearings Methods Strengthening-Smoothing Processing

    Directory of Open Access Journals (Sweden)

    A.A. Tkachuk

    2016-05-01

    Full Text Available Trends in modern engineering place ever higher demands on bearing quality. This is especially true for the production of special-purpose bearings with high rotation speeds and long service life. Far greater opportunities for controlling the quality of surface layers arise when smoothing-strengthening methods based on surface plastic deformation are applied. Working models of the turning, grinding and smoothing tool sequence revealed how operational parameters are formed over the technological cycle of roller-bearing rings. A model of the dynamics of elastic deformation of the workpiece and tool helps identify the action of the radial force in the “surface – indenter” contact. Mathematical modelling resolved a number of issues relevant to the process.

  4. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    Science.gov (United States)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

    The method called "PVI" (Partial Variance of Increments) has been increasingly used in analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper will summarize key features of the method and provide a synopsis of the main results obtained by various groups using the method. This will enable new users or those considering methods of this type to find details and background collected in one place.
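
    The PVI statistic itself is simple to compute: the magnitude of the vector increment at a fixed lag is normalized by its root-mean-square over the interval. The sketch below follows that standard definition; the synthetic field, the lag and the PVI > 3 threshold are illustrative choices, not values from the studies reviewed above.

```python
import numpy as np

def pvi(b, lag):
    """Partial Variance of Increments (PVI) series for a vector time series b.

    b has shape (n_samples, 3), e.g. magnetic field components; lag is in
    samples. PVI(t) = |db(t, lag)| / sqrt(<|db(., lag)|^2>), where the average
    runs over the whole interval.
    """
    b = np.asarray(b, dtype=float)
    db = b[lag:] - b[:-lag]             # vector increments
    mag = np.linalg.norm(db, axis=1)    # |db(t, lag)|
    return mag / np.sqrt(np.mean(mag ** 2))

# Synthetic example: random-walk fluctuations with one embedded sharp jump,
# which stands out as a PVI spike (a common threshold is PVI > 3).
rng = np.random.default_rng(3)
b = np.cumsum(0.1 * rng.standard_normal((5000, 3)), axis=0)
b[2500:] += np.array([2.0, -1.5, 0.5])  # an abrupt, current-sheet-like jump
series = pvi(b, lag=10)
print("fraction of samples with PVI > 3:", np.mean(series > 3).round(4))
```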

  5. Assessment of finite element and smoothed particles hydrodynamics methods for modeling serrated chip formation in hardened steel

    Directory of Open Access Journals (Sweden)

    Usama Umer

    2016-05-01

    Full Text Available This study aims to perform comparative analyses in modeling serrated chip morphologies using traditional finite element and smoothed particles hydrodynamics methods. Although finite element models are being employed in predicting machining performance variables for the last two decades, many drawbacks and limitations exist with the current finite element models. The problems like excessive mesh distortions, high numerical cost of adaptive meshing techniques, and need of geometric chip separation criteria hinder its practical implementation in metal cutting industries. In this study, a mesh free method, namely, smoothed particles hydrodynamics, is implemented for modeling serrated chip morphology while machining AISI H13 hardened tool steel. The smoothed particles hydrodynamics models are compared with the traditional finite element models, and it has been found that the smoothed particles hydrodynamics models have good capabilities in handling large distortions and do not need any geometric or mesh-based chip separation criterion.

  6. Reactivity determination in accelerator driven nuclear reactors by statistics from neutron detectors (Feynman-Alpha Method)

    International Nuclear Information System (INIS)

    Ceder, M.

    2002-03-01

    The Feynman-alpha method is used in traditional nuclear reactors to determine the subcritical reactivity of a system. The method is based on the measurement of the mean number and the variance of detector counts for different measurement times. The measurement is performed while a steady-state neutron flux is maintained in the reactor by an external neutron source, as a rule a radioactive source. From a plot of the variance-to-mean ratio as a function of measurement time ('gate length'), the reactivity can be determined by fitting the measured curve to the analytical solution. A new situation arises in the planned accelerator driven systems (ADS). An ADS will be run in a subcritical mode, and the steady flux will be maintained by an accelerator based source. Such a source has statistical properties that are different from those of a steady radioactive source. As one example, in a currently running European Community project for ADS research, the MUSE project, the source will be a periodically pulsed neutron generator. The theory of Feynman-alpha method needs to be extended to such nonstationary sources. There are two ways of performing and evaluating such pulsed source experiments. One is to synchronise the detector time gate start with the beginning of an incoming pulse. The Feynman-alpha method has been elaborated for such a case recently. The other method can be called stochastic pulsing. It means that there is no synchronisation between the detector time gate start and the source pulsing, i.e. the start of each measurement is chosen at a random time. The analytical solution to the Feynman-alpha formula from this latter method is the subject of this report. We have obtained an analytical Feynman-alpha formula for the case of stochastic pulsing by two different methods. One is completely based on the use of the symbolic algebra code Mathematica, whereas the other is based on complex function techniques. Closed form solutions could be obtained by both methods
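
    A minimal numerical sketch of the two steps described above, forming the variance-to-mean ratio of gated counts and fitting the point-model Feynman-Y curve, is given below. The detection times are synthetic (a pure Poisson stream, so the measured Y is flat at zero), and the curve that is fitted is generated from the model itself with added noise; all parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def feynman_y(event_times, gate, t_total):
    """Variance-to-mean ratio minus one for counts collected in consecutive gates."""
    edges = np.arange(0.0, t_total + gate, gate)
    counts, _ = np.histogram(event_times, bins=edges)
    return counts.var() / counts.mean() - 1.0

def y_model(T, y_inf, alpha):
    """Point-model Feynman-Y curve: Y(T) = Y_inf * (1 - (1 - exp(-alpha*T)) / (alpha*T))."""
    return y_inf * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))

gates = np.logspace(-3, 0, 25)   # gate lengths in seconds

# For a purely Poisson (uncorrelated) source the Y curve is flat at zero:
rng = np.random.default_rng(4)
poisson_times = np.sort(rng.uniform(0.0, 1000.0, 100_000))
print("Poisson check, Y at 0.1 s gate:",
      round(feynman_y(poisson_times, 0.1, 1000.0), 3))

# Fitting step: recover alpha from a noisy Y(T) curve generated by the model
# itself (a stand-in for measured data from a subcritical core).
true_y_inf, true_alpha = 0.8, 150.0   # illustrative values
y_meas = y_model(gates, true_y_inf, true_alpha) + 0.01 * rng.standard_normal(gates.size)
(y_inf_fit, alpha_fit), _ = curve_fit(y_model, gates, y_meas, p0=[0.5, 100.0])
print(f"fitted Y_inf = {y_inf_fit:.2f}, alpha = {alpha_fit:.1f} 1/s")
```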

  7. Reactivity determination in accelerator driven nuclear reactors by statistics from neutron detectors (Feynman-Alpha Method)

    Energy Technology Data Exchange (ETDEWEB)

    Ceder, M

    2002-03-01

    The Feynman-alpha method is used in traditional nuclear reactors to determine the subcritical reactivity of a system. The method is based on the measurement of the mean number and the variance of detector counts for different measurement times. The measurement is performed while a steady-state neutron flux is maintained in the reactor by an external neutron source, as a rule a radioactive source. From a plot of the variance-to-mean ratio as a function of measurement time ('gate length'), the reactivity can be determined by fitting the measured curve to the analytical solution. A new situation arises in the planned accelerator driven systems (ADS). An ADS will be run in a subcritical mode, and the steady flux will be maintained by an accelerator based source. Such a source has statistical properties that are different from those of a steady radioactive source. As one example, in a currently running European Community project for ADS research, the MUSE project, the source will be a periodically pulsed neutron generator. The theory of Feynman-alpha method needs to be extended to such nonstationary sources. There are two ways of performing and evaluating such pulsed source experiments. One is to synchronise the detector time gate start with the beginning of an incoming pulse. The Feynman-alpha method has been elaborated for such a case recently. The other method can be called stochastic pulsing. It means that there is no synchronisation between the detector time gate start and the source pulsing, i.e. the start of each measurement is chosen at a random time. The analytical solution to the Feynman-alpha formula from this latter method is the subject of this report. We have obtained an analytical Feynman-alpha formula for the case of stochastic pulsing by two different methods. One is completely based on the use of the symbolic algebra code Mathematica, whereas the other is based on complex function techniques. Closed form solutions could be obtained by both methods

  8. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  9. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    International Nuclear Information System (INIS)

    Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.

    2015-01-01

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method
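
    The reduced-basis and HDG machinery of the paper is not reproduced here, but the core variance-reduction idea, spending most samples on a cheap approximation and correcting with a few expensive ones, can be sketched with a two-level Monte Carlo estimator. The model functions below are simple stand-ins chosen for illustration, not the PDE solvers of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def high_fidelity(z):
    """Stand-in for an expensive high-fidelity solve with stochastic parameter z."""
    return np.sin(z) + 0.05 * z ** 2

def low_fidelity(z):
    """Stand-in for a cheap reduced-order approximation of the same output."""
    return z - z ** 3 / 6 + 0.05 * z ** 2   # truncated-series surrogate

# Plain Monte Carlo estimate with N expensive samples.
N = 200
z_hf = rng.normal(0.0, 0.5, N)
plain = high_fidelity(z_hf).mean()

# Two-level estimator: many cheap samples of the surrogate plus a small number
# of expensive samples of the difference (high - low), mimicking the shift of
# computational burden described in the record. It stays unbiased because
# E[low] + E[high - low] = E[high].
M = 20_000
z_lf = rng.normal(0.0, 0.5, M)
z_diff = rng.normal(0.0, 0.5, N)
two_level = low_fidelity(z_lf).mean() + (high_fidelity(z_diff) - low_fidelity(z_diff)).mean()

print(f"plain MC: {plain:.4f}   two-level: {two_level:.4f}")
```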

  10. The impact of grid and spectral nudging on the variance of the near-surface wind speed

    DEFF Research Database (Denmark)

    Vincent, Claire Louise; Hahmann, Andrea N.

    2015-01-01

    Grid and spectral nudging are effective ways of preventing drift from large scale weather patterns in regional climate models. However, the effect of nudging on the wind-speed variance is unclear. In this study, the impact of grid and spectral nudging on near-surface and upper boundary layer wind...... nudging at and above 1150 m above ground level (AGL). Nested 5 km simulations are not nudged directly, but inherit boundary conditions from the 15 km experiments. Spatial and temporal spectra show that grid nudging causes smoothing of the wind in the 15 km domain at all wavenumbers, both at 1150 m AGL...... and near the surface where nudging is not applied directly, while spectral nudging mainly affects longer wavenumbers. Maps of mesoscale variance show spatial smoothing for both grid and spectral nudging, although the effect is less pronounced for spectral nudging. On the inner, 5 km domain, an indirect...

  11. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    Science.gov (United States)

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  12. Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations

    International Nuclear Information System (INIS)

    Dumonteil, E.; Malvagi, F.

    2012-01-01

    The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWR or large nuclear cores, the equilibrium is never found, or at least may take time to reach, and the variance estimate allowed by the CLT is underestimated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimation. Those two methods are based on Fourier spectral decomposition and on the lag-k autocorrelation calculation. A theoretical modeling of the autocorrelation function, based on Gauss-Markov stochastic processes, will also be presented. Tests will be performed with Tripoli4 on a PWR pin cell. (authors)
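
    A rough sketch of the lag-k autocorrelation diagnostic and the resulting variance correction is given below; it uses the standard inflation factor 1 + 2*sum(rho_k) for the variance of a mean of correlated cycle estimates, which is in the spirit of, but not identical to, the estimators developed in the paper. The AR(1) test series is synthetic.

```python
import numpy as np

def lag_autocorr(x, k):
    """Lag-k autocorrelation of a sequence of per-cycle estimates."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

def corrected_variance_of_mean(x, max_lag=50):
    """Naive and autocorrelation-corrected variance of the sample mean.

    The correction inflates the naive s^2/n by (1 + 2 * sum of positive
    autocorrelations), a standard remedy when cycles are not independent.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    naive = x.var(ddof=1) / n
    rho_sum = 0.0
    for k in range(1, min(max_lag, n // 2)):
        rho = lag_autocorr(x, k)
        if rho <= 0.0:   # truncate at the first non-positive lag
            break
        rho_sum += rho
    return naive, naive * (1.0 + 2.0 * rho_sum)

# Correlated per-cycle estimates from an AR(1) process (illustrative only).
rng = np.random.default_rng(6)
x = np.empty(5000); x[0] = 0.0
for t in range(1, x.size):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
naive, corrected = corrected_variance_of_mean(1.0 + 1e-3 * x)
print(f"naive var of mean: {naive:.2e}   corrected: {corrected:.2e}")
```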

  13. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
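
    A minimal sketch of the ANOVA-based computation for the balanced one-factor random-effects model is shown below: the within-patient mean square estimates the random component, and the between-patient mean square, after subtracting the within mean square and dividing by the number of fractions, estimates the systematic component. The patient counts and error magnitudes are invented for illustration; the confidence intervals discussed in the note are omitted.

```python
import numpy as np

def anova_variance_components(errors):
    """One-factor random-effects ANOVA estimates of setup-error components.

    errors: array of shape (n_patients, n_fractions) with balanced data.
    Returns (population mean, systematic SD, random SD).
    """
    errors = np.asarray(errors, dtype=float)
    p, n = errors.shape
    grand_mean = errors.mean()
    patient_means = errors.mean(axis=1)
    msb = n * np.sum((patient_means - grand_mean) ** 2) / (p - 1)          # between
    msw = np.sum((errors - patient_means[:, None]) ** 2) / (p * (n - 1))   # within
    sigma_sys2 = max((msb - msw) / n, 0.0)
    return grand_mean, np.sqrt(sigma_sys2), np.sqrt(msw)

# Synthetic balanced setup errors (mm): 30 patients, 5 fractions each.
rng = np.random.default_rng(7)
true_mu, true_sigma_sys, true_sigma_rand = 1.0, 2.0, 3.0
patient_offsets = rng.normal(true_mu, true_sigma_sys, (30, 1))
data = patient_offsets + rng.normal(0.0, true_sigma_rand, (30, 5))
mu, sigma_sys, sigma_rand = anova_variance_components(data)
print(f"mean {mu:.2f} mm, systematic {sigma_sys:.2f} mm, random {sigma_rand:.2f} mm")
```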

  14. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  15. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept as measure of the gain that can be obtained from a specific cross accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion compared to selection based on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, like selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviation compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.

  16. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.

  17. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    Science.gov (United States)

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.

  18. Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan

    2013-01-01

    Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.

  19. Subspace K-means clustering.

    Science.gov (United States)

    Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla

    2013-12-01

    To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).
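
    Subspace K-means itself is not sketched here, but the baseline it extends is easy to state: the plain K-means loop below alternates assignment and centroid updates on the full-dimensional data, with no reduced-space modelling of centroids or within-cluster residuals. The toy data are synthetic.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain K-means (the baseline compared against in the record above);
    this is not subspace K-means, which additionally models centroids and
    residuals in reduced spaces."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated Gaussian clusters in 5 dimensions (illustrative data).
rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])
labels, cents = kmeans(X, k=2)
print("cluster sizes:", np.bincount(labels))
```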

  20. Increased rhythmicity in hypertensive arterial smooth muscle is linked to transient receptor potential canonical channels

    DEFF Research Database (Denmark)

    Chen, Xiaoping; Yang, Dachun; Ma, Shuangtao

    2010-01-01

    Vasomotion describes oscillations of arterial vascular tone due to synchronized changes of intracellular calcium concentrations. Since increased calcium influx into vascular smooth muscle cells from spontaneously hypertensive rats (SHR) has been associated with variances of transient receptor pot...

  1. Optical coherence tomography to evaluate variance in the extent of carious lesions in depth.

    Science.gov (United States)

    Park, Kyung-Jin; Schneider, Hartmut; Ziebolz, Dirk; Krause, Felix; Haak, Rainer

    2018-05-03

    Evaluation of variance in the extent of carious lesions in depth at smooth surfaces within the same ICDAS code group using optical coherence tomography (OCT) in vitro and in vivo. (1) Verification/validation of OCT to assess non-cavitated caries: 13 human molars with ICDAS code 2 at smooth surfaces were imaged using OCT and light microscopy. Regions of interest (ROI) were categorized according to the depth of carious lesions. Agreement between histology and OCT was determined by unweighted Cohen's Kappa and Wilcoxon test. (2) Assessment of 133 smooth surfaces using ICDAS and OCT in vitro, 49 surfaces in vivo. ROI were categorized according to the caries extent (ICDAS: codes 0-4, OCT: scoring based on lesion depth). A frequency distribution of the OCT scores for each ICDAS code was determined. (1) Histology and OCT agreed moderately (κ = 0.54, p ≤ 0.001) with no significant difference between both methods (p = 0.25). The lesions (76.9%; 10 of 13) were equally scored. (2) In vitro, OCT revealed caries in 42% of ROI clinically assessed as sound. OCT detected dentin-caries in 40% of ROIs visually assessed as enamel-caries. In vivo, large differences between ICDAS and OCT were observed. Carious lesions of ICDAS codes 1 and 2 vary largely in their extent in depth.

  2. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  3. Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Dumonteil, E.; Malvagi, F. [Commissariat a l' Energie Atomique et Aux Energies Alternatives, CEA SACLAY DEN, Laboratoire de Transport Stochastique et Deterministe, 91191 Gif-sur-Yvette (France)

    2012-07-01

    The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWR or large nuclear cores, the equilibrium is never found, or at least may take time to reach, and the variance estimation allowed by the CLT is under-evaluated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, as measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimation. Those two methods are based on Fourier spectral decomposition and on the lag k autocorrelation calculation. A theoretical modeling of the autocorrelation function, based on Gauss-Markov stochastic processes, will also be presented. Tests will be performed with Tripoli4 on a PWR pin cell. (authors)
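
    The correction described above can be illustrated with a rough, self-contained sketch: the lag-k autocorrelation of the cycle-wise estimates is used to inflate the naive variance of the mean. This is only a generic illustration of the idea on simulated AR(1)-correlated cycle data, not the Tripoli4 implementation; the function names and the truncation rule are assumptions.

```python
import numpy as np

def lag_autocorr(x, k):
    """Sample autocorrelation of the cycle estimates x at lag k >= 1."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    return np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc)

def corrected_variance_of_mean(cycle_estimates, max_lag=50):
    """Naive variance of the mean inflated by the inter-cycle autocorrelation.

    Uses the standard factor 1 + 2 * sum_k (1 - k/N) * rho_k, truncating the
    sum at the first non-positive autocorrelation (a crude stopping rule).
    """
    x = np.asarray(cycle_estimates, dtype=float)
    n = len(x)
    naive = x.var(ddof=1) / n
    factor = 1.0
    for k in range(1, min(max_lag, n - 1)):
        rho = lag_autocorr(x, k)
        if rho <= 0.0:
            break
        factor += 2.0 * (1.0 - k / n) * rho
    return naive * factor, factor

# example with artificially correlated "cycles" (AR(1) noise around k_eff = 1.0)
rng = np.random.default_rng(0)
eps = rng.normal(size=5000)
keff = np.empty_like(eps)
keff[0] = 1.0
for i in range(1, len(eps)):
    keff[i] = 1.0 + 0.8 * (keff[i - 1] - 1.0) + 1e-4 * eps[i]

var_corr, inflation = corrected_variance_of_mean(keff)
print(f"naive sigma = {np.sqrt(keff.var(ddof=1) / len(keff)):.2e}, "
      f"corrected sigma = {np.sqrt(var_corr):.2e}, inflation = {inflation:.1f}")
```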

  4. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    . The genomic model commonly found in the literature, with marker effects affecting mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable...... of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted...... to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance is discussed and an extension of the McMC algorithm...

  5. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    Science.gov (United States)

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.

  6. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    International Nuclear Information System (INIS)

    Garcia-Pareja, S.; Vilches, M.; Lallena, A.M.

    2007-01-01

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the 'hot' regions of the accelerator, information which is essential for developing a source model for this therapy tool

  7. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  8. Exploring variance in residential electricity consumption: Household features and building properties

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars

    2012-01-01

    Highlights: ► Statistical analyses of variance are of considerable value in identifying key indicators for policy update. ► Variance in residential electricity use is partly explained by household features. ► Variance in residential electricity use is partly explained by building properties. ► Household behavior has a profound impact on individual electricity use. -- Abstract: Improved means of controlling electricity consumption plays an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes as well as to estimate the impact of household features and building properties in this respect using independent samples t-tests and one-way as well as univariate independent samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.

  9. Mean-Variance Portfolio Selection Problem with Stochastic Salary for a Defined Contribution Pension Scheme: A Stochastic Linear-Quadratic-Exponential Framework

    Directory of Open Access Journals (Sweden)

    Charles Nkeki

    2013-11-01

    Full Text Available This paper examines a mean-variance portfolio selection problem with stochastic salary and an inflation protection strategy in the accumulation phase of a defined contribution (DC) pension plan. The utility function is assumed to be quadratic. It is assumed that the flow of contributions made by the pension plan member (PPM) is invested into a market that is characterized by a cash account, an inflation-linked bond and a stock. In this paper, the inflation-linked bond is traded and used to hedge inflation risks associated with the investment. The aim of this paper is to maximize the expected final wealth and minimize its variance. The efficient frontier for the three classes of assets under the quadratic utility function, which enables pension plan members (PPMs) to decide their own wealth and risk in their investment profile at retirement, was obtained.

  10. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method

    Science.gov (United States)

    Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing

    2018-02-01

    In general, sound waves can cause the vibration of objects that are encountered in the traveling path. If we make a laser beam illuminate the rough surface of an object, it will be scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. This method allows us to select the proper pixels that have large variances of the gray-value variations over time, from a small region of the speckle patterns. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, our method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified by applying it to a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and requires more than an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal of time duration 1.876 s is recovered from various objects with a time consumption of only 5.38 s.
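
    A minimal sketch of the variance-based selection idea, assuming the speckle frames are already available as a (time, height, width) array; the pixel count and normalization are illustrative choices, not the authors' exact parameters.

```python
import numpy as np

def recover_sound(frames, n_pixels=200):
    """Recover a 1-D sound trace from a stack of speckle frames.

    frames : array of shape (T, H, W) recorded by a high-speed camera.
    Pixels with the largest temporal variance of their gray values are
    selected and their mean-removed time series are summed.
    """
    T, H, W = frames.shape
    flat = frames.reshape(T, H * W).astype(float)
    variances = flat.var(axis=0)
    idx = np.argsort(variances)[-n_pixels:]        # most "active" pixels
    selected = flat[:, idx]
    signal = (selected - selected.mean(axis=0)).sum(axis=1)
    return signal / np.max(np.abs(signal))         # normalize for playback

# usage sketch: frames would be loaded from a high-speed camera recording
# sound = recover_sound(frames, n_pixels=200)
```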

  11. Accounting for non-stationary variance in geostatistical mapping of soil properties

    NARCIS (Netherlands)

    Wadoux, Alexandre M.J.C.; Brus, Dick J.; Heuvelink, Gerard B.M.

    2018-01-01

    Simple and ordinary kriging assume a constant mean and variance of the soil variable of interest. This assumption is often implausible because the mean and/or variance are linked to terrain attributes, parent material or other soil forming factors. In kriging with external drift (KED)

  12. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  13. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ' Carlos Haya' , Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario ' Virgen de las Nieves' , Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, to investigate the 'hot' regions of the accelerator, an information which is basic to develop a source model for this therapy tool.

  14. Control Strategies for Smoothing of Output Power of Wind Energy Conversion Systems

    Science.gov (United States)

    Pratap, Alok; Urasaki, Naomitsu; Senju, Tomonobu

    2013-10-01

    This article presents a control method for output power smoothing of a wind energy conversion system (WECS) with a permanent magnet synchronous generator (PMSG) using the inertia of the wind turbine and pitch control. The WECS used in this article adopts an AC-DC-AC converter system. The generator-side converter controls the torque of the PMSG, while the grid-side inverter controls the DC-link and grid voltages. For the generator-side converter, the torque command is determined by using fuzzy logic. The inputs of the fuzzy logic are the operating point of the rotational speed of the PMSG and the difference between the wind turbine torque and the generator torque. By means of the proposed method, the generator torque is smoothed, and the kinetic energy stored by the inertia of the wind turbine can be utilized to smooth the output power fluctuations of the PMSG. In addition, the wind turbine's shaft stress is mitigated compared to a conventional maximum power point tracking control. Effectiveness of the proposed method is verified by the numerical simulations.

  15. Variance components for body weight in Japanese quails (Coturnix japonica

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.

  16. $h - p$ Spectral element methods for elliptic problems on non-smooth domains using parallel computers

    NARCIS (Netherlands)

    Tomar, S.K.

    2002-01-01

    It is well known that elliptic problems when posed on non-smooth domains, develop singularities. We examine such problems within the framework of spectral element methods and resolve the singularities with exponential accuracy.

  17. Increasing the genetic variance of rice protein through mutation breeding techniques

    International Nuclear Information System (INIS)

    Ismachin, M.

    1975-01-01

    The recommended rice variety in Indonesia, Pelita I/1, was treated with gamma rays at doses of 20 krad, 30 krad, and 40 krad. The seeds were also treated with EMS 1%. In the M2 generation, the protein content of seeds from the visible mutants and from the normal looking plants was analyzed by the DBC method. No significant increase in the genetic variance was found in the samples treated with 20 krad gamma, or in the normal looking plants treated with EMS 1%. The mean values of the treated samples were mostly significantly decreased compared with the mean value of the protein distribution in the untreated samples (control). Since a significant increase in genetic variance was also found in M2 normal looking plants treated with gamma rays at doses of 30 krad and 40 krad, selection for protein among these materials could be more valuable. (author)

  18. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    Science.gov (United States)

    2014-01-01

    Portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest, very successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results. PMID:24991645

  19. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    Science.gov (United States)

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    Portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest, very successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results.
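
    As a rough, heavily simplified sketch of how such a CCMV fitness with an entropy diversity term and a firefly-style move might be put together (toy data, each firefly attracted only to the current best, penalty-based constraint handling; none of this is the authors' exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def ccmv_fitness(x, mu, sigma, k, lam=0.5, entropy_min=0.9, penalty=100.0):
    """Fitness of a candidate weight vector for the cardinality constrained
    mean-variance model: lam * risk - (1 - lam) * return, plus a penalty
    for violating a minimum-entropy diversity constraint."""
    w = np.clip(x, 0.0, None)
    keep = np.argsort(w)[-k:]          # enforce cardinality k
    w_card = np.zeros_like(w)
    w_card[keep] = w[keep]
    if w_card.sum() == 0:
        return np.inf, w_card
    w_card /= w_card.sum()
    risk = w_card @ sigma @ w_card
    ret = mu @ w_card
    nz = w_card[w_card > 0]
    entropy = -np.sum(nz * np.log(nz))                 # diversity measure
    violation = max(0.0, entropy_min - entropy)
    return lam * risk - (1.0 - lam) * ret + penalty * violation, w_card

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.1):
    """Standard firefly update: xi moves toward the brighter firefly xj."""
    r2 = np.sum((xi - xj) ** 2)
    return xi + beta0 * np.exp(-gamma * r2) * (xj - xi) + alpha * rng.normal(size=xi.size)

# toy data: 8 assets, select k = 3
n, k = 8, 3
mu = rng.uniform(0.02, 0.12, n)
A = rng.normal(size=(n, n))
sigma = A @ A.T / n + 0.05 * np.eye(n)

swarm = rng.uniform(size=(20, n))
for _ in range(200):
    fitness = np.array([ccmv_fitness(x, mu, sigma, k)[0] for x in swarm])
    best = swarm[np.argmin(fitness)].copy()
    for i in range(len(swarm)):
        swarm[i] = firefly_move(swarm[i], best)

best_fit, best_w = ccmv_fitness(best, mu, sigma, k)
print("selected weights:", np.round(best_w, 3), "fitness:", round(best_fit, 4))
```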

  20. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
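
    A minimal illustration of continuizing a discrete score distribution with an Epanechnikov kernel follows; it ignores the mean- and variance-preserving adjustments used in operational kernel equating and uses a toy score distribution.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, K(u) = 0.75 * (1 - u^2) on |u| <= 1."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def continuize(scores, probs, h, grid):
    """Continuized density f(x) = sum_j p_j * K((x - x_j) / h) / h."""
    x = np.asarray(grid)[:, None]
    xj = np.asarray(scores)[None, :]
    return (np.asarray(probs)[None, :] * epanechnikov((x - xj) / h) / h).sum(axis=1)

# toy discrete score distribution on 0..20 (e.g. number-correct scores)
scores = np.arange(21)
probs = np.exp(-0.5 * ((scores - 12) / 4.0) ** 2)
probs /= probs.sum()

grid = np.linspace(-1, 22, 500)
density = continuize(scores, probs, h=1.5, grid=grid)
print("density integrates to ~1:", round(float((density * (grid[1] - grid[0])).sum()), 3))
```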

  1. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  2. A method of piecewise-smooth numerical branching

    Czech Academy of Sciences Publication Activity Database

    Ligurský, Tomáš; Renard, Y.

    2017-01-01

    Roč. 97, č. 7 (2017), s. 815-827 ISSN 1521-4001 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : numerical branching * piecewise smooth * steady-state problem * contact problem * Coulomb friction Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://onlinelibrary.wiley.com/doi/10.1002/zamm.201600219/epdf

  3. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental...... variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both, the additive genetic effects on the mean and those...... on environmental variance have an important influence upon the future economic performance of selected individuals...

  4. Comment on "Relative variance of the mean squared pressure in multimode media: rehabilitating former approaches" [J. Acoust. Soc. Am. 136, 2621-2629 (2014)].

    Science.gov (United States)

    Davy, John L; Weaver, Richard L

    2015-03-01

    Models for the statistics of responses in finite reverberant structures, and in particular, for the variance of the mean square pressure in reverberation rooms, have been studied for decades. It is therefore surprising that a recent communication has claimed that the literature has gotten the simplest of such calculations very wrong. Monsef, Cozza, Rodrigues, Cellard, and Durocher [(2014). J. Acoust. Soc. Am. 136, 2621-2629] have derived a modal-based expression for the relative variance that differs significantly from expressions that have been accepted since 1969. This Comment points out that the Monsef formula is clearly incorrect, and then for the interested reader, points out the subtle place where they made their mistake.

  5. MO-DE-207A-11: Sparse-View CT Reconstruction Via a Novel Non-Local Means Method

    International Nuclear Information System (INIS)

    Chen, Z; Qi, H; Wu, S; Xu, Y; Zhou, L

    2016-01-01

    Purpose: Sparse-view computed tomography (CT) reconstruction is an effective strategy to reduce the radiation dose delivered to patients. Due to the insufficiency of measurements, traditional non-local means (NLM) based reconstruction methods often lead to over-smoothing of image edges. To address this problem, an adaptive NLM reconstruction method based on rotational invariance (RIANLM) is proposed. Methods: The method consists of four steps: 1) Initializing parameters; 2) Algebraic reconstruction technique (ART) reconstruction using raw projection data; 3) Positivity constraint of the image reconstructed by ART; 4) Update of the reconstructed image by RIANLM filtering. In RIANLM, a novel similarity metric that is rotationally invariant is proposed and used to calculate the distance between two patches. In this way, any patch with a similar structure but different orientation to the reference patch receives a relatively large weight, which avoids an over-smoothed image. Moreover, the parameter h in RIANLM, which controls the decay of the weights, is adaptive to avoid over-smoothing, whereas in NLM it is not adaptive during the whole reconstruction process. The proposed method is named ART-RIANLM and validated on the Shepp-Logan phantom and clinical projection data. Results: In our experiments, the searching neighborhood size is set to 15 by 15 and the similarity window is set to 3 by 3. For the simulated case with a 256 by 256 Shepp-Logan phantom, the ART-RIANLM produces a reconstructed image with higher SNR (35.38 dB versus 24.00 dB) and lower MAE (0.0006 versus 0.0023) than ART-NLM. The visual inspection demonstrated that the proposed method could suppress artifacts or noises more effectively and preserve image edges better. Similar results were found for the clinical data case. Conclusion: A novel ART-RIANLM method for sparse-view CT reconstruction is presented with superior image quality. Compared to the conventional ART-NLM method, the SNR from ART-RIANLM increases by 47% and the MAE decreases by 74
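
    For orientation, a minimal conventional NLM filter (the baseline the method above modifies) is sketched below on a toy image; the comment marks where a rotation-invariant patch distance, as in RIANLM, would replace the plain Euclidean one. Parameters are illustrative, not those of the paper.

```python
import numpy as np

def nlm_filter(img, patch=1, search=5, h=0.1):
    """Minimal non-local means filter for a 2-D image.

    patch  : patch half-width (patch size is 2*patch+1)
    search : search-window half-width
    h      : decay parameter of the exponential weights
    """
    H, W = img.shape
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            ic, jc = i + pad, j + pad
            ref = padded[ic - patch:ic + patch + 1, jc - patch:jc + patch + 1]
            weights, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    cand = padded[ic + di - patch:ic + di + patch + 1,
                                  jc + dj - patch:jc + dj + patch + 1]
                    # plain Euclidean patch distance; a rotation-invariant
                    # metric (as in RIANLM) would replace this line
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / h ** 2)
                    weights += w
                    acc += w * padded[ic + di, jc + dj]
            out[i, j] = acc / weights
    return out

# usage sketch on a small noisy test image
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
noisy = img + 0.1 * rng.normal(size=img.shape)
denoised = nlm_filter(noisy)
print("error std before/after:", round((noisy - img).std(), 3), round((denoised - img).std(), 3))
```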

  6. Analysis of degree of nonlinearity and stochastic nature of HRV signal during meditation using delay vector variance method.

    Science.gov (United States)

    Reddy, L Ram Gopal; Kuntamalla, Srinivas

    2011-01-01

    Heart rate variability analysis is fast gaining acceptance as a potential non-invasive means of autonomic nervous system assessment in research as well as clinical domains. In this study, a new nonlinear analysis method is used to detect the degree of nonlinearity and stochastic nature of heart rate variability signals during two forms of meditation (Chi and Kundalini). The data obtained from an online and widely used public database (the MIT/BIH PhysioNet database) are used in this study. The method used is the delay vector variance (DVV) method, which is a unified method for detecting the presence of determinism and nonlinearity in a time series and is based upon the examination of local predictability of a signal. From the results it is clear that there is a significant change in the nonlinearity and stochastic nature of the signal before and during the meditation (p value > 0.01). During Chi meditation there is an increase in the stochastic nature and a decrease in the nonlinear nature of the signal. There is a significant decrease in the degree of nonlinearity and stochastic nature during Kundalini meditation.
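
    A bare-bones version of the DVV idea (target variance of delay-vector neighbourhoods as a function of neighbourhood size) is sketched below on synthetic series; the full method additionally compares the curve against linear surrogates, which is omitted here.

```python
import numpy as np

def dvv(x, m=3, n_spans=20, min_set=30):
    """Delay vector variance curve of time series x with embedding dimension m.

    Returns standardized spans and normalized target variances (sigma*^2);
    values staying close to 1 over all spans indicate a noise-like signal,
    low minima indicate deterministic (possibly nonlinear) structure.
    """
    x = np.asarray(x, dtype=float)
    N = len(x) - m
    dv = np.stack([x[i:i + m] for i in range(N)])    # delay vectors
    targets = x[m:m + N]                             # next-sample targets
    dists = np.sqrt(((dv[:, None, :] - dv[None, :, :]) ** 2).sum(-1))
    mu, sd = dists.mean(), dists.std()
    spans = np.linspace(mu - 2 * sd, mu + 2 * sd, n_spans)
    total_var = targets.var()
    curve = []
    for r in spans:
        local = [targets[dists[k] <= r].var()
                 for k in range(N) if (dists[k] <= r).sum() >= min_set]
        curve.append(np.mean(local) / total_var if local else np.nan)
    return (spans - mu) / sd, np.array(curve)

# toy comparison: linear stochastic AR(1) vs. nonlinear deterministic logistic map
rng = np.random.default_rng(0)
ar = np.zeros(600)
for i in range(1, 600):
    ar[i] = 0.7 * ar[i - 1] + rng.normal()
logistic = np.zeros(600)
logistic[0] = 0.3
for i in range(1, 600):
    logistic[i] = 3.9 * logistic[i - 1] * (1 - logistic[i - 1])

for name, series in [("AR(1)", ar), ("logistic map", logistic)]:
    _, tv = dvv(series)
    print(f"{name}: minimum normalized target variance = {np.nanmin(tv):.3f}")
```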

  7. Second-order numerical methods for multi-term fractional differential equations: Smooth and non-smooth solutions

    Science.gov (United States)

    Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em

    2017-12-01

    Starting with the asymptotic expansion of the error equation of the shifted Grünwald-Letnikov formula, we derive a new modified weighted shifted Grünwald-Letnikov (WSGL) formula by introducing appropriate correction terms. We then apply one special case of the modified WSGL formula to solve multi-term fractional ordinary and partial differential equations, and we prove the linear stability and second-order convergence for both smooth and non-smooth solutions. We show theoretically and numerically that numerical solutions up to certain accuracy can be obtained with only a few correction terms. Moreover, the correction terms can be tuned according to the fractional derivative orders without explicitly knowing the analytical solutions. Numerical simulations verify the theoretical results and demonstrate that the new formula leads to better performance compared to other known numerical approximations with similar resolution.

  8. Analysis of latent variance reduction methods in phase space Monte Carlo calculations for 6, 10 and 18 MV photons by using MCNP code

    International Nuclear Information System (INIS)

    Ezzati, A.O.; Sohrabpour, M.

    2013-01-01

    In this study, azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods are implemented in the MCNPX2.4 source code. First of all, the efficiency of these methods was compared for two tallying methods. The APRS is more efficient than the APR method in track length estimator tallies. However, in the energy deposition tally, both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. The APRS relative efficiency contours were obtained. These contours reveal that, as the photon energy increases, the contour depth and the surrounding areas increase further. The relative efficiency contours indicated that the variance reduction factor is position and energy dependent. The relative efficiency contours for the out-of-field voxels showed that the latent variance reduction methods increased the Monte Carlo (MC) simulation efficiency in those voxels. The APR and APRS average variance reduction factors had differences of less than 0.6% for a splitting number of 1000. -- Highlights: ► The efficiency of the APR and APRS methods was compared for two tallying methods. ► The APRS is more efficient than the APR method in track length estimator tallies. ► In the energy deposition tally, both methods have nearly the same efficiency. ► Variance reduction factors of these methods are position and energy dependent.

  9. Smooth halos in the cosmic web

    Energy Technology Data Exchange (ETDEWEB)

    Gaite, José, E-mail: jose.gaite@upm.es [Physics Dept., ETSIAE, IDR, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, E-28040 Madrid (Spain)

    2015-04-01

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economy and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.

  10. Smooth halos in the cosmic web

    International Nuclear Information System (INIS)

    Gaite, José

    2015-01-01

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economy and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness

  11. MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE MEAN ...

    African Journals Online (AJOL)

    eobe

    development of mean of median absolute derivation technique based on .... of noise mean to estimate the speckle noise variance. Noise mean property ..... Foraging Optimization," International Journal of Advanced ...

  12. Smoothed Particle Inference: A Kilo-Parametric Method for X-ray Galaxy Cluster Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, John R.; Marshall, P.J.; /KIPAC, Menlo Park; Andersson, K.; /Stockholm U. /SLAC

    2005-08-05

    We propose an ambitious new method that models the intracluster medium in clusters of galaxies as a set of X-ray emitting smoothed particles of plasma. Each smoothed particle is described by a handful of parameters including temperature, location, size, and elemental abundances. Hundreds to thousands of these particles are used to construct a model cluster of galaxies, with the appropriate complexity estimated from the data quality. This model is then compared iteratively with X-ray data in the form of adaptively binned photon lists via a two-sample likelihood statistic and iterated via Markov Chain Monte Carlo. The complex cluster model is propagated through the X-ray instrument response using direct sampling Monte Carlo methods. Using this approach the method can reproduce many of the features observed in the X-ray emission in a less assumption-dependent way than traditional analyses, and it allows for a more detailed characterization of the density, temperature, and metal abundance structure of clusters. Multi-instrument X-ray analyses and simultaneous X-ray, Sunyaev-Zeldovich (SZ), and lensing analyses are a straightforward extension of this methodology. Significant challenges still exist in understanding the degeneracy in these models and the statistical noise induced by the complexity of the models.

  13. Window least squares method applied to statistical noise smoothing of positron annihilation data

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.; Barbiellini, B.; Hoffmann, L.; Manuel, A.A.; Peter, M.

    1993-06-01

    The paper deals with the off-line processing of experimental data obtained by the two-dimensional angular correlation of the electron-positron annihilation radiation (2D-ACAR) technique on high-temperature superconductors. A piecewise continuous window least squares (WLS) method devoted to the statistical noise smoothing of 2D-ACAR data, under close control of the crystal reciprocal lattice periodicity, is derived. Reliability evaluation of the constant local weight WLS smoothing formula (CW-WLSF) shows that consistent processing of 2D-ACAR data by CW-WLSF is possible. CW-WLSF analysis of 2D-ACAR data collected on untwinned YBa2Cu3O7-δ single crystals yields a significantly improved signature of the Fermi surface ridge at second Umklapp processes and resolves, for the first time, the ridge signature at third Umklapp processes. (author). 24 refs, 9 figs
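
    The constant-weight window least-squares idea is closely related to Savitzky-Golay filtering; a 1-D toy illustration using scipy follows (this stands in for, and is not, the authors' 2-D periodicity-controlled formula).

```python
import numpy as np
from scipy.signal import savgol_filter

# synthetic noisy "spectrum" standing in for a 1-D cut through 2D-ACAR data
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 400)
clean = np.exp(-((x - 0.5) / 0.08) ** 2) + 0.3 * np.exp(-((x - 0.75) / 0.03) ** 2)
noisy = clean + rng.normal(scale=0.05, size=x.size)

# local least-squares fit of a cubic polynomial in a 21-point sliding window
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)

print("RMS error before:", np.sqrt(np.mean((noisy - clean) ** 2)).round(4))
print("RMS error after :", np.sqrt(np.mean((smoothed - clean) ** 2)).round(4))
```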

  14. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...

  15. A novel MPPT method for enhancing energy conversion efficiency taking power smoothing into account

    International Nuclear Information System (INIS)

    Liu, Jizhen; Meng, Hongmin; Hu, Yang; Lin, Zhongwei; Wang, Wei

    2015-01-01

    Highlights: • We discuss the disadvantages of the conventional OTC MPPT method. • We study the relationship between enhancing efficiency and power smoothing. • The conversion efficiency is enhanced and the volatility of power is suppressed. • Small signal analysis is used to verify the effectiveness of the proposed method. - Abstract: With the increasing capacity of wind energy conversion systems (WECS), the rotational inertia of the wind turbine is becoming larger, and the efficiency of energy conversion is significantly reduced by this large inertia. This paper proposes a novel maximum power point tracking (MPPT) method to enhance the efficiency of energy conversion for large-scale wind turbines. Since improving the efficiency may increase the fluctuations of output power, power smoothing is considered as the second control objective. A T-S fuzzy inference system (FIS) is adapted to reduce the fluctuations according to the volatility of wind speed and accelerated rotor speed by regulating the compensation gain. To verify the effectiveness, stability and good dynamic performance of the new method, mechanism analyses, small signal analyses, and simulation studies are carried out based on a doubly-fed induction generator (DFIG) wind turbine, respectively. Study results show that both the response speed and the efficiency of the proposed method are increased. In addition, the extra fluctuations of output power caused by the high efficiency are reduced effectively by the proposed method with FIS

  16. Analytic solution to variance optimization with no short positions

    Science.gov (United States)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric \
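
    A numerical counterpart of the long-only variance minimization is easy to set up (the sketch below uses a toy covariance and a generic solver; it does not reproduce the paper's analytic replica solution):

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_long_only(sigma):
    """Minimize w' Sigma w subject to sum(w) = 1 and w >= 0 (no short selling)."""
    n = sigma.shape[0]
    w0 = np.full(n, 1.0 / n)
    res = minimize(
        lambda w: w @ sigma @ w,
        w0,
        jac=lambda w: 2.0 * sigma @ w,
        bounds=[(0.0, None)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# toy covariance estimated from T observations of N assets (here r = N/T = 0.5)
rng = np.random.default_rng(0)
N, T = 20, 40
returns = rng.normal(scale=0.01, size=(T, N)) * rng.uniform(0.5, 2.0, size=N)
sigma = np.cov(returns, rowvar=False)

w = min_variance_long_only(sigma)
print("non-zero weights:", int((w > 1e-6).sum()), "of", N)  # the constrained solution is typically sparse
```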

  17. The Requirement of a Positive Definite Covariance Matrix of Security Returns for Mean-Variance Portfolio Analysis: A Pedagogic Illustration

    Directory of Open Access Journals (Sweden)

    Clarence C. Y. Kwan

    2010-07-01

    Full Text Available This study considers, from a pedagogic perspective, a crucial requirement for the covariance matrix of security returns in mean-variance portfolio analysis. Although the requirement that the covariance matrix be positive definite is fundamental in modern finance, it has not received any attention in standard investment textbooks. Being unaware of the requirement could cause confusion for students over some strange portfolio results that are based on seemingly reasonable input parameters. This study considers the requirement both informally and analytically. Electronic spreadsheet tools for constrained optimization and basic matrix operations are utilized to illustrate the various concepts involved.
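
    A short, spreadsheet-free illustration of the requirement: positive definiteness can be checked with a Cholesky factorization (or the smallest eigenvalue) before any mean-variance computation; the three-asset correlation matrix below is a deliberately inconsistent, hypothetical input.

```python
import numpy as np

def is_positive_definite(cov):
    """True if the covariance matrix admits a Cholesky factorization."""
    try:
        np.linalg.cholesky(cov)
        return True
    except np.linalg.LinAlgError:
        return False

# a seemingly reasonable 3-asset input that is NOT a valid covariance matrix:
# pairwise correlations of 0.9, 0.9 and -0.9 cannot coexist
sd = np.array([0.2, 0.25, 0.3])
corr_bad = np.array([[1.0, 0.9, 0.9],
                     [0.9, 1.0, -0.9],
                     [0.9, -0.9, 1.0]])
cov_bad = np.outer(sd, sd) * corr_bad

print("positive definite:", is_positive_definite(cov_bad))
print("smallest eigenvalue:", np.linalg.eigvalsh(cov_bad).min().round(3))  # negative
```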

  18. Mean-Reverting Portfolio With Budget Constraint

    Science.gov (United States)

    Zhao, Ziping; Palomar, Daniel P.

    2018-05-01

    This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.
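
    As a hedged illustration of the general idea (not the authors' formulation), one simple mean-reversion proxy is the portfolio's lag-1 autocorrelation, which can be minimized subject to a variance normalization via a generalized eigenvalue problem and then scaled to a budget:

```python
import numpy as np
from scipy.linalg import eigh

def mean_reverting_weights(prices, budget=1.0):
    """Weights minimizing the portfolio's lag-1 autocovariance relative to its
    variance: min_w (w' A w) / (w' M w), with A the symmetrized lag-1
    autocovariance and M the covariance of the (centered) price series."""
    y = np.asarray(prices, dtype=float)
    y = y - y.mean(axis=0)
    M = y.T @ y / len(y)
    A_raw = y[:-1].T @ y[1:] / (len(y) - 1)
    A = 0.5 * (A_raw + A_raw.T)
    vals, vecs = eigh(A, M)                 # generalized symmetric eigenproblem
    w = vecs[:, 0]                          # smallest ratio => strongest mean reversion
    return budget * w / np.abs(w).sum()     # illustrative budget normalization

# toy example: three assets sharing a common random-walk trend plus stationary noise,
# so some linear combination of them is mean-reverting
rng = np.random.default_rng(0)
T = 2000
common = np.cumsum(rng.normal(size=T))
prices = np.column_stack([common + rng.normal(scale=0.5, size=T) for _ in range(3)])

w = mean_reverting_weights(prices)
spread = prices @ w
print("lag-1 autocorr of a single asset:", np.corrcoef(prices[1:, 0], prices[:-1, 0])[0, 1].round(3))
print("lag-1 autocorr of the spread    :", np.corrcoef(spread[1:], spread[:-1])[0, 1].round(3))
```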

  19. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  20. TAX SMOOTHING: TESTS ON INDONESIAN DATA

    Directory of Open Access Journals (Sweden)

    Rudi Kurniawan

    2011-01-01

    Full Text Available This paper contributes to the literature of public debt management by testing for tax smoothing behaviour in Indonesia. Tax smoothing means that the government smooths the tax rate across all future time periods to minimize the distortionary costs of taxation over time for a given path of government spending. In a stochastic economy with an incomplete bond market, tax smoothing implies that the tax rate approximates a random walk and changes in the tax rate are nearly unpredictable. For that purpose, two tests were performed. First, random walk behaviour of the tax rate was examined by undertaking unit root tests. The null hypothesis of a unit root cannot be rejected, indicating that the tax rate is nonstationary and, hence, follows a random walk. Second, the predictability of the tax rate was examined by regressing changes in the tax rate on its own lagged values and also on lagged values of changes in the government expenditure ratio and growth of real output. These are found to be insignificant in predicting changes in the tax rate. Taken together, the present evidence seems to be consistent with tax smoothing and therefore provides support for this theory.
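
    The two tests can be sketched on simulated data (standing in for the Indonesian series, which are not reproduced here) with standard time-series tools:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# simulated annual series: a random-walk tax rate (the tax-smoothing null),
# a government expenditure ratio and real output growth
rng = np.random.default_rng(0)
T = 60
tax_rate = 0.15 + np.cumsum(rng.normal(scale=0.003, size=T))
gov_ratio = 0.20 + rng.normal(scale=0.01, size=T)
growth = 0.05 + rng.normal(scale=0.02, size=T)

# Test 1: unit root (random walk) behaviour of the tax rate
adf_stat, pvalue, *_ = adfuller(tax_rate)
print(f"ADF statistic = {adf_stat:.2f}, p-value = {pvalue:.2f}")  # fail to reject the unit root

# Test 2: are changes in the tax rate predictable from lagged information?
df = pd.DataFrame({
    "d_tax": np.diff(tax_rate),
    "lag_d_tax": np.r_[np.nan, np.diff(tax_rate)[:-1]],
    "lag_d_gov": np.r_[np.nan, np.diff(gov_ratio)[:-1]],
    "lag_growth": growth[:-1],
}).dropna()
X = sm.add_constant(df[["lag_d_tax", "lag_d_gov", "lag_growth"]])
model = sm.OLS(df["d_tax"], X).fit()
print(model.summary().tables[1])  # insignificant coefficients are consistent with tax smoothing
```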

  1. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg2, implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  2. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  3. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the posteroanterior wrist radiographs of 95 patients who were diagnosed with scaphoid fracture. All fractures with displacement less than 1 mm treated conservatively were included. Ulnar variance was measured in the standard posteroanterior wrist radiographs of all 95 patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (+/- 0.85) mm (CI -2.25 to -0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (+/- 1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance between the groups with ulnar variance less than -1 mm and with ulnar variance greater than -1 mm. Patients with ulnar variance less than -1 mm had a greater risk of developing scaphoid nonunion, OR 4.58 (CI 1.51 to 13.89), p<.007. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.

  4. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient

  5. Smoothed Analysis of Local Search Algorithms

    NARCIS (Netherlands)

    Manthey, Bodo; Dehne, Frank; Sack, Jörg-Rüdiger; Stege, Ulrike

    2015-01-01

    Smoothed analysis is a method for analyzing the performance of algorithms for which classical worst-case analysis fails to explain the performance observed in practice. Smoothed analysis has been applied to explain the performance of a variety of algorithms in the last years. One particular class of

  6. Surface smoothness

    DEFF Research Database (Denmark)

    Tummala, Sudhakar; Dam, Erik B.

    2010-01-01

    accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used....... We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers....

  7. A systematic method of smooth switching LPV controllers design for a morphing aircraft

    Directory of Open Access Journals (Sweden)

    Jiang Weilai

    2015-12-01

    Full Text Available This paper is concerned with a systematic method of smooth switching linear parameter-varying (LPV) controller design for a morphing aircraft with a variable wing sweep angle. The morphing aircraft is modeled as an LPV system, whose scheduling parameter is the variation rate of the wing sweep angle. By dividing the scheduling parameter set into subsets with overlaps, output feedback controllers which consider smooth switching are designed, and the controllers in overlapped subsets are interpolated from two adjacent subsets. A switching law without constraint on the average dwell time is obtained, which makes the conclusion less conservative. Furthermore, a systematic algorithm is developed to improve the efficiency of the controller design process. The parameter set is divided into the fewest subsets on the premise that the closed-loop system has a desired performance. Simulation results demonstrate the effectiveness of this approach.

  8. Stochastic Fractional Programming Approach to a Mean and Variance Model of a Transportation Problem

    Directory of Open Access Journals (Sweden)

    V. Charles

    2011-01-01

    Full Text Available In this paper, we propose a stochastic programming model, which considers a ratio of two nonlinear functions and probabilistic constraints. In the former, only the expected model has been proposed, without taking variability in the model into account. On the other hand, in the variance model, variability played a vital role without considering its counterpart, namely, the expected model. Further, the expected model optimizes the ratio of two linear cost functions, whereas the variance model optimizes the ratio of two non-linear functions; that is, the stochastic nature of the numerator and denominator, together with consideration of both expectation and variability, leads to a non-linear fractional program. In this paper, a transportation model with a stochastic fractional programming (SFP) problem approach is proposed, which strikes a balance between the previous models available in the literature.

  9. Analysis of rhythmic variance - ANORVA. A new simple method for detecting rhythms in biological time series

    Directory of Open Access Journals (Sweden)

    Peter Celec

    2004-01-01

    Full Text Available Cyclic variations of variables are ubiquitous in biomedical science. A number of methods for detecting rhythms have been developed, but they are often difficult to interpret. A simple procedure for detecting cyclic variations in biological time series and quantification of their probability is presented here. Analysis of rhythmic variance (ANORVA) is based on the premise that the variance in groups of data from rhythmic variables is low when a time distance of one period exists between the data entries. A detailed stepwise calculation is presented, including data entry and preparation, variance calculation, and difference testing. An example of the application of the procedure is provided, and a real dataset of the number of papers published per day in January 2003 using selected keywords is compared to randomized datasets. Randomized datasets show no cyclic variations. The number of papers published daily, however, shows a clear and significant (p<0.03) circaseptan (period of 7 days) rhythm, probably of social origin
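
    A minimal version of this procedure, on a toy daily series, groups entries sharing the same phase for each candidate period, averages the within-group variances, and compares the result with random permutations; details such as the exact test statistic are illustrative.

```python
import numpy as np

def anorva_statistic(x, period):
    """Mean within-group variance when entries separated by `period` are grouped."""
    x = np.asarray(x, dtype=float)
    groups = [x[phase::period] for phase in range(period)]
    return np.mean([g.var() for g in groups if len(g) > 1])

def anorva_pvalue(x, period, n_perm=1000, seed=0):
    """Share of random permutations with a within-group variance as low as observed."""
    rng = np.random.default_rng(seed)
    observed = anorva_statistic(x, period)
    perms = [anorva_statistic(rng.permutation(x), period) for _ in range(n_perm)]
    return observed, np.mean(np.array(perms) <= observed)

# toy daily series with a weekly (circaseptan) rhythm plus noise
rng = np.random.default_rng(1)
days = np.arange(31)
series = 10 + 3 * np.sin(2 * np.pi * days / 7) + rng.normal(scale=1.0, size=days.size)

for period in (5, 6, 7, 8):
    stat, p = anorva_pvalue(series, period)
    print(f"period {period}: within-group variance = {stat:.2f}, p = {p:.3f}")
```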

  10. Investigation of noise in gear transmissions by the method of mathematical smoothing of experiments

    Science.gov (United States)

    Sheftel, B. T.; Lipskiy, G. K.; Ananov, P. P.; Chernenko, I. K.

    1973-01-01

    A rotatable central component smoothing method is used to analyze rotating gear noise spectra. A matrix is formulated in which the randomized rows correspond to various tests and the columns to factor values. Canonical analysis of the obtained regression equation permits the calculation of optimal speed and load at a previously assigned noise level.

  11. Raman spectroscopy denoising based on smoothing filter combined with EEMD algorithm

    Science.gov (United States)

    Tian, Dayong; Lv, Xiaoyi; Mo, Jiaqing; Chen, Chen

    2018-02-01

    In the extraction of Raman spectra, the signal is affected by a variety of background noises, which weaken the useful information in the spectrum or even submerge it in noise, so spectral denoising is very important. The traditional ensemble empirical mode decomposition (EEMD) method removes noise by discarding the IMF components that mainly contain noise; however, this loses some details of the Raman signal. To address this shortcoming of the EEMD algorithm, a denoising method combining a smoothing filter with EEMD is proposed in this paper. First, EEMD is used to decompose the noisy Raman signal into several IMF components. Then, the components that mainly contain noise are selected using the self-correlation function, and the smoothing filter is used to remove the noise from these components. Finally, the sum of the denoised components is added to the remaining components to obtain the final denoised signal. The experimental results show that, compared with the traditional denoising algorithm, the signal-to-noise ratio (SNR), root mean square error (RMSE), and correlation coefficient are significantly improved by the proposed smoothing filter combined with EEMD.
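    A hedged sketch of the workflow above. The EEMD step is abstracted behind an `eemd_decompose` callable (for example from a dedicated EMD package), which is an assumption, as are the lag-1 autocorrelation threshold and the moving-average window used as the smoothing filter.

```python
import numpy as np

def moving_average(x, window=5):
    """Simple smoothing filter applied to the noise-dominated IMFs."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def lag1_autocorr(x):
    """Lag-1 self-correlation; noise-dominated IMFs tend to have low values."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def denoise_raman(signal, eemd_decompose, ac_threshold=0.5, window=5):
    """Smoothing-filter-plus-EEMD denoising as outlined in the abstract.

    `eemd_decompose` is a placeholder: it must return a 2-D array of IMFs
    (one IMF per row) for the given 1-D signal.
    """
    imfs = eemd_decompose(signal)
    cleaned = []
    for imf in imfs:
        if lag1_autocorr(imf) < ac_threshold:      # mainly noise
            cleaned.append(moving_average(imf, window))
        else:                                      # mainly signal
            cleaned.append(imf)
    # Recombine the smoothed noise components with the remaining components.
    return np.sum(cleaned, axis=0)
```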

  12. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  13. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  14. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically...... normal and efficient estimator....

  15. Face-based smoothed finite element method for real-time simulation of soft tissue

    Science.gov (United States)

    Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane

    2017-03-01

    In soft tissue surgery, a tumor and other anatomical structures are usually located using the preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling of the soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases, the method allows for reducing the number of degrees of freedom, while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has accuracy similar to that of the standard FEM in the simulations of the brain shift and of the kidney's deformation.

  16. Modeling the dispersion effects of contractile fibers in smooth muscles

    Science.gov (United States)

    Murtada, Sae-Il; Kroon, Martin; Holzapfel, Gerhard A.

    2010-12-01

    Micro-structurally based models for smooth muscle contraction are crucial for a better understanding of pathological conditions such as atherosclerosis, incontinence and asthma. It is meaningful that models consider the underlying mechanical structure and the biochemical activation. Hence, a simple mechanochemical model is proposed that includes the dispersion of the orientation of smooth muscle myofilaments and that is capable to capture available experimental data on smooth muscle contraction. This allows a refined study of the effects of myofilament dispersion on the smooth muscle contraction. A classical biochemical model is used to describe the cross-bridge interactions with the thin filament in smooth muscles in which calcium-dependent myosin phosphorylation is the only regulatory mechanism. A novel mechanical model considers the dispersion of the contractile fiber orientations in smooth muscle cells by means of a strain-energy function in terms of one dispersion parameter. All model parameters have a biophysical meaning and may be estimated through comparisons with experimental data. The contraction of the middle layer of a carotid artery is studied numerically. Using a tube the relationships between the internal pressure and the stretches are investigated as functions of the dispersion parameter, which implies a strong influence of the orientation of smooth muscle myofilaments on the contraction response. It is straightforward to implement this model in a finite element code to better analyze more complex boundary-value problems.

  17. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing the perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk.

  18. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  19. Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues

    International Nuclear Information System (INIS)

    Yang, M; Zhu, X R; Mohan, R; Dong, L; Virshup, G; Clayton, J

    2010-01-01

    We discovered an empirical relationship between the logarithm of mean excitation energy (ln I_m) and the effective atomic number (EAN) of human tissues, which allows for computing patient-specific proton stopping power ratios (SPRs) using dual-energy CT (DECT) imaging. The accuracy of the DECT method was evaluated for 'standard' human tissues as well as their variance. The DECT method was compared to the existing standard clinical practice, a procedure introduced by Schneider et al. at the Paul Scherrer Institute (the stoichiometric calibration method). In this simulation study, SPRs were derived from calculated CT numbers of known material compositions, rather than from measurement. For standard human tissues, both methods achieved good accuracy with the root-mean-square (RMS) error well below 1%. For human tissues with small perturbations from standard human tissue compositions, the DECT method was shown to be less sensitive than the stoichiometric calibration method. The RMS error remained below 1% for most cases using the DECT method, which implies that the DECT method might be more suitable for measuring patient-specific tissue compositions to improve the accuracy of treatment planning for charged particle therapy. In this study, the effects of CT imaging artifacts due to the beam hardening effect, scatter, noise, patient movement, etc. were not analyzed. The true potential of the DECT method achieved in theoretical conditions may not be fully achievable in clinical settings. Further research and development may be needed to take advantage of the DECT method to characterize individual human tissues.

  20. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have a lower mean and variance compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  1. Asymmetries in conditional mean and variance: Modelling stock returns by asMA-asQGARCH

    NARCIS (Netherlands)

    Brännäs, K.; de Gooijer, J.G.

    2000-01-01

    The asymmetric moving average model (asMA) is extended to allow for asymmetric quadratic conditional heteroskedasticity (asQGARCH). The asymmetric parametrization of the conditional variance encompasses the quadratic GARCH model of Sentana (1995). We introduce a framework for testing asymmetries

  2. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  3. Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.

    Science.gov (United States)

    Vera, J Fernando; Macías, Rodrigo

    2017-06-01

    One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters, in the general situation in which the available information for clustering is a one-mode N × N dissimilarity matrix describing the objects. In this framework, p and the coordinates of points are usually unknown, and the application of criteria originally formulated for two-mode data sets is dependent on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set, in general, for unequal-sized clusters and for low dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.
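    A rough one-mode illustration of the dispersion decomposition described above: within-block and between-block dispersion are computed directly from the dissimilarity matrix for a given partition, so a criterion can be compared across candidate numbers of clusters. The scaling convention (squared dissimilarities divided by twice the block size) mirrors the usual K-means decomposition and is an assumption, not necessarily the exact criterion of the paper.

```python
import numpy as np

def within_block_dispersion(D, labels):
    """Within-block dispersion from a one-mode dissimilarity matrix D
    (assumed to hold squared dissimilarities) and cluster labels."""
    W = 0.0
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)
        block = D[np.ix_(idx, idx)]
        W += block.sum() / (2.0 * idx.size)   # each pair is counted twice
    return W

def between_block_dispersion(D, labels):
    n = D.shape[0]
    total = D.sum() / (2.0 * n)
    return total - within_block_dispersion(D, labels)

# Usage: evaluate W(K) (or a ratio such as between/within) over partitions
# produced for a range of K and look for an elbow or a maximum.
```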

  4. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, since the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  5. Noise variance analysis using a flat panel x-ray detector: A method for additive noise assessment with application to breast CT applications

    Energy Technology Data Exchange (ETDEWEB)

    Yang Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M. [Department of Radiology, University of California, Davis Medical Center, 4860 Y Street, Suite 3100 Ellison Building, Sacramento, California 95817 (United States); Department of Radiology, University of California, Davis Medical Center, 4860 Y Street, Suite 3100 Ellison Building, Sacramento, California 95817 (United States) and Department of Biomedical Engineering, University of California, Davis, Davis, California, 95616 (United States)

    2010-07-15

    Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system.
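    A sketch of how a simplified linear mean-variance model of this kind can be fitted from repeated frames and used to quantify the fractional additive (signal-independent) noise; the function and coefficient names are illustrative, not taken from the paper.

```python
import numpy as np

def fit_linear_noise_model(pixel_means, pixel_variances):
    """Fit variance ~ sigma2_add + k * mean by least squares.

    The per-pixel means and variances are computed over repeated frames
    acquired at several exposure (dose) levels.
    """
    m = np.asarray(pixel_means, dtype=float)
    v = np.asarray(pixel_variances, dtype=float)
    A = np.column_stack([np.ones_like(m), m])
    (sigma2_add, k), *_ = np.linalg.lstsq(A, v, rcond=None)
    return sigma2_add, k

def fractional_additive_noise(sigma2_add, k, mean_signal):
    """Fraction of the total pixel noise variance due to additive
    (signal-independent) noise at a given mean signal level."""
    return sigma2_add / (sigma2_add + k * mean_signal)
```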

  6. Argentine Population Genetic Structure: Large Variance in Amerindian Contribution

    Science.gov (United States)

    Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.

    2011-01-01

    Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individuals members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183

  7. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  8. Modeling, analysis and comparison of TSR and OTC methods for MPPT and power smoothing in permanent magnet synchronous generator-based wind turbines

    International Nuclear Information System (INIS)

    Nasiri, M.; Milimonfared, J.; Fathi, S.H.

    2014-01-01

    Highlights: • Small signal modeling of a PMSG wind turbine with two controllers is introduced. • Pole and zero analysis of the OTC and TSR methods is performed. • Generator output power under varying wind speed in a PMSG wind turbine is studied. • The MPPT capability of the OTC and TSR methods under wind speed variations is compared. • The power smoothing capability and reduction of mechanical stress of both methods are studied. - Abstract: This paper presents small signal modeling of a direct-driven permanent magnet synchronous generator (PMSG) based wind turbine which is connected to the grid via back-to-back converters. The proposed small signal model includes two maximum power point tracking (MPPT) controllers: tip speed ratio (TSR) control and optimal torque control (OTC). These methods are analytically compared to illustrate their MPPT and power smoothing capability. Then, to compare the MPPT and power smoothing operation of the mentioned methods, simulations are performed in MATLAB/Simulink software. From the simulation results, OTC is highly effective at power smoothing and performs well in extracting maximum power from the wind, whereas TSR control responds quickly to wind speed variations at the expense of higher fluctuations due to its non-minimum phase characteristic.
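    The two MPPT laws compared above reduce to compact control references. The sketch below uses the standard textbook forms (OTC: torque reference proportional to the square of rotor speed; TSR: rotor speed reference proportional to measured wind speed); the turbine parameters are placeholders, not values from the paper.

```python
import numpy as np

RHO = 1.225        # air density [kg/m^3]
R = 40.0           # rotor radius [m]            (placeholder value)
CP_MAX = 0.48      # maximum power coefficient   (placeholder value)
LAMBDA_OPT = 8.1   # optimal tip speed ratio     (placeholder value)

# Optimal torque control (OTC): T_ref = K_OPT * omega^2
K_OPT = 0.5 * RHO * np.pi * R**5 * CP_MAX / LAMBDA_OPT**3

def otc_torque_reference(omega):
    """OTC reference torque [N·m] from measured rotor speed omega [rad/s]."""
    return K_OPT * omega**2

def tsr_speed_reference(wind_speed):
    """TSR reference rotor speed [rad/s] from measured wind speed [m/s];
    a speed controller then tracks this reference (fast but more
    fluctuating, as noted in the abstract)."""
    return LAMBDA_OPT * wind_speed / R
```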

  9. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
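    A simplified sketch of the idea, assuming the mean-variance relationship is parameterized as a power-of-the-mean law, var ≈ a·mean^b, fitted by ordinary log-log least squares from the within-set statistics; the program's modified likelihood fit and its exact exponential parameterization are not reproduced here.

```python
import numpy as np

def fit_variance_function(replicate_sets):
    """Estimate (a, b) in var ~ a * mean**b from many small sets of
    repeated measurements, via log-log least squares.

    `replicate_sets` is an iterable of 1-D arrays of replicate responses.
    """
    means, variances = [], []
    for s in replicate_sets:
        s = np.asarray(s, dtype=float)
        if s.size >= 2 and s.var(ddof=1) > 0:
            means.append(s.mean())
            variances.append(s.var(ddof=1))
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b

def flag_outlying_sets(replicate_sets, a, b, ratio=2.0):
    """Indices of sets whose actual SD exceeds `ratio` times the fitted SD,
    mimicking the program's ordered list of poorly fitting sets."""
    flagged = []
    for i, s in enumerate(replicate_sets):
        s = np.asarray(s, dtype=float)
        fitted_sd = np.sqrt(a * s.mean()**b)
        if s.std(ddof=1) > ratio * fitted_sd:
            flagged.append(i)
    return flagged
```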

  10. Bayesian evaluation of constrained hypotheses on variances of multiple independent groups

    NARCIS (Netherlands)

    Böing-Messing, F.; van Assen, M.A.L.M.; Hofman, A.D.; Hoijtink, H.; Mulder, J.

    2017-01-01

    Research has shown that independent groups often differ not only in their means, but also in their variances. Comparing and testing variances is therefore of crucial importance to understand the effect of a grouping variable on an outcome variable. Researchers may have specific expectations

  11. Effect of sequence variants on variance in glucose levels predicts type 2 diabetes risk and accounts for heritability.

    Science.gov (United States)

    Ivarsdottir, Erna V; Steinthorsdottir, Valgerdur; Daneshpour, Maryam S; Thorleifsson, Gudmar; Sulem, Patrick; Holm, Hilma; Sigurdsson, Snaevar; Hreidarsson, Astradur B; Sigurdsson, Gunnar; Bjarnason, Ragnar; Thorsson, Arni V; Benediktsson, Rafn; Eyjolfsson, Gudmundur; Sigurdardottir, Olof; Olafsson, Isleifur; Zeinali, Sirous; Azizi, Fereidoun; Thorsteinsdottir, Unnur; Gudbjartsson, Daniel F; Stefansson, Kari

    2017-09-01

    Sequence variants that affect mean fasting glucose levels do not necessarily affect risk for type 2 diabetes (T2D). We assessed the effects of 36 reported glucose-associated sequence variants on between- and within-subject variance in fasting glucose levels in 69,142 Icelanders. The variant in TCF7L2 that increases fasting glucose levels increases between-subject variance (5.7% per allele, P = 4.2 × 10⁻¹⁰), whereas variants in GCK and G6PC2 that increase fasting glucose levels decrease between-subject variance (7.5% per allele, P = 4.9 × 10⁻¹¹ and 7.3% per allele, P = 7.5 × 10⁻¹⁸, respectively). Variants that increase mean and between-subject variance in fasting glucose levels tend to increase T2D risk, whereas those that increase the mean but reduce variance do not (r² = 0.61). The variants that increase between-subject variance increase fasting glucose heritability estimates. Intuitively, our results show that increasing the mean and variance of glucose levels is more likely to cause pathologically high glucose levels than an increase in the mean offset by a decrease in variance.

  12. Beyond mean allelic effects: A locus at the major color gene MC1R associates also with differing levels of phenotypic and genetic (co)variance for coloration in barn owls.

    Science.gov (United States)

    San-Jose, Luis M; Ducret, Valérie; Ducrest, Anne-Lyse; Simon, Céline; Roulin, Alexandre

    2017-10-01

    The mean phenotypic effects of a discovered variant help to predict major aspects of the evolution and inheritance of a phenotype. However, differences in the phenotypic variance associated to distinct genotypes are often overlooked despite being suggestive of processes that largely influence phenotypic evolution, such as interactions between the genotypes with the environment or the genetic background. We present empirical evidence for a mutation at the melanocortin-1-receptor gene, a major vertebrate coloration gene, affecting phenotypic variance in the barn owl, Tyto alba. The white MC1R allele, which associates with whiter plumage coloration, also associates with a pronounced phenotypic and additive genetic variance for distinct color traits. Contrarily, the rufous allele, associated with a rufous coloration, relates to a lower phenotypic and additive genetic variance, suggesting that this allele may be epistatic over other color loci. Variance differences between genotypes entailed differences in the strength of phenotypic and genetic associations between color traits, suggesting that differences in variance also alter the level of integration between traits. This study highlights that addressing variance differences of genotypes in wild populations provides interesting new insights into the evolutionary mechanisms and the genetic architecture underlying the phenotype. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  13. Adsorption on smooth electrodes: A radiotracer study

    International Nuclear Information System (INIS)

    Rice-Jackson, L.M.

    1990-01-01

    Adsorption on solids is a complicated process and, in most cases, occurs as the early stage of other more complicated processes, i.e. chemical reactions, electrooxidation, electroreduction. The research reported here combines the electroanalytical method, cyclic voltammetry, and the use of radio-labeled isotopes, soft beta emitters, to study adsorption processes at smooth electrodes. The in-situ radiotracer method is highly anion (molecule) specific and provides information on the structure and composition of the electric double layer. The emphasis of this research was on studying adsorption processes at smooth electrodes of copper, gold, and platinum. The application of the radiotracer method to these smooth surfaces has led to direct in-situ measurements from which surface coverage was determined; anions and molecules were identified; and weak interactions of adsorbates with the surface of the electrodes were readily monitored. 179 refs

  14. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  15. A modified compressible smoothed particle hydrodynamics method and its application on the numerical simulation of low and high velocity impacts

    International Nuclear Information System (INIS)

    Amanifard, N.; Haghighat Namini, V.

    2012-01-01

    In this study a Modified Compressible Smoothed Particle Hydrodynamics method is introduced which is applicable to problems involving shock wave structures and elastic-plastic deformations of solids. The algorithm of the method is based on an approach that discretizes the momentum equation into three parts, solves each part separately, and calculates their effects on the velocity field and displacement of particles. The most distinctive feature of the method is that it removes the artificial viscosity from the formulation exactly while showing good compatibility with other established numerical methods, without numerical fractures or tensile instabilities, and Modified Compressible Smoothed Particle Hydrodynamics does not require any extra modifications. Two types of problems involving elastic-plastic deformations and shock waves are presented here to demonstrate the capability of Modified Compressible Smoothed Particle Hydrodynamics in simulating such problems and its ability to capture shocks. The problems proposed here are low and high velocity impacts between aluminum projectiles and semi-infinite aluminum beams. An elastic-perfectly plastic model is chosen as the constitutive model of the aluminum and the results of the simulations are compared with other established studies of these cases.

  16. Automatic smoothing parameter selection in GAMLSS with an application to centile estimation.

    Science.gov (United States)

    Rigby, Robert A; Stasinopoulos, Dimitrios M

    2014-08-01

    A method for automatic selection of the smoothing parameters in a generalised additive model for location, scale and shape (GAMLSS) is introduced. The method uses a P-spline representation of the smoothing terms to express them as random effect terms, with an internal (or local) maximum likelihood estimation on the predictor scale of each distribution parameter to estimate its smoothing parameters. This provides a fast method for estimating multiple smoothing parameters. The method is applied to centile estimation, where all four parameters of a distribution for the response variable are modelled as smooth functions of a transformed explanatory variable x. This allows smooth modelling of the location, scale, skewness and kurtosis parameters of the response variable distribution as functions of x. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  17. An approach for spherical harmonic analysis of non-smooth data

    Science.gov (United States)

    Wang, Hansheng; Wu, Patrick; Wang, Zhiyong

    2006-12-01

    A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed, and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications are made available.
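    For orientation, a brute-force quadrature estimate of a single coefficient on an equiangular grid is sketched below using SciPy's spherical harmonics; the paper's method instead integrates a bilinear interpolant of the gridded data exactly via recursion relations, which is what makes it robust for non-smooth data, so this is only a crude reference implementation.

```python
import numpy as np
from scipy.special import sph_harm

def sh_coefficient(values, n, m):
    """Crude estimate of the degree-n, order-m coefficient of a field
    sampled on an equiangular grid `values[i_lat, j_lon]`.

    Uses simple point-wise quadrature at cell centres; the paper's method
    integrates a bilinear interpolant exactly, which handles non-smooth
    data much better than this sketch.
    """
    nlat, nlon = values.shape
    colat = (np.arange(nlat) + 0.5) * np.pi / nlat        # polar angle
    lon = (np.arange(nlon) + 0.5) * 2.0 * np.pi / nlon    # azimuth
    d_omega = (np.pi / nlat) * (2.0 * np.pi / nlon)

    phi, theta = np.meshgrid(colat, lon, indexing="ij")
    # SciPy's sph_harm takes (order m, degree n, azimuth, polar angle).
    Y = sph_harm(m, n, theta, phi)
    return np.sum(values * np.conj(Y) * np.sin(phi)) * d_omega
```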

  18. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey

    2014-01-06

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.
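    A compact sketch of the multilevel idea for estimating both the mean and the variance of a quantity of interest Q by telescoping first and second moments over levels; the `sample_pair` interface (returning coupled fine/coarse evaluations on the same random input) is an assumed placeholder, and the estimators analysed in the work above may differ in detail.

```python
import numpy as np

def mlmc_mean_and_variance(sample_pair, n_samples_per_level, seed=0):
    """Multilevel Monte Carlo estimates of E[Q] and Var[Q].

    `sample_pair(level, rng)` must return (Q_level, Q_level_minus_1)
    evaluated on the SAME random input, with the coarse value taken as
    0 on level 0 (placeholder interface, not from the cited work).
    """
    rng = np.random.default_rng(seed)
    mean_est = 0.0
    second_moment_est = 0.0
    for level, n in enumerate(n_samples_per_level):
        pairs = np.array([sample_pair(level, rng) for _ in range(n)])
        fine, coarse = pairs[:, 0], pairs[:, 1]
        # Telescoping corrections for the first and second moments;
        # Var[Q] is then recovered as E[Q^2] - (E[Q])^2.
        mean_est += np.mean(fine - coarse)
        second_moment_est += np.mean(fine**2 - coarse**2)
    return mean_est, second_moment_est - mean_est**2
```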

  19. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey; Bierig, Claudio

    2014-01-01

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.

  20. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case.

  1. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
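    For reference, the unconstrained global minimum-variance weights have the closed form w ∝ Σ⁻¹1; the sketch below computes them from an estimated covariance matrix, with a crude clipping option standing in for the long-only or 130/30 constraints that the paper handles with proper constrained optimization.

```python
import numpy as np

def minimum_variance_weights(cov, long_only=False):
    """Unconstrained global minimum-variance weights w proportional to
    inv(Sigma) @ 1, normalised to sum to one.

    With `long_only=True`, negative weights are clipped and the result
    renormalised, a rough stand-in for a constrained solver.
    """
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    w /= w.sum()
    if long_only:
        w = np.clip(w, 0.0, None)
        w /= w.sum()
    return w

# Usage with a sample covariance matrix of asset returns R (T x N):
# w = minimum_variance_weights(np.cov(R, rowvar=False))
```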

  2. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

    distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1-ε)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and we show that our...

  3. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

    directly. In the mean–variance bi-criterion objective function, risk appears directly; the formulation also considers an ensemble of reservoir models, and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio...... optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk
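    A minimal illustration of the bi-criterion objective over an ensemble of reservoir realisations: risk enters through the variance of NPV across ensemble members, and the risk weight is a tuning parameter (name and value illustrative).

```python
import numpy as np

def mean_variance_objective(npv_ensemble, risk_weight=0.5):
    """Bi-criterion objective J = mean(NPV) - lambda * var(NPV), evaluated
    over an ensemble of reservoir realisations for one control strategy."""
    npv = np.asarray(npv_ensemble, dtype=float)
    return npv.mean() - risk_weight * npv.var(ddof=1)

# In an optimisation loop, each candidate control u is simulated on every
# ensemble member to obtain npv_ensemble(u), and J(u) is maximised.
```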

  4. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
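    The building blocks of this decomposition are realized upside and downside semivariances; the sketch below computes them from a return series. The corresponding risk premia would be differences between option-implied and realized components, which are not computed here.

```python
import numpy as np

def realized_semivariances(returns, threshold=0.0):
    """Split realized variance into downside and upside components
    relative to `threshold` (typically 0)."""
    r = np.asarray(returns, dtype=float)
    downside = np.sum(np.square(np.minimum(r - threshold, 0.0)))
    upside = np.sum(np.square(np.maximum(r - threshold, 0.0)))
    return downside, upside

# The downside (upside) variance risk premium is then the gap between the
# option-implied expectation of each component and its realized value, and
# their difference serves as a skewness risk premium proxy.
```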

  5. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    Science.gov (United States)

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.

  6. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  7. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  8. Non-parametric smoothing of experimental data

    International Nuclear Information System (INIS)

    Kuketayev, A.T.; Pen'kov, F.M.

    2007-01-01

    Full text: Rapid processing of experimental data samples in nuclear physics often requires differentiation in order to find extrema. Therefore, even at the preliminary stage of data analysis, a range of noise reduction methods are used to smooth experimental data. There are many non-parametric smoothing techniques: interval averages, moving averages, exponential smoothing, etc. Nevertheless, it is more common to use a priori information about the behavior of the experimental curve in order to construct smoothing schemes based on least squares techniques. The advantage of the latter methodology is that the area under the curve can be preserved, which is equivalent to conservation of the total counting rate. A disadvantage of this approach is that the required a priori information is often lacking. For example, very often the sums of peaks unresolved by the detector are replaced with one peak during the processing of data, introducing uncontrolled errors in the determination of the physical quantities. The problem is solvable only by having experienced personnel whose skills greatly exceed the challenge. We propose a set of non-parametric techniques which allows the use of any additional information on the nature of the experimental dependence. The method is based on the construction of a functional which includes both the experimental data and the a priori information. The minimum of this functional is reached on a non-parametric smoothed curve. Euler (Lagrange) differential equations are constructed for these curves; their solutions are then obtained analytically or numerically. The proposed approach allows for automated processing of nuclear physics data, eliminating the need for highly skilled laboratory personnel. It also makes it possible to obtain smoothing curves within a given confidence interval, e.g. according to the χ² distribution. This approach is applicable when constructing smooth solutions of ill-posed problems, in particular when solving
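    One concrete instance of the functional construction above (a data-fit term plus an a priori smoothness term) is a discrete penalized least-squares (Whittaker-type) smoother, whose Euler-Lagrange equations reduce to a linear system; this illustrates the general approach rather than the authors' specific functional.

```python
import numpy as np

def penalized_smoother(y, lam=10.0):
    """Minimise sum (y_i - x_i)^2 + lam * sum (second differences of x)^2.

    The first term ties the smooth curve to the experimental data, the
    second encodes the a priori smoothness information; the minimiser
    solves the linear (discrete Euler-Lagrange) system (I + lam*D'D) x = y.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    # Second-difference operator D of shape (n-2, n).
    D = np.diff(np.eye(n), n=2, axis=0)
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)
```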

  9. Smooth time-dependent receiver operating characteristic curve estimators.

    Science.gov (United States)

    Martínez-Camblor, Pablo; Pardo-Fernández, Juan Carlos

    2018-03-01

    The receiver operating characteristic curve is a popular graphical method often used to study the diagnostic capacity of continuous (bio)markers. When the considered outcome is a time-dependent variable, two main extensions have been proposed: the cumulative/dynamic receiver operating characteristic curve and the incident/dynamic receiver operating characteristic curve. In both cases, the main problem for developing appropriate estimators is the estimation of the joint distribution of the variables time-to-event and marker. As usual, different approximations lead to different estimators. In this article, the authors explore the use of a bivariate kernel density estimator which accounts for censored observations in the sample and produces smooth estimators of the time-dependent receiver operating characteristic curves. The performance of the resulting cumulative/dynamic and incident/dynamic receiver operating characteristic curves is studied by means of Monte Carlo simulations. Additionally, the influence of the choice of the required smoothing parameters is explored. Finally, two real-applications are considered. An R package is also provided as a complement to this article.

  10. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting

  11. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    Directory of Open Access Journals (Sweden)

    Owens Chantelle J

    2009-02-01

    Full Text Available Abstract Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, genders and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly higher smoothed rates (more than 300 cases per 100,000 residents) than its contiguous neighbors (195 or fewer) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%, from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was also statistically significant. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, it may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time.
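
    The record above refers to empirical Bayesian smoothing of county-level rates. The following minimal sketch shows one common global empirical Bayes (method-of-moments) rate smoother; the counts and populations are invented, and the study's actual smoothing model may differ.

```python
# Minimal sketch of global empirical Bayes rate smoothing (method-of-moments style).
# Counts and populations below are made up for illustration; the study itself used
# county-level NETSS chlamydia data.
import numpy as np

cases = np.array([12, 150, 7, 300, 45, 2])               # observed cases per county
pop = np.array([4000, 52000, 2500, 90000, 16000, 900])   # county populations

rate = cases / pop
m = cases.sum() / pop.sum()                               # overall (reference) rate
# Between-county variance estimate: weighted variance of raw rates minus the
# expected Poisson (within-county) component m / mean population.
s2 = np.average((rate - m) ** 2, weights=pop) - m / pop.mean()
s2 = max(s2, 0.0)

shrink = s2 / (s2 + m / pop)                              # weight on the raw rate
smoothed = shrink * rate + (1 - shrink) * m               # EB-smoothed rates

for r, sm, p in zip(rate, smoothed, pop):
    print(f"pop={p:6d}  raw={1e5*r:7.1f}  smoothed={1e5*sm:7.1f} per 100,000")
```

    Small counties are pulled strongly toward the overall rate, while large counties keep rates close to their raw values, which is the behaviour exploited for outlier detection in the study.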

  12. Adaptive Multilevel Methods with Local Smoothing for $H^1$- and $H^{\mathrm{curl}}$-Conforming High Order Finite Element Methods

    KAUST Repository

    Janssen, Bärbel

    2011-01-01

    A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level; thus it has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.

  13. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Full Text Available Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean are proposed in two-phase stratified sampling for different situations of non-response at the first and second phases. The expressions for the variances of these estimators have been derived. Furthermore, replication-based jackknife variance estimators of these variances have also been derived. A simulation study has been conducted to investigate the performance of the suggested estimators.
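
    To make the replication idea concrete, here is a generic delete-a-group jackknife variance estimator for a weighted mean. It does not reproduce the paper's two-phase stratified, non-response-adjusted estimators, and the data are simulated.

```python
# Generic delete-a-group jackknife variance for a weighted sample mean.
# The paper's estimators handle two-phase stratified sampling with non-response,
# which is not reproduced here; data and weights are simulated.
import numpy as np

def jackknife_variance(y, w, n_groups=10, seed=0):
    """Delete-a-group jackknife variance of the weighted mean of y."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    groups = np.random.default_rng(seed).integers(0, n_groups, size=y.size)
    theta_full = np.average(y, weights=w)
    replicates = np.array([np.average(y[groups != g], weights=w[groups != g])
                           for g in range(n_groups)])
    return (n_groups - 1) / n_groups * np.sum((replicates - theta_full) ** 2)

rng = np.random.default_rng(1)
y = rng.normal(50, 10, size=500)          # study variable
w = rng.uniform(0.5, 2.0, size=500)       # survey weights
print("weighted mean      :", round(np.average(y, weights=w), 3))
print("jackknife variance :", round(jackknife_variance(y, w), 4))
```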

  14. I-F starting method with smooth transition to EMF based motion-sensorless vector control of PM synchronous motor/generator

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Teodorescu, Remus; Fatu, M.

    2008-01-01

    This paper proposes a novel hybrid motion-sensorless control system for permanent magnet synchronous motors (PMSM) using a new robust start-up method called I-f control, and a smooth transition to emf-based vector control. The I-f method is based on separate control of the id and iq currents with a ...-adaptive compensator to eliminate dc-offset and phase-delay. Digital simulations for PMSM start-up with full load torque are presented for different initial rotor positions. The transitions from I-f to emf motion-sensorless vector control and back, at very low speeds, are fully validated by experimental...

  15. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data for estimation of the integrated variance of asset prices is beneficial, but with an increasing number of observations so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimation of the integrated variance robust to microstructure noise as well as for testing the presence of the noise. Our method utilizes a linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
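
    Under the standard i.i.d. microstructure-noise model, the realized variance built from n noisy returns has expectation IV + 2nω², so regressing subsample realized variances on the number of observations yields the integrated variance as the intercept. The sketch below illustrates that regression idea on simulated prices; it is not claimed to match the paper's exact estimator or test, and all simulation parameters are assumptions.

```python
# Under an i.i.d. microstructure-noise model, the realized variance from n noisy
# returns has expectation IV + 2*n*omega^2. Regressing subsample realized variances
# on the number of observations therefore gives the integrated variance (IV) as the
# intercept and 2*omega^2 as the slope. Simulation parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(42)
N = 23400                          # one trading day of 1-second prices
sigma = 0.01                       # daily volatility, so IV = sigma**2
omega = 2e-4                       # std dev of the microstructure noise
efficient = np.cumsum(rng.normal(0, sigma / np.sqrt(N), N))
observed = efficient + rng.normal(0, omega, N)        # noisy observed log-prices

n_obs, rv = [], []
for k in range(1, 61):                                # sample every k-th price
    returns = np.diff(observed[::k])
    n_obs.append(returns.size)
    rv.append(np.sum(returns ** 2))

slope, intercept = np.polyfit(n_obs, rv, 1)           # RV ~ IV + (2*omega^2) * n
print(f"true IV            : {sigma**2:.6f}")
print(f"estimated IV       : {intercept:.6f}")
print(f"estimated 2*omega^2: {slope:.2e} (true {2*omega**2:.2e})")
```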

  16. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    Monika eFleischhauer

    2013-09-01

    Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense, defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT’s stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  17. A new variance stabilizing transformation for gene expression data analysis.

    Science.gov (United States)

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
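
    The generalized logarithm mentioned above, glog(x) = log((x + sqrt(x² + λ))/2), is easy to try out. The snippet below applies it to toy expression-like data with an additive-plus-multiplicative error model (both the data model and the offset choice are assumptions) and shows how the transformed standard deviation becomes roughly constant across mean levels.

```python
# The generalized logarithm glog(x) = log((x + sqrt(x^2 + lam)) / 2). The toy data
# follow an additive-plus-multiplicative error model often used for gene expression;
# both the data model and the offset choice are assumptions made for illustration.
import numpy as np

def glog(x, lam):
    return np.log((x + np.sqrt(x ** 2 + lam)) / 2.0)

rng = np.random.default_rng(7)
mu = np.geomspace(10, 10000, 20)                   # "true" expression levels
sigma_add, sigma_mult = 20.0, 0.15                 # additive and multiplicative noise
samples = mu[:, None] * np.exp(sigma_mult * rng.normal(size=(20, 200))) \
          + sigma_add * rng.normal(size=(20, 200))

lam = (sigma_add / sigma_mult) ** 2                # offset chosen heuristically here
raw_sd = samples.std(axis=1)
glog_sd = glog(samples, lam).std(axis=1)
print("SD of raw data across mean levels     :", raw_sd.round(1))
print("SD of glog-transformed data (flatter) :", glog_sd.round(3))
```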

  18. Coupling of smooth particle hydrodynamics with the finite element method

    International Nuclear Information System (INIS)

    Attaway, S.W.; Heinstein, M.W.; Swegle, J.W.

    1994-01-01

    A gridless technique called smooth particle hydrodynamics (SPH) has been coupled with the transient dynamics finite element code PRONTO. In this paper, a new weighted residual derivation for the SPH method will be presented, and the methods used to embed SPH within PRONTO will be outlined. Example SPH-PRONTO calculations will also be presented. One major difficulty associated with the Lagrangian finite element method is modeling materials with no shear strength; for example, gases, fluids and explosive byproducts. Typically, these materials can be modeled for only a short time with a Lagrangian finite element code. Large distortions cause tangling of the mesh, which will eventually lead to numerical difficulties, such as negative element area or ''bow tie'' elements. Remeshing will allow the problem to continue for a short while, but the large distortions can prevent a complete analysis. SPH is a gridless Lagrangian technique. Requiring no mesh, SPH has the potential to model material fracture, large shear flows and penetration. SPH computes the strain rate and the stress divergence based on the nearest neighbors of a particle, which are determined using an efficient particle-sorting technique. Embedding the SPH method within PRONTO allows part of the problem to be modeled with quadrilateral finite elements, while other parts are modeled with the gridless SPH method. SPH elements are coupled to the quadrilateral elements through a contact-like algorithm. (orig.)

  19. Improved analysis of all-sky meteor radar measurements of gravity wave variances and momentum fluxes

    Directory of Open Access Journals (Sweden)

    V. F. Andrioli

    2013-05-01

    Full Text Available The advantages of using a composite day analysis for all-sky interferometric meteor radars when measuring mean winds and tides are widely known. On the other hand, problems arise if this technique is applied to Hocking's (2005) gravity wave analysis for all-sky meteor radars. In this paper we describe how a simple change in the procedure makes it possible to use a composite day in Hocking's analysis. Also, we explain how a modified composite day can be constructed to test its ability to measure gravity wave momentum fluxes. Test results for specified mean, tidal, and gravity wave fields, including tidal amplitudes and gravity wave momentum fluxes varying strongly with altitude and/or time, suggest that the modified composite day allows characterization of monthly mean profiles of the gravity wave momentum fluxes, with good accuracy at least at the altitudes where the meteor counts are large (from 89 to 92.5 km). In the present work we also show that the variances measured with Hocking's method are often contaminated by the tidal fields and suggest a method of empirical correction derived from a simple simulation model. The results presented here greatly increase our confidence because they show that our technique is able to remove the tide-induced false variances from Hocking's analysis.

  20. Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines

    Science.gov (United States)

    Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł

    2018-01-01

    Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, drilling/bolting machines etc., are characterized by cyclicity of operations. In those cases, identification of cycles and their segments, in other words data segmentation, is key to evaluating their performance, which may be very useful from the management point of view, for example leading to optimization of the process. However, in many cases such raw signals are contaminated with various artifacts and in general are expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, there is a need for efficient smoothing methods that allow informative trends in the signals to be retained while disregarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial to precise segmentation and other approaches to industrial data analysis.
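
    As a small taste of the kind of smoothers such a review covers, the sketch below applies a moving average and a moving median to an invented noisy cyclic signal with impulsive artifacts; median-type filters are generally more robust to such spikes, which matters when the smoothed trend feeds a segmentation step.

```python
# Moving-average and moving-median smoothing of an invented noisy cyclic signal.
# Median-type filters are generally more robust to impulsive artifacts, which is
# useful when the smoothed trend is later used for cycle segmentation.
import numpy as np

def moving_average(x, window):
    return np.convolve(x, np.ones(window) / window, mode="same")

def moving_median(x, window):
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(x.size)])

rng = np.random.default_rng(3)
t = np.arange(2000)
cycles = (np.sin(2 * np.pi * t / 400) > 0).astype(float)     # idealized work cycles
signal = 10 * cycles + rng.normal(0, 1.5, t.size)            # sensor noise
signal[rng.integers(0, t.size, 20)] += 25                    # impulsive artifacts

avg = moving_average(signal, 31)
med = moving_median(signal, 31)
print("raw           :", signal[100:105].round(2))
print("moving average:", avg[100:105].round(2))
print("moving median :", med[100:105].round(2))
```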

  1. Direct numerical simulation of open channel flow over smooth-to-rough and rough-to-smooth step changes

    Science.gov (United States)

    Rouhi, Amirreza; Chung, Daniel; Hutchins, Nicholas

    2017-11-01

    Direct numerical simulations (DNSs) are reported for open channel flow over streamwise-alternating patches of smooth and fully rough walls. Owing to the streamwise periodicity, the flow configuration is composed of a step change from smooth to rough, and a step change from rough to smooth. The friction Reynolds number varies from 443 over the smooth patch to 715 over the rough patch. The flow is thoroughly studied by mean and fluctuation profiles, and spectrograms. The detailed flow from DNS reveals discrepancies of up to 50% among the various definitions of the internal-layer thickness, with apparent power-law exponents differing by up to 60%. The definition based on the logarithmic slope of the velocity profile, as proposed by Chamorro et al. (Boundary-Layer Meteorol., vol. 130, 2009, pp. 29-41), is most consistent with the physical notion of the internal layer; this is supported by the defect similarity based on this internal-layer thickness, and the streamwise homogeneity of the dissipation length-scale within this internal layer. The statistics inside this internal-layer, and the growth of the internal layer itself, are minimally affected by the streamwise periodicity when the patch length is at least six times the channel height.

  2. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  3. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes poor; after a bad initialization, the k-means algorithm easily converges to a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate the singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
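
    For orientation, the following compact sketch implements the plain incremental global k-means idea (add one center at a time, trying every data point as its initial position and keeping the lowest-error solution); the MinMax weighting and the singleton-elimination modification proposed in the paper are not reproduced.

```python
# Compact sketch of the incremental global k-means idea: add one cluster center at a
# time, trying every data point as the candidate position for the new center and
# keeping the lowest-error solution. The MinMax weighting and singleton-elimination
# steps proposed in the paper are not reproduced here.
import numpy as np

def kmeans(X, centers, iters=50):
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    error = ((X - centers[labels]) ** 2).sum()        # clustering error (sum of squares)
    return centers, error

def global_kmeans(X, k):
    centers = X.mean(axis=0, keepdims=True)           # exact 1-cluster solution
    for _ in range(2, k + 1):
        best = None
        for x in X:                                   # deterministic global search
            cand, err = kmeans(X, np.vstack([centers, x]))
            if best is None or err < best[1]:
                best = (cand, err)
        centers = best[0]
    return centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 0], [0, 3])])
print("recovered centers:\n", global_kmeans(X, 3).round(2))
```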

  4. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    Science.gov (United States)

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running the expectation-maximization REML algorithm implemented via double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10⁻³ and 4.17×10⁻³ for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for the herd × test-day effect and between 0.55 and 0.97 for the permanent environmental effect. Therefore, nongenetic effects also

  5. Smoothing optimization of supporting quadratic surfaces with Zernike polynomials

    Science.gov (United States)

    Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu

    2018-03-01

    A new optimization method to obtain a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM) is proposed. To smooth the initial surface, a 9-vertex system from the neighboring quadratic surfaces and the Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving the equations. Finally, a continuous smooth optimized surface is constructed by applying the above algorithm over the whole initial surface and stitching the results. The spot corresponding to the optimized surface is no longer a set of discrete pixels but a continuous distribution.

  6. Estimation of the Mean of a Univariate Normal Distribution When the Variance is not Known

    NARCIS (Netherlands)

    Danilov, D.L.; Magnus, J.R.

    2002-01-01

    We consider the problem of estimating the first k coefficients in a regression equation with k + 1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We investigate properties of this estimator in

  7. Estimation of the mean of a univariate normal distribution when the variance is not known

    NARCIS (Netherlands)

    Danilov, Dmitri

    2005-01-01

    We consider the problem of estimating the first k coefficients in a regression equation with k+1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We generalize this estimator to the case

  8. Estimation of nonlinearities from pseudodynamic and dynamic responses of bridge structures using the Delay Vector Variance method

    Science.gov (United States)

    Jaksic, Vesna; Mandic, Danilo P.; Karoumi, Raid; Basu, Bidroha; Pakrashi, Vikram

    2016-01-01

    Analysis of the variability in the responses of large structural systems and quantification of their linearity or nonlinearity as a potential non-invasive means of structural system assessment from output-only condition remains a challenging problem. In this study, the Delay Vector Variance (DVV) method is used for full scale testing of both pseudo-dynamic and dynamic responses of two bridges, in order to study the degree of nonlinearity of their measured response signals. The DVV detects the presence of determinism and nonlinearity in a time series and is based upon the examination of local predictability of a signal. The pseudo-dynamic data is obtained from a concrete bridge during repair while the dynamic data is obtained from a steel railway bridge traversed by a train. We show that DVV is promising as a marker in establishing the degree to which a change in the signal nonlinearity reflects the change in the real behaviour of a structure. It is also useful in establishing the sensitivity of instruments or sensors deployed to monitor such changes.

  9. Methods and energy storage devices utilizing electrolytes having surface-smoothing additives

    Science.gov (United States)

    Xu, Wu; Zhang, Jiguang; Graff, Gordon L; Chen, Xilin; Ding, Fei

    2015-11-12

    Electrodeposition and energy storage devices utilizing an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and anode surface. For electrodeposition of a first metal (M1) on a substrate or anode from one or more cations of M1 in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second metal (M2), wherein cations of M2 have an effective electrochemical reduction potential in the solution lower than that of the cations of M1.

  10. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation.

    Directory of Open Access Journals (Sweden)

    Wei Liu

    Full Text Available Depth image-based rendering (DIBR), which is used to render virtual views with a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process inevitably leads to holes in the resulting 3D image as a result of newly-exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, which is inspired by the recently proposed domain transform, chosen for its low complexity and high efficiency. Firstly, our framework integrates hybrid constraints including scene structure, edge consistency and visual saliency information in the transformed domain to improve the performance of depth map preprocessing in an implicit way. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Different from other similar methods, the proposed method can simultaneously achieve the effects of hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it can yield visually satisfactory results with less computational complexity for high quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method.

  11. Adaptive Multilevel Methods with Local Smoothing for $H^1$- and $H^{\mathrm{curl}}$-Conforming High Order Finite Element Methods

    KAUST Repository

    Janssen, Bärbel; Kanschat, Guido

    2011-01-01

    A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level; thus it has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.

  12. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time

  13. A comparison of 3-D computed tomography versus 2-D radiography measurements of ulnar variance and ulnolunate distance during forearm rotation.

    Science.gov (United States)

    Kawanishi, Y; Moritomo, H; Omori, S; Kataoka, T; Murase, T; Sugamoto, K

    2014-06-01

    Positive ulnar variance is associated with ulnar impaction syndrome and ulnar variance is reported to increase with pronation. However, radiographic measurement can be affected markedly by the incident angle of the X-ray beam. We performed three-dimensional (3-D) computed tomography measurements of ulnar variance and ulnolunate distance during forearm rotation and compared these with plain radiographic measurements in 15 healthy wrists. From supination to pronation, ulnar variance increased in all cases on the radiographs; mean ulnar variance increased significantly and mean ulnolunate distance decreased significantly. However, on 3-D imaging, ulnar variance decreased in 12 cases on moving into pronation and increased in three cases; neither the mean ulnar variance nor the mean ulnolunate distance changed significantly. Our results suggest that the forearm position in which ulnar variance increases varies among individuals. This may explain why some patients with ulnar impaction syndrome complain of wrist pain exacerbated by forearm supination. It also suggests that standard radiographic assessments of ulnar variance are unreliable. © The Author(s) 2013.

  14. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation- inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.

  15. A Smoothed Finite Element-Based Elasticity Model for Soft Bodies

    Directory of Open Access Journals (Sweden)

    Juan Zhang

    2017-01-01

    Full Text Available One of the major challenges in mesh-based deformation simulation in computer graphics is to deal with mesh distortion. In this paper, we present a novel mesh-insensitive and softer method for simulating deformable solid bodies under the assumptions of linear elastic mechanics. A face-based strain smoothing method is adopted to alleviate mesh distortion instead of the traditional spatial adaptive smoothing method. Then, we propose a way to combine the strain smoothing method and the corotational method. With this approach, the amplitude and frequency of transient displacements are slightly affected by the distorted mesh. Realistic simulation results are generated under large rotation using a linear elasticity model without adding significant complexity or computational cost to the standard corotational FEM. Meanwhile, softening effect is a by-product of our method.

  16. Return Smoothing Mechanisms in Life and Pension Insurance

    DEFF Research Database (Denmark)

    Montserrat, Guillén; Jørgensen, Peter Løchte; Nielsen, Jens Perch

    2006-01-01

    Traditional with-profits pension saving schemes have been criticized for their opacity, plagued by embedded options and guarantees, and have recently created enormous problems for the solvency of the life insurance and pension industry. This has fueled creativity in the industry's product development departments, and this paper analyzes a representative member of a family of new pension schemes that have been introduced in the new millennium to alleviate these problems. The complete transparency of the new scheme's smoothing mechanism means that it can be analyzed using contingent claims pricing theory. We explore the properties of this pension scheme in detail and find that in terms of market value, smoothing is an illusion, but also that the return smoothing mechanism implies a dynamic asset allocation strategy which corresponds with traditional pension saving advice.

  17. Determination of a reference value and its uncertainty through a power-moderated mean

    International Nuclear Information System (INIS)

    Pomme, S.; Keightley, J.

    2015-01-01

    A method is presented for calculating a key comparison reference value (KCRV) and its associated standard uncertainty. The method allows for technical scrutiny of data and for correction or exclusion of extreme data, but above all uses a power-moderated mean (PMM) that can calculate an efficient and robust mean from any data set. For mutually consistent data, the method approaches a weighted mean, the weights being the reciprocals of the variances (squared standard uncertainties) associated with the measured values. For data sets suspected of inconsistency, the weighting is moderated by increasing the laboratory variances by a common amount and/or decreasing the power of the weighting factors. By using computer simulations, it is shown that the PMM is a good compromise between efficiency and robustness, while also providing a realistic uncertainty. The method is of particular interest to data evaluators and organizers of proficiency tests. (authors)
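
    A loose sketch of the ingredients described above (inverse-variance weighting, a moderated power on the weights, and a common variance added until the data look mutually consistent) is given below. The parameter choices and the update rule are illustrative assumptions, not the authors' prescription for the PMM.

```python
# Loose sketch of the ingredients of a power-moderated weighted mean: weights based
# on (u_i^2 + s^2) raised to a moderated power alpha, with a common extra variance
# s^2 added until the data look mutually consistent (reduced chi-square <= 1).
# The choice of alpha, the consistency rule and the uncertainty formula are
# illustrative assumptions, not the authors' prescription.
import numpy as np

def power_moderated_mean(x, u, alpha=None):
    x, u = np.asarray(x, float), np.asarray(u, float)
    n = x.size
    if alpha is None:
        alpha = max(2 - 3 / n, 0)            # softer weighting for small data sets

    def stats(s2):
        w = (u ** 2 + s2) ** (-alpha / 2)
        w = w / w.sum()
        mean = np.sum(w * x)
        chi2_red = np.sum((x - mean) ** 2 / (u ** 2 + s2)) / (n - 1)
        return w, mean, chi2_red

    s2_lo, s2_hi = 0.0, 10 * np.var(x) + u.max() ** 2
    w, mean, chi2_red = stats(s2_lo)
    s2 = 0.0
    if chi2_red > 1:                         # data look inconsistent: add common variance
        for _ in range(60):                  # bisection on chi2_red(s2) = 1
            s2 = 0.5 * (s2_lo + s2_hi)
            w, mean, chi2_red = stats(s2)
            if chi2_red > 1:
                s2_lo = s2
            else:
                s2_hi = s2
    unc = np.sqrt(np.sum(w ** 2 * (u ** 2 + s2)))
    return mean, unc

values = np.array([10.1, 10.3, 9.8, 10.2, 11.5])     # last laboratory is discrepant
uncerts = np.array([0.10, 0.15, 0.12, 0.10, 0.10])
m, um = power_moderated_mean(values, uncerts)
print(f"reference value = {m:.3f} +/- {um:.3f}")
```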

  18. Smooth manifolds

    CERN Document Server

    Sinha, Rajnikant

    2014-01-01

    This book offers an introduction to the theory of smooth manifolds, helping students to familiarize themselves with the tools they will need for mathematical research on smooth manifolds and differential geometry. The book primarily focuses on topics concerning differential manifolds, tangent spaces, multivariable differential calculus, topological properties of smooth manifolds, embedded submanifolds, Sard’s theorem and Whitney embedding theorem. It is clearly structured, amply illustrated and includes solved examples for all concepts discussed. Several difficult theorems have been broken into many lemmas and notes (equivalent to sub-lemmas) to enhance the readability of the book. Further, once a concept has been introduced, it reoccurs throughout the book to ensure comprehension. Rank theorem, a vital aspect of smooth manifolds theory, occurs in many manifestations, including rank theorem for Euclidean space and global rank theorem. Though primarily intended for graduate students of mathematics, the book ...

  19. Advanced methods of variance analysis in nuclear foresight scenarios; Metodos avanzados de analisis de varianza en escenarios de prospectiva nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-07-01

    Traditional variance propagation techniques are not very reliable in this setting, because relative uncertainties can reach 100%; for this reason, less conventional methods are used instead, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
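
    Of the less conventional methods mentioned, the Monte Carlo approach is the simplest to sketch: sample the uncertain inputs, push them through the model, and read the output distribution directly. The model function and input distributions below are invented for illustration.

```python
# Minimal Monte Carlo propagation of uncertainty, the kind of "less conventional"
# alternative to analytic variance propagation mentioned above. The model function
# and the input distributions are invented for illustration.
import numpy as np

rng = np.random.default_rng(123)
N = 200_000

# Inputs with large (up to ~100%) relative uncertainties
growth_rate = rng.normal(0.02, 0.02, N)          # mean 2 %/yr, sd 2 %/yr
demand_2040 = rng.beta(2, 5, N) * 800 + 400      # TWh/yr, skewed between 400 and 1200
capacity_factor = rng.uniform(0.7, 0.95, N)

# Illustrative nonlinear model output: required installed capacity in GW
# (1 GW running a full year produces 8.76 TWh)
required_gw = demand_2040 * (1 + growth_rate) ** 20 / (capacity_factor * 8.76)

print(f"mean  : {required_gw.mean():8.1f} GW")
print(f"std   : {required_gw.std():8.1f} GW")
print(f"2.5 % : {np.percentile(required_gw, 2.5):8.1f} GW")
print(f"97.5 %: {np.percentile(required_gw, 97.5):8.1f} GW")
```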

  20. MEANS AND METHODS OF CYBER WARFARE

    Directory of Open Access Journals (Sweden)

    Dan-Iulian VOITAȘEC

    2016-06-01

    Full Text Available According to the Declaration of Saint Petersburg of 1868 “the only legitimate object which States should endeavor to accomplish during war is to weaken the military forces of the enemy”. Thus, International Humanitarian Law prohibits or limits the use of certain means and methods of warfare. The rapid development of technology has led to the emergence of a new dimension of warfare. The cyber aspect of armed conflict has led to the development of new means and methods of warfare. The purpose of this paper is to study how the norms of international humanitarian law apply to the means and methods of cyber warfare.

  1. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few have been proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is well established and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
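
    For reference, the snippet below computes first-order variance-based (Sobol') indices for the classical independent-input case that the paper generalizes, using a standard pick-freeze Monte Carlo estimator on the Ishigami test function; the dependent-input indices proposed by the authors are not reproduced.

```python
# Standard pick-freeze Monte Carlo estimate of first-order Sobol' indices for the
# independent-input case; the indices for dependent inputs proposed in the paper
# are not reproduced here.
import numpy as np

def ishigami(x, a=7.0, b=0.1):                      # classic 3-input test function
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                             # replace only column i from B
    fABi = ishigami(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var_y         # first-order index estimator
    print(f"S_{i + 1} = {S_i:.3f}")
```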

  2. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Energy Technology Data Exchange (ETDEWEB)

    M Ali, M. K.; Ruslan, M. H. [Solar Energy Research Institute (SERI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia)]; Muthuvalu, M. S.; Wong, J. [Unit Penyelidikan Rumpai Laut (UPRL), Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)]; Sulaiman, J.; Yasir, S. Md. [Program Matematik dengan Ekonomi, Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)]

    2014-06-19

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) has been found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives. The analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem. This method permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models. The fits were assessed using the coefficient of determination (R²) and the root mean square error (RMSE). The models were fitted to the raw data to test the applicability of the exponential drying approach. The result showed that the Two-Term model best described the drying behavior. Besides that, the drying rate smoothed using the CS proved to be an effective estimator for the moisture-time curves as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  3. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    International Nuclear Information System (INIS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-01-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) has been found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives. The analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem. This method permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models. The fits were assessed using the coefficient of determination (R²) and the root mean square error (RMSE). The models were fitted to the raw data to test the applicability of the exponential drying approach. The result showed that the Two-Term model best described the drying behavior. Besides that, the drying rate smoothed using the CS proved to be an effective estimator for the moisture-time curves as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  4. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Science.gov (United States)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) has been found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having first and second derivatives. The analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem. This method permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models. The fits were assessed using the coefficient of determination (R²) and the root mean square error (RMSE). The models were fitted to the raw data to test the applicability of the exponential drying approach. The result showed that the Two-Term model best described the drying behavior. Besides that, the drying rate smoothed using the CS proved to be an effective estimator for the moisture-time curves as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
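
    The cubic-spline step described in these records can be sketched with scipy: fit a smoothing spline to moisture-content-versus-time data and differentiate it analytically to obtain a smooth drying-rate curve. The data points below are invented, not the seaweed measurements, and the smoothing factor is a guess.

```python
# Fit a cubic smoothing spline to moisture-content-versus-time data and differentiate
# it analytically to obtain a smooth drying-rate curve. The data points are invented,
# not the actual seaweed measurements, and the smoothing factor s is a guess.
import numpy as np
from scipy.interpolate import UnivariateSpline

t_hours = np.array([0, 4, 8, 12, 16, 20, 24, 32, 40, 48, 60, 72, 84, 96], float)
moisture = np.array([93.4, 90.1, 85.7, 80.2, 74.0, 68.5, 62.3, 50.1,
                     39.8, 31.0, 21.5, 15.2, 10.9, 8.2])          # % wet basis
moisture = moisture + np.random.default_rng(1).normal(0, 0.8, moisture.size)  # noise

spline = UnivariateSpline(t_hours, moisture, k=3, s=moisture.size * 0.8)
drying_rate = spline.derivative()                  # analytic first derivative

for t in np.linspace(0, 96, 9):
    m = float(spline(t))
    r = float(drying_rate(t))
    print(f"t = {t:5.1f} h   moisture = {m:5.1f} %   rate = {r:7.3f} %/h")
```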

  5. Worst-case and smoothed analysis of $k$-means clustering with Bregman divergences

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, Heiko; Dong, Yingfei; Du, Dingzhu; Ibarra, Oscar

    2009-01-01

    The $k$-means algorithm is the method of choice for clustering large-scale data sets and it performs exceedingly well in practice. Most of the theoretical work is restricted to the case that squared Euclidean distances are used as similarity measure. In many applications, however, data is to be

  6. Prediction of breeding values and selection responses with genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2007-01-01

    There is empirical evidence that genotypes differ not only in mean, but also in environmental variance of the traits they affect. Genetic heterogeneity of environmental variance may indicate genetic differences in environmental sensitivity. The aim of this study was to develop a general framework

  7. Adaptive Smoothed Finite Elements (ASFEM) for history dependent material models

    International Nuclear Information System (INIS)

    Quak, W.; Boogaard, A. H. van den

    2011-01-01

    A successful simulation of a bulk forming process with finite elements can be difficult due to distortion of the finite elements. Nodal smoothed Finite Elements (NSFEM) are an interesting option for such a process since they show good distortion insensitivity and moreover have locking-free behavior and good computational efficiency. In this paper a method is proposed which takes advantage of the nodally smoothed field. This method, named adaptive smoothed finite elements (ASFEM), revises the mesh for every step of a simulation without mapping the history dependent material parameters. In this paper an updated-Lagrangian implementation is presented. Several examples are given to illustrate the method and to show its properties.

  8. Exploration of faint absorption bands in the reflectance spectra of the asteroids by method of optimal smoothing: Vestoids

    Science.gov (United States)

    Shestopalov, D. I.; McFadden, L. A.; Golubeva, L. F.

    2007-04-01

    An optimization method of smoothing noisy spectra was developed to investigate faint absorption bands in the visual spectral region of reflectance spectra of asteroids and the compositional information derived from their analysis. The smoothing algorithm is called "optimal" because the algorithm determines the best running box size to separate weak absorption bands from the noise. The method was tested for its sensitivity to identifying false features in the smoothed spectrum, and its ability to correctly forecast real absorption bands was tested with artificial spectra simulating asteroid reflectance spectra. After validating the method we optimally smoothed 22 vestoid spectra from SMASS1 [Xu, Sh., Binzel, R.P., Burbine, T.H., Bus, S.J., 1995. Icarus 115, 1-35]. We show that the resulting bands are not telluric features. Interpretation of the absorption bands in the asteroid spectra was based on the spectral properties of both terrestrial and meteorite pyroxenes. The bands located near 480, 505, 530, and 550 nm we assigned to spin-forbidden crystal field bands of ferrous iron, whereas the bands near 570, 600, and 650 nm are attributed to the crystal field bands of trivalent chromium and/or ferric iron in low-calcium pyroxenes on the asteroids' surface. While not measured by microprobe analysis, Fe³⁺ site occupancy can be measured with Mössbauer spectroscopy, and is seen in trace amounts in pyroxenes. We believe that trace amounts of Fe³⁺ on vestoid surfaces may be due to oxidation from impacts by icy bodies. If that is the case, they should be ubiquitous in the asteroid belt wherever pyroxene absorptions are found. The pyroxene composition of four asteroids of our set was determined from the band positions of the absorptions at 505 and 1000 nm, implying that there can be orthopyroxenes over the full range of ferruginosity on the vestoid surfaces. For the present we cannot unambiguously interpret the faint absorption bands that are seen in the spectra of 4005 Dyagilev, 4038

  9. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from the baseline, the average difference from the baseline and the level evolution are the considered endpoints. Specific validation criteria based on a standardized distance in means and variances within plus or minus 10% were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance within plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
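
    One plausible reading of the plus-or-minus 10% standardized-distance criterion is sketched below on synthetic data; the exact standardization used in the study may differ, so treat the definitions as assumptions.

```python
# One plausible reading of the +/-10% standardized-distance criterion: compare the
# simulated endpoint to the observed one through standardized differences in the
# mean and in the variance. Definitions and data are assumptions for illustration.
import numpy as np

def standardized_distances(real, simulated):
    d_mean = (simulated.mean() - real.mean()) / real.std(ddof=1)
    d_var = (simulated.var(ddof=1) - real.var(ddof=1)) / real.var(ddof=1)
    return d_mean, d_var

def passes_criterion(real, simulated, tol=0.10):
    d_mean, d_var = standardized_distances(real, simulated)
    return abs(d_mean) <= tol and abs(d_var) <= tol

rng = np.random.default_rng(5)
real = rng.normal(210, 30, 2000)            # observed cholesterol endpoint (mg/dL)
sim_close = rng.normal(211, 30, 2000)       # model close to the data-generating process
sim_shifted = rng.normal(230, 45, 2000)     # clearly mis-specified model

for name, sim in [("close model  ", sim_close), ("shifted model", sim_shifted)]:
    d_m, d_v = standardized_distances(real, sim)
    print(f"{name}: d_mean = {d_m:+.3f}, d_var = {d_v:+.3f}, "
          f"passes = {passes_criterion(real, sim)}")
```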

  10. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As part of the engineering design activities of the International Thermonuclear Experimental Reactor (ITER), shielding experiments on type 316 stainless steel (SS316) and on the combined SS316/water system have been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working and computing time was required to determine the Weight Window parameters, and the variance reduction by the Weight Window method of the MCNP code proved limited and complicated. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are shown. As results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance variation: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)

  11. A Method for Calculating the Mean Orbits of Meteor Streams

    Science.gov (United States)

    Voloshchuk, Yu. I.; Kashcheev, B. L.

    An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetical (sometimes, weighted) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they are not only correlated but dependent quantities, with interrelations that are in most cases nonlinear. Numerous examples are given of such inaccuracies, in particular, the cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest a computation algorithm in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams, in the case that their orbits are obtained by averaging the orbital elements of meteoroids forming the stream, without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams. As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.

  12. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and make it clear how those distribution parameters influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the various-order variance contributions into the form of expectations via a kernel function, the proposed main and total sensitivity indices can be seen as a "by-product" of Sobol′s variance-based sensitivity analysis without any additional output evaluation. Since Sobol′s variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs the sparse grid integration method to compute the derivative-based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method

  13. Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel

    Science.gov (United States)

    Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying

    2017-01-01

    In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…
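
    The same demonstration can be run outside Excel; the short simulation below shows that dividing the sum of squared deviations by n-1 estimates the population variance without bias, while dividing by n underestimates it.

```python
# Repeated-sampling demonstration of the n-1 divisor (Python rather than Excel/VBA):
# dividing the sum of squared deviations by n-1 is unbiased for the population
# variance, while dividing by n systematically underestimates it.
import numpy as np

rng = np.random.default_rng(0)
pop_var = 4.0                                   # population variance (sigma = 2)
n, trials = 5, 200_000

samples = rng.normal(0, np.sqrt(pop_var), size=(trials, n))
ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

print("population variance       :", pop_var)
print("mean of SS/(n-1) estimates:", (ss / (n - 1)).mean().round(3))   # ~4.0
print("mean of SS/n estimates    :", (ss / n).mean().round(3))         # ~3.2 (biased low)
```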

  14. An Efficient SDN Load Balancing Scheme Based on Variance Analysis for Massive Mobile Users

    Directory of Open Access Journals (Sweden)

    Hong Zhong

    2015-01-01

    Full Text Available In a traditional network, server load balancing is used to satisfy the demand for high data volumes. The technique requires large capital investment while offering poor scalability and flexibility, which makes it difficult to support highly dynamic workload demands from massive numbers of mobile users. To solve these problems, this paper analyses the principle of software-defined networking (SDN) and presents a new probabilistic method of load balancing based on variance analysis. The method can be used to dynamically manage traffic flows for supporting massive mobile users in SDN networks. The paper proposes a solution using the OpenFlow virtual switching technology instead of the traditional hardware switching technology. An SDN controller monitors the data traffic of each port by means of variance analysis and provides a probability-based selection algorithm to redirect traffic dynamically with the OpenFlow technology. Compared with existing load balancing methods designed for traditional networks, this solution has lower cost, higher reliability, and greater scalability, which satisfies the needs of mobile users.
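
    To illustrate a variance-aware, probability-based selection rule, the sketch below derives per-server selection probabilities from monitored load means and variances; it is an assumption-laden toy, not the paper's algorithm or an OpenFlow implementation.

```python
# Illustrative probability-based server selection in the spirit of the scheme
# described above: a controller monitors per-server load samples, and selection
# probabilities favour servers whose load is both low and stable (low variance).
# This is not the paper's exact algorithm nor an OpenFlow implementation.
import numpy as np

rng = np.random.default_rng(9)

# Recent load samples (e.g., port utilisation %) collected per server
load_history = {
    "s1": rng.normal(30, 3, 50),      # lightly loaded, stable
    "s2": rng.normal(55, 15, 50),     # moderately loaded, bursty
    "s3": rng.normal(80, 5, 50),      # heavily loaded
}

scores = {}
for name, samples in load_history.items():
    mean, var = samples.mean(), samples.var()
    scores[name] = 1.0 / (mean + np.sqrt(var))     # penalise high and unstable load

total = sum(scores.values())
probs = {name: s / total for name, s in scores.items()}
print("selection probabilities:", {k: round(v, 3) for k, v in probs.items()})

# Dispatch 10 new flows according to these probabilities
servers = list(probs)
choices = rng.choice(servers, size=10, p=[probs[s] for s in servers])
print("assigned flows:", list(choices))
```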

  15. Smooth polyhedral surfaces

    KAUST Repository

    Günther, Felix; Jiang, Caigui; Pottmann, Helmut

    2017-01-01

    Polyhedral surfaces are fundamental objects in architectural geometry and industrial design. Whereas closeness of a given mesh to a smooth reference surface and its suitability for numerical simulations were already studied extensively, the aim of our work is to find and to discuss suitable assessments of smoothness of polyhedral surfaces that only take the geometry of the polyhedral surface itself into account. Motivated by analogies to classical differential geometry, we propose a theory of smoothness of polyhedral surfaces including suitable notions of normal vectors, tangent planes, asymptotic directions, and parabolic curves that are invariant under projective transformations. It is remarkable that seemingly mild conditions significantly limit the shapes of faces of a smooth polyhedral surface. Besides being of theoretical interest, we believe that smoothness of polyhedral surfaces is of interest in the architectural context, where vertices and edges of polyhedral surfaces are highly visible.

  16. Smooth polyhedral surfaces

    KAUST Repository

    Günther, Felix

    2017-03-15

    Polyhedral surfaces are fundamental objects in architectural geometry and industrial design. Whereas closeness of a given mesh to a smooth reference surface and its suitability for numerical simulations were already studied extensively, the aim of our work is to find and to discuss suitable assessments of smoothness of polyhedral surfaces that only take the geometry of the polyhedral surface itself into account. Motivated by analogies to classical differential geometry, we propose a theory of smoothness of polyhedral surfaces including suitable notions of normal vectors, tangent planes, asymptotic directions, and parabolic curves that are invariant under projective transformations. It is remarkable that seemingly mild conditions significantly limit the shapes of faces of a smooth polyhedral surface. Besides being of theoretical interest, we believe that smoothness of polyhedral surfaces is of interest in the architectural context, where vertices and edges of polyhedral surfaces are highly visible.

  17. How large are actor and partner effects of personality on relationship satisfaction? The importance of controlling for shared method variance.

    Science.gov (United States)

    Orth, Ulrich

    2013-10-01

    Previous research suggests that the personality of a relationship partner predicts not only the individual's own satisfaction with the relationship but also the partner's satisfaction. Based on the actor-partner interdependence model, the present research tested whether actor and partner effects of personality are biased when the same method (e.g., self-report) is used for the assessment of personality and relationship satisfaction and, consequently, shared method variance is not controlled for. Data came from 186 couples, of whom both partners provided self- and partner reports on the Big Five personality traits. Depending on the research design, actor effects were larger than partner effects (when using only self-reports), smaller than partner effects (when using only partner reports), or of about the same size as partner effects (when using self- and partner reports). The findings attest to the importance of controlling for shared method variance in dyadic data analysis.

  18. Polarization beam smoothing for inertial confinement fusion

    International Nuclear Information System (INIS)

    Rothenberg, Joshua E.

    2000-01-01

    For both direct and indirect drive approaches to inertial confinement fusion (ICF) it is imperative to obtain the best possible drive beam uniformity. The approach chosen for the National Ignition Facility uses a random-phase plate to generate a speckle pattern with a precisely controlled envelope on target. A number of temporal smoothing techniques can then be employed to utilize bandwidth to rapidly change the speckle pattern, and thus average out the small-scale speckle structure. One technique which generally can supplement other smoothing methods is polarization smoothing (PS): the illumination of the target with two distinct and orthogonally polarized speckle patterns. Since these two polarizations do not interfere, the intensity patterns add incoherently, and the rms nonuniformity can be reduced by a factor of √2. A number of PS schemes are described and compared on the basis of the aggregate rms and the spatial spectrum of the focused illumination distribution. The √2 rms nonuniformity reduction of PS is present on an instantaneous basis and is, therefore, of particular interest for the suppression of laser plasma instabilities, which have a very rapid response time. When combining PS and temporal methods, such as smoothing by spectral dispersion (SSD), PS can reduce the rms of the temporally smoothed illumination by an additional factor of √2. However, it has generally been thought that in order to achieve this reduction of √2, the increased divergence of the beam from PS must exceed the divergence of SSD. It is also shown here that, over the time scales of interest to direct or indirect drive ICF, under some conditions PS can reduce the smoothed illumination rms by nearly √2 even when the PS divergence is much smaller than that of SSD. (c) 2000 American Institute of Physics
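
    The √2 reduction has a simple statistical origin that can be checked numerically: fully developed speckle intensity is exponentially distributed (rms/mean equal to 1), and adding two statistically independent, orthogonally polarized patterns incoherently reduces the normalized rms by about √2. The sketch below is a statistical illustration under that assumption, not a model of any specific PS optical layout.

      # Contrast (rms nonuniformity) of one speckle pattern vs. the incoherent sum of two.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 2_000_000
      I1 = rng.exponential(scale=1.0, size=n)   # speckle pattern, polarization 1
      I2 = rng.exponential(scale=1.0, size=n)   # independent pattern, polarization 2

      def contrast(I):                          # rms nonuniformity = std / mean
          return I.std() / I.mean()

      print("single polarization:", contrast(I1))               # ~1.0
      print("two polarizations  :", contrast(0.5 * (I1 + I2)))  # ~1/sqrt(2) ~ 0.707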

  19. SmoothMoves : Smooth pursuits head movements for augmented reality

    NARCIS (Netherlands)

    Esteves, Augusto; Verweij, David; Suraiya, Liza; Islam, Rasel; Lee, Youryang; Oakley, Ian

    2017-01-01

    SmoothMoves is an interaction technique for augmented reality (AR) based on smooth pursuits head movements. It works by computing correlations between the movements of on-screen targets and the user's head while tracking those targets. The paper presents three studies. The first suggests that head

  20. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
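
    To make the quantity under discussion concrete, the sketch below computes the standard (non-overlapping) Allan variance of a sequence of fractional frequency readings. This is only the textbook definition and does not reproduce the paper's spectral-ambiguity analysis; the white-noise test series and averaging factors are arbitrary.

      # Non-overlapping Allan variance for averaging factor m (tau = m * tau0).
      import numpy as np

      def allan_variance(y, m):
          """Allan variance: 0.5 * <(ybar_{i+1} - ybar_i)^2> over block averages of length m."""
          y = np.asarray(y, dtype=float)
          nblocks = len(y) // m
          ybar = y[: nblocks * m].reshape(nblocks, m).mean(axis=1)  # block averages over tau
          d = np.diff(ybar)                                         # first differences
          return 0.5 * np.mean(d ** 2)

      rng = np.random.default_rng(0)
      y = rng.normal(size=100_000)              # white frequency noise
      for m in (1, 10, 100):
          print(m, allan_variance(y, m))        # scales roughly as 1/m for white frequency noise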

  1. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  2. Beyond mean-field approach to heavy-ion reactions around the Coulomb barrier

    Directory of Open Access Journals (Sweden)

    Ayik Sakir

    2011-10-01

    Full Text Available Dissipation and fluctuations of one-body observables in heavy-ion reactions around the Coulomb barrier are investigated with a microscopic stochastic mean-field approach. By projecting the stochastic mean-field dynamics on a suitable collective path, transport coefficients associated with the relative distance between the colliding nuclei and with a fragment mass are extracted. Although the microscopic mean-field approach is known to underestimate the variance of the fragment mass distribution, the description of the variance is much improved by the stochastic mean-field method. While the fluctuations are consistent with the empirical (semiclassical) analysis of the experimental data, for the mean values of the macroscopic variables the semiclassical description breaks down below the Coulomb barrier.

  3. Smoothing a Piecewise-Smooth: An Example from Plankton Population Dynamics

    DEFF Research Database (Denmark)

    Piltz, Sofia Helena

    2016-01-01

    In this work we discuss a piecewise-smooth dynamical system inspired by plankton observations and constructed for one predator switching its diet between two different types of prey. We then discuss two smooth formulations of the piecewise-smooth model obtained by using a hyperbolic tangent function and by adding a dimension to the system. We compare the model behaviour of the three systems and show an example case where the steepness of the switch is determined from a comparison with data on freshwater plankton.
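
    A minimal sketch of the hyperbolic-tangent smoothing idea: a discontinuous diet switch (prefer prey 1 when its abundance exceeds a threshold, otherwise prey 2) is replaced by a smooth sigmoidal preference. The threshold, the steepness k, and the toy preference function are illustrative assumptions, not the published model.

      # Hard switch vs. smooth tanh switch; the hard switch is recovered as k -> infinity.
      import numpy as np

      def switch_piecewise(p1, threshold=1.0):
          """Hard switch: preference for prey 1 is 0 or 1."""
          return np.where(p1 > threshold, 1.0, 0.0)

      def switch_smooth(p1, threshold=1.0, k=10.0):
          """Smooth sigmoidal switch based on the hyperbolic tangent."""
          return 0.5 * (1.0 + np.tanh(k * (p1 - threshold)))

      p1 = np.linspace(0.0, 2.0, 9)
      print(switch_piecewise(p1))
      print(switch_smooth(p1, k=10.0))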

  4. Wind power forecast error smoothing within a wind farm

    International Nuclear Information System (INIS)

    Saleck, Nadja; Bremen, Lueder von

    2007-01-01

    Smoothing of wind power forecast errors is well known for large areas. Comparable effects within a single wind farm are investigated in this paper. A neural network was used to predict the power output of a wind farm in north-western Germany comprising 17 turbines. A comparison was made between an algorithm that fits the mean wind and mean power data of the wind farm and a second algorithm that fits the wind and power data individually for each turbine. The evaluation of root mean square errors (RMSE) shows that relatively small smoothing effects occur. However, it can be shown for this wind farm that the individual calculations have the advantage that only a few turbines are needed to give better results than the use of mean data. Furthermore, different results occurred depending on whether the predicted wind speeds are fitted directly to the observed wind power or are first fitted to the observed wind speeds and then applied to a power curve. The first approach gives slightly better RMSE values; the bias improves considerably.

  5. Multi-period fuzzy mean-semi variance portfolio selection problem with transaction cost and minimum transaction lots using genetic algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Barati

    2016-04-01

    Full Text Available Multi-period models of portfolio selection have been developed in the literature under certain assumptions. In this study, for the first time, the portfolio selection problem is modeled on the basis of mean-semi variance with transaction costs and minimum transaction lots, considering functional constraints and fuzzy parameters. Functional constraints such as transaction costs and minimum transaction lots were included, and the asset return parameters were treated as trapezoidal fuzzy numbers. An efficient genetic algorithm (GA) was designed, the results were analyzed using numerical instances, and sensitivity analyses were carried out. In the numerical study, the problem was solved with and without each type of constraint, including transaction costs and minimum transaction lots. In addition, sensitivity analysis was used to present the model results under variations of the minimum expected rate of return over the programming periods.
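
    A minimal sketch of the downside risk measure underlying the mean-semi variance model: only returns below the expected return contribute to risk. The fuzzy returns, transaction costs, minimum lots, and genetic algorithm of the paper are not reproduced; the scenario data and weights below are hypothetical.

      # Portfolio semi-variance from scenario returns and a weight vector.
      import numpy as np

      def portfolio_semivariance(returns, weights):
          """returns: (T, n) scenario returns; weights: (n,) portfolio weights."""
          r_p = returns @ np.asarray(weights)            # portfolio return per scenario
          downside = np.minimum(r_p - r_p.mean(), 0.0)   # deviations below the mean only
          return np.mean(downside ** 2)

      rng = np.random.default_rng(0)
      scenarios = rng.normal(0.01, 0.05, size=(250, 3))  # hypothetical daily returns, 3 assets
      w = np.array([0.5, 0.3, 0.2])
      print("mean return  :", (scenarios @ w).mean())
      print("semi-variance:", portfolio_semivariance(scenarios, w))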

  6. Fuzzy Logic Based Edge Detection in Smooth and Noisy Clinical Images.

    Directory of Open Access Journals (Sweden)

    Izhar Haq

    Full Text Available Edge detection has beneficial applications in fields such as machine vision, pattern recognition and biomedical imaging. Edge detection highlights the high-frequency components in an image. Edge detection is a challenging task, and it becomes more arduous when it comes to noisy images. This study focuses on fuzzy logic based edge detection in smooth and noisy clinical images. The proposed method (in noisy images) employs a 3 × 3 mask guided by a fuzzy rule set. Moreover, in the case of smooth clinical images, an extra contrast-adjustment mask is integrated with the edge detection mask to intensify the smooth images. The developed method was tested on noise-free, smooth and noisy images. The results were compared with other established edge detection techniques such as Sobel, Prewitt, Laplacian of Gaussian (LOG), Roberts and Canny. When the developed edge detection technique was applied to a smooth clinical image of size 270 × 290 pixels having 24 dB 'salt and pepper' noise, it detected very few (22) false edge pixels, compared with Sobel (1931), Prewitt (2741), LOG (3102), Roberts (1451) and Canny (1045). Therefore it is evident that the developed method offers an improved solution to the edge detection problem in smooth and noisy clinical images.
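
    The sketch below conveys the general idea only: compute local intensity differences over a 3 × 3 neighbourhood and map their magnitude to an edge degree in [0, 1] with a simple fuzzy membership function. The actual rule set, masks, and contrast-adjustment step of the paper are not reproduced; the membership parameters a and b are illustrative assumptions.

      # Fuzzy-style edge degree from 3x3 neighbourhood differences (illustration only).
      import numpy as np

      def edge_degree(img, a=20.0, b=5.0):
          """img: 2-D grayscale array. Returns fuzzy edge membership in [0, 1]."""
          img = img.astype(float)
          gx = np.abs(img[1:-1, 2:] - img[1:-1, :-2])    # horizontal difference in 3x3 window
          gy = np.abs(img[2:, 1:-1] - img[:-2, 1:-1])    # vertical difference in 3x3 window
          g = np.maximum(gx, gy)
          return 1.0 / (1.0 + np.exp(-(g - a) / b))      # sigmoidal membership: "is an edge"

      test = np.zeros((8, 8))
      test[:, 4:] = 100.0                                # vertical step edge
      print(np.round(edge_degree(test), 2))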

  7. Semi-Smooth Newton Method for Solving 2D Contact Problems with Tresca and Coulomb Friction

    Directory of Open Access Journals (Sweden)

    Kristina Motyckova

    2013-01-01

    Full Text Available The contribution deals with contact problems for two elastic bodies with friction. After the description of the problem we present its discretization based on linear or bilinear finite elements. The semi-smooth Newton method is used to find the solution, from which we derive active set algorithms. Finally, we arrive at a globally convergent dual implementation of the algorithms in terms of the Lagrange multipliers for the Tresca problem. Numerical experiments conclude the paper.

  8. Mean field games

    KAUST Repository

    Gomes, Diogo A.

    2014-01-06

    In this talk we will report on new results concerning the existence of smooth solutions for time-dependent mean-field games. This new result is established through a combination of various tools, including several a priori estimates for time-dependent mean-field games, together with new techniques for the regularity of Hamilton-Jacobi equations.

  9. Mean field games

    KAUST Repository

    Gomes, Diogo A.

    2014-01-01

    In this talk we will report on new results concerning the existence of smooth solutions for time-dependent mean-field games. This new result is established through a combination of various tools, including several a priori estimates for time-dependent mean-field games, together with new techniques for the regularity of Hamilton-Jacobi equations.

  10. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    Energy Technology Data Exchange (ETDEWEB)

    Sevastianov, L. A., E-mail: sevast@sci.pfu.edu.ru [Peoples' Friendship University of Russia (Russian Federation); Egorov, A. A. [Russian Academy of Sciences, Prokhorov General Physics Institute (Russian Federation); Sevastyanov, A. L. [Peoples' Friendship University of Russia (Russian Federation)

    2013-02-15

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.

  11. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    International Nuclear Information System (INIS)

    Sevastianov, L. A.; Egorov, A. A.; Sevastyanov, A. L.

    2013-01-01

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell’s equations is made to obey “inclined” boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of “entanglement” of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lüneburg lens.

  12. Ensuring dynamic load smoothness in problem of controlling Atomic Electric Power Stations exclusive mechanisms

    International Nuclear Information System (INIS)

    Shumilov, V.F.

    2003-01-01

    New methods for the investigation of automatic systems, based on the inverse problems of dynamics and using rational, trigonometric and polynomial spline functions, are discussed. By means of the SH function, the technological regimes (start-up, steadiness, racing, braking, reverse, and stop) were determined. A procedure for ensuring the smoothness of dynamic loads is suggested, and an example of controlling the transport systems for fuel loading is considered [ru]

  13. Use of an excess variance approach for the certification of reference materials by interlaboratory comparison

    International Nuclear Information System (INIS)

    Crozet, M.; Rigaux, C.; Roudil, D.; Tuffery, B.; Ruas, A.; Desenfant, M.

    2014-01-01

    In the nuclear field, the accuracy and comparability of analytical results are crucial to ensure correct accountancy, good process control and safe operational conditions. All of these require reliable measurements based on reference materials whose certified values must be obtained by robust metrological approaches according to the requirements of ISO Guides 34 and 35. The data processing of the characterization step is one of the key steps of a reference material production process. Among several methods, the use of interlaboratory comparison results for reference material certification is very common. The DerSimonian and Laird excess variance approach, described and implemented in this paper, is a simple and efficient method for the data processing of interlaboratory comparison results for reference material certification. By taking into account not only the laboratory uncertainties but also the spread of the individual results in the calculation of the weighted mean, this approach minimizes the risk of obtaining biased certified values in cases where one or several laboratories either underestimate their measurement uncertainties or do not identify all measurement biases. This statistical method has been applied to a new CETAMA plutonium reference material certified by interlaboratory comparison and has been compared to the classical weighted mean approach described in ISO Guide 35. This paper shows the benefits of using an 'excess variance' approach for the certification of reference materials by interlaboratory comparison. (authors)
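
    A minimal sketch of the DerSimonian-Laird "excess variance" (random-effects) weighted mean: the between-laboratory variance tau^2, estimated from Cochran's Q statistic, is added to each laboratory's reported variance before computing the weighted mean and its uncertainty. The input data below are purely illustrative, not the CETAMA results.

      # DerSimonian-Laird weighted mean with excess (between-laboratory) variance.
      import numpy as np

      def dersimonian_laird(x, u):
          """x: laboratory means; u: their standard uncertainties."""
          x, u = np.asarray(x, float), np.asarray(u, float)
          w = 1.0 / u**2                                   # fixed-effects weights
          xbar_fe = np.sum(w * x) / np.sum(w)
          Q = np.sum(w * (x - xbar_fe) ** 2)               # Cochran's Q statistic
          k = len(x)
          tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
          w_star = 1.0 / (u**2 + tau2)                     # weights including excess variance
          xbar = np.sum(w_star * x) / np.sum(w_star)
          u_xbar = np.sqrt(1.0 / np.sum(w_star))
          return xbar, u_xbar, tau2

      labs = [10.12, 10.30, 9.95, 10.41]                   # hypothetical laboratory results
      uncs = [0.05, 0.08, 0.06, 0.10]
      print(dersimonian_laird(labs, uncs))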

  14. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

    International Nuclear Information System (INIS)

    Westner, Guenther; Madlener, Reinhard

    2010-01-01

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)
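
    As background on the MVP machinery used above, the sketch below computes the two quantities the return-risk profiles are built from: expected portfolio return and portfolio standard deviation for a given weight vector, vector of expected returns, and covariance matrix. The numerical values are placeholders, not the CHP returns estimated by the authors. When the correlations between markets are low, the portfolio standard deviation falls below the weighted average of the individual standard deviations, which is the diversification effect the paper quantifies.

      # Mean-Variance Portfolio return and risk for a weight vector.
      import numpy as np

      def portfolio_return_risk(weights, exp_returns, cov):
          w = np.asarray(weights, float)
          ret = float(w @ exp_returns)                 # expected portfolio return
          risk = float(np.sqrt(w @ cov @ w))           # portfolio standard deviation
          return ret, risk

      mu = np.array([0.08, 0.06, 0.07])                # hypothetical expected returns (3 markets)
      cov = np.array([[0.010, 0.002, 0.001],
                      [0.002, 0.006, 0.001],
                      [0.001, 0.001, 0.009]])          # hypothetical covariance of returns
      print(portfolio_return_risk([0.4, 0.4, 0.2], mu, cov))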

  15. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

    Energy Technology Data Exchange (ETDEWEB)

    Westner, Guenther; Madlener, Reinhard [E.ON Energy Projects GmbH, Arnulfstrasse 56, 80335 Munich (Germany)

    2010-12-15

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)

  16. Low-loss integrated electrical surface plasmon source with ultra-smooth metal film fabricated by polymethyl methacrylate ‘bond and peel’ method

    Science.gov (United States)

    Liu, Wenjie; Hu, Xiaolong; Zou, Qiushun; Wu, Shaoying; Jin, Chongjun

    2018-06-01

    External light sources are mostly employed to functionalize plasmonic components, resulting in a bulky footprint. Electrically driven integrated plasmonic devices, combining ultra-compact critical feature sizes with extremely high transmission speeds and low power consumption, can link plasmonics with the present-day electronic world. To realize this prospect, suppressing the losses in plasmonic devices becomes a pressing issue. In this work, we developed a novel polymethyl methacrylate ‘bond and peel’ method to fabricate metal films with sub-nanometer smooth surfaces on semiconductor wafers. Based on this method, we further fabricated a compact plasmonic source containing a metal-insulator-metal (MIM) waveguide with an ultra-smooth metal surface on a GaAs-based light-emitting diode wafer. An increase in the propagation length of the SPP mode by a factor of 2.95 was achieved compared with a conventional device containing a relatively rough metal surface. Numerical calculations further confirmed that the propagation length is comparable to the theoretical prediction for an MIM waveguide with perfectly smooth metal surfaces. This method facilitates low-loss, highly integrated, electrically driven plasmonic devices and thus provides an immediate opportunity for the practical application of on-chip integrated plasmonic circuits.

  17. Application of Holt exponential smoothing and ARIMA method for data population in West Java

    Science.gov (United States)

    Supriatna, A.; Susanti, D.; Hertini, E.

    2017-01-01

    One time-series method that is often used to predict data containing a trend is the Holt method. The Holt method uses different smoothing parameters on the original data with the aim of smoothing the trend value. In addition to Holt, the ARIMA method can be used on a wide variety of data, including data containing a trend pattern. The actual population data from 1998-2015 contain a trend, so both the Holt and ARIMA methods can be applied to obtain predicted values for several periods. The best method is selected according to the smallest MAPE and MAE errors. The results using the Holt method are a population of 47.205.749 in 2016, 47.535.324 in 2017, and 48.041.672 in 2018, with a MAPE of 0,469744 and an MAE of 189.731. The results using the ARIMA method are 46.964.682 in 2016, 47.342.189 in 2017, and 47.899.696 in 2018, with a MAPE of 0,4380 and an MAE of 176.626.
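
    A minimal implementation of Holt's linear-trend exponential smoothing, the first of the two methods compared above. The smoothing parameters alpha and beta, the initialization, and the toy series are illustrative; the ARIMA counterpart and the West Java data are not reproduced.

      # Holt's two-parameter (level + trend) exponential smoothing with h-step forecasts.
      def holt_forecast(y, alpha=0.8, beta=0.2, horizon=3):
          level, trend = y[0], y[1] - y[0]                      # simple initialization
          for t in range(1, len(y)):
              prev_level = level
              level = alpha * y[t] + (1 - alpha) * (level + trend)
              trend = beta * (level - prev_level) + (1 - beta) * trend
          return [level + h * trend for h in range(1, horizon + 1)]

      # Hypothetical trending series (e.g., annual population in millions)
      series = [43.1, 43.6, 44.0, 44.5, 45.1, 45.6, 46.1, 46.7]
      print(holt_forecast(series, horizon=3))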

  18. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in ...

  19. The CACAO Method for Smoothing, Gap Filling, and Characterizing Seasonal Anomalies in Satellite Time Series

    Science.gov (United States)

    Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.

    2013-01-01

    Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was first assessed over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, its performance was analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performance for smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series capture the vegetation dynamics well and show no gaps, as compared to the 50-60% of data still missing after AG or SG reconstructions. Results of the simulation experiments as well as the comparison with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
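
    A hedged sketch of the core CACAO idea: adjust a seasonal climatology to sparse, noisy observations by estimating a temporal shift and a magnitude scale. Here the shift is found by grid search and the scale by least squares; the published implementation, its per-season weighting, and its gap handling are not reproduced, and the toy climatology and sampling are assumptions.

      # Fit a shift (days) and scale factor of a climatology to observations.
      import numpy as np

      def fit_shift_scale(t_obs, y_obs, climatology, shifts=np.arange(-30, 31, 1)):
          """climatology: callable giving the climatological value at day-of-year t."""
          best = None
          for d in shifts:
              c = climatology(t_obs - d)
              s = np.sum(c * y_obs) / np.sum(c * c)          # least-squares scale for this shift
              sse = np.sum((y_obs - s * c) ** 2)
              if best is None or sse < best[2]:
                  best = (d, s, sse)
          return best[:2]                                     # (shift in days, scale factor)

      # Toy LAI-like seasonal climatology and synthetic observations shifted by +10 days, scaled by 1.2
      clim = lambda t: 1.0 + 2.0 * np.exp(-((np.asarray(t) - 180.0) / 40.0) ** 2)
      rng = np.random.default_rng(0)
      t = np.arange(0, 365, 8.0)                              # sparse, regular sampling
      y = 1.2 * clim(t - 10) + rng.normal(0, 0.05, t.size)
      print(fit_shift_scale(t, y, clim))                      # should be close to (10, 1.2)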

  20. A practical look at Monte Carlo variance reduction methods in radiation shielding

    Energy Technology Data Exchange (ETDEWEB)

    Olsher, Richard H. [Los Alamos National Laboratory, Los Alamos (United States)

    2006-04-15

    With the advent of inexpensive computing power over the past two decades, applications of Monte Carlo radiation transport techniques have proliferated dramatically. At Los Alamos, the Monte Carlo codes MCNP5 and MCNPX are used routinely on personal computer platforms for radiation shielding analysis and dosimetry calculations. These codes feature a rich palette of Variance Reduction (VR) techniques. The motivation of VR is to exchange user efficiency for computational efficiency. It has been said that a few hours of user time often reduce computational time by several orders of magnitude. Unfortunately, user time can stretch into many hours, as most VR techniques require significant user experience and intervention for proper optimization. It is the purpose of this paper to outline VR strategies, tested in practice, optimized for several common radiation shielding tasks, with the hope of reducing user setup time for similar problems. A strategy is defined in this context to mean a collection of MCNP radiation transport physics options and VR techniques that work synergistically to optimize a particular shielding task. Examples are offered in the areas of source definition, skyshine, streaming, and transmission.
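
    To give a flavour of one common variance-reduction building block used in shielding work, the sketch below shows weight-based Russian roulette and particle splitting: a particle whose statistical weight falls below a cutoff is either killed or survives with its weight raised so the game stays unbiased on average, while an over-weighted particle is split into several lighter copies. This is a generic, hedged illustration, not MCNP's weight-window implementation, and the cutoff and target weights are arbitrary.

      # Generic Russian roulette and splitting on particle statistical weights.
      import random

      def russian_roulette(weight, cutoff=0.1, survival_weight=0.5, rng=random):
          """Return the particle's new weight, or None if it is killed."""
          if weight >= cutoff:
              return weight
          if rng.random() < weight / survival_weight:   # survival probability preserves the mean weight
              return survival_weight
          return None

      def split(weight, target_weight=1.0):
          """Split an over-weighted particle into copies of roughly the target weight."""
          n = max(1, int(round(weight / target_weight)))
          return [weight / n] * n

      print(russian_roulette(0.02))   # usually killed, occasionally survives with weight 0.5
      print(split(3.2))               # three copies of weight ~1.07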