International Nuclear Information System (INIS)
Morales P, J.R.; Avila P, P.
1996-01-01
If we consider the maximum permissible levels established for oysters, collecting oysters is effectively forbidden at the four stations of the El Chijol Channel (Veracruz, Mexico), as well as along the channel itself, because the concentrations of the metals studied exceed these limits. In this case the application of Welch tests was not necessary. For the water hyacinth, the treatment means were unequal for Fe, Cu, Ni, and Zn. This case is more illustrative, as the conclusion was reached by applying Welch tests to treatments with heterogeneous variances. (Author)
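As a worked illustration of the test named above, the sketch below implements Welch's one-way ANOVA for groups with heterogeneous variances. The group data are invented placeholders, not the oyster or water-hyacinth measurements; only the Welch F formula itself is standard.

```python
import numpy as np
from scipy import stats

def welch_anova(groups):
    """Welch's F test for equality of means when group variances differ."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    grand = (w * m).sum() / w.sum()             # weighted grand mean
    A = (w * (m - grand) ** 2).sum() / (k - 1)
    lam = ((1 - w / w.sum()) ** 2 / (n - 1)).sum()
    F = A / (1 + 2 * (k - 2) / (k ** 2 - 1) * lam)
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * lam)
    return F, df1, df2, stats.f.sf(F, df1, df2)

rng = np.random.default_rng(0)
# Four hypothetical treatments with unequal spreads, as in the Fe/Cu/Ni/Zn case.
groups = [rng.normal(mu, sd, 8) for mu, sd in [(10, 1), (12, 3), (15, 5), (11, 2)]]
F, df1, df2, p = welch_anova(groups)
print(f"Welch F({df1}, {df2:.1f}) = {F:.2f}, p = {p:.4f}")
```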
ANOVA parameters influence in LCF experimental data and simulation results
Directory of Open Access Journals (Sweden)
Vercelli A.
2010-06-01
Full Text Available The virtual design of components undergoing thermomechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method is a useful instrument which becomes increasingly effective as the geometrical and numerical modelling becomes more accurate. The definition of the constitutive model plays an important role in the effectiveness of the numerical simulation [1, 2], as shown, for example, in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper, the main topic of the research activity is to investigate whether the parameters found to be influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in accounting for all the phenomena that actually influence the life of the component. To this end, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be simple and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity was developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations were run on a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests. The stress, strain, thermal results from the thermo
ANOVA and ANCOVA: A GLM Approach
Rutherford, Andrew
2012-01-01
Provides an in-depth treatment of ANOVA and ANCOVA techniques from a linear model perspective ANOVA and ANCOVA: A GLM Approach provides a contemporary look at the general linear model (GLM) approach to the analysis of variance (ANOVA) of one- and two-factor psychological experiments. With its organized and comprehensive presentation, the book successfully guides readers through conventional statistical concepts and how to interpret them in GLM terms, treating the main single- and multi-factor designs as they relate to ANOVA and ANCOVA. The book begins with a brief history of the separate dev
A default Bayesian hypothesis test for ANOVA designs
Wetzels, R.; Grasman, R.P.P.P.; Wagenmakers, E.J.
2012-01-01
This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA
ANOVA for the behavioral sciences researcher
Cardinal, Rudolf N
2013-01-01
This new book provides a theoretical and practical guide to analysis of variance (ANOVA) for those who have not had a formal course in this technique, but need to use this analysis as part of their research. From their experience in teaching this material and applying it to research problems, the authors have created a summary of the statistical theory underlying ANOVA, together with important issues, guidance, practical methods, references, and hints about using statistical software. These have been organized so that the student can learn the logic of the analytical techniques but also use the
Analysis of Variance: What Is Your Statistical Software Actually Doing?
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Sequential experimental design based generalised ANOVA
Energy Technology Data Exchange (ETDEWEB)
Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in
2016-07-15
Over the last decade, the surrogate modelling technique has gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized to predict the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.
Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package
Chiou, Geoffrey Nelson
The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows for the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow for the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
International Nuclear Information System (INIS)
Murthy, K.P.N.; Indira, R.
1986-01-01
An analytical formulation is presented for calculating the mean and variance of transmission for a model deep-penetration problem. With this formulation, the variance reduction characteristics of two biased Monte Carlo schemes are studied. The first is the usual exponential biasing wherein it is shown that the optimal biasing parameter depends sensitively on the scattering properties of the shielding medium. The second is a scheme that couples exponential biasing to the scattering angle biasing proposed recently. It is demonstrated that the coupled scheme performs better than exponential biasing
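A minimal sketch of the idea, assuming the simplest possible deep-penetration toy problem: a purely absorbing slab, for which transmission is exactly exp(-ΣT). Exponential biasing stretches the sampled free paths and carries a likelihood-ratio weight so the estimator stays unbiased. The biasing parameter `sigma_b` below is a rule-of-thumb choice, not the optimum the abstract discusses (which depends on scattering, absent in this toy).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, T, N = 1.0, 10.0, 100_000          # cross-section, slab thickness, histories
exact = np.exp(-sigma * T)

# Analog estimator: free path ~ Exp(sigma); score 1 if the particle crosses the slab.
x = rng.exponential(1 / sigma, N)
analog = (x > T).astype(float)

# Exponentially biased estimator: sample from Exp(sigma_b) with sigma_b < sigma,
# weighting each crossing by the likelihood ratio f(x)/g(x) to remain unbiased.
sigma_b = 1.0 / T                          # illustrative choice: ~1 mean free path per slab
xb = rng.exponential(1 / sigma_b, N)
biased = np.where(xb > T, (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * xb), 0.0)

for name, score in [("analog", analog), ("biased", biased)]:
    print(f"{name:7s}: {score.mean():.3e} +/- {score.std(ddof=1) / np.sqrt(N):.1e}"
          f"  (exact {exact:.3e})")
```

For deep penetration the analog run scores almost no histories, while the biased run scores many with small weights, which is where the variance reduction comes from.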
Shabani, Farzin; Kumar, Lalit; Solhjouy-fard, Samaneh
2017-08-01
The aim of this study was to comparatively investigate and evaluate the capabilities of correlative and mechanistic modelling processes, applied to the projection of future distributions of date palm in novel environments, and to establish a method of minimizing uncertainty in the projections of differing techniques. The study area, on a global scale, comprises Middle Eastern countries. We compared the mechanistic model CLIMEX (CL) with the correlative models MaxEnt (MX), Boosted Regression Trees (BRT), and Random Forests (RF) to project current and future distributions of date palm (Phoenix dactylifera L.). The Global Climate Model (GCM) CSIRO-Mk3.0 (CS), using the A2 emissions scenario, was selected for making projections. Both indigenous and alien distribution data of the species were utilized in the modelling process. The common areas predicted by MX, BRT, RF, and CL from the CS GCM were extracted and compared to ascertain the projection uncertainty levels of each individual technique. The common areas identified by all four modelling techniques were used to produce a map indicating suitable and unsuitable areas for date palm cultivation in Middle Eastern countries, for the present and for the year 2100. The four modelling approaches predict fairly different distributions. Projections from CL were more conservative than those from MX. BRT and RF were the most conservative methods in terms of projections for the current time. The combination of the final CL and MX projections for the present and 2100 provides higher certainty concerning those areas that will become highly suitable for future date palm cultivation. According to the four models, cold, hot, and wet stress, with differences on a regional basis, appear to be the major restrictions on future date palm distribution. The results demonstrate variances in the projections resulting from the different techniques. The assessment and interpretation of model projections requires reservations
Permutation Tests for Stochastic Ordering and ANOVA
Basso, Dario; Salmaso, Luigi; Solari, Aldo
2009-01-01
Permutation testing for multivariate stochastic ordering and ANOVA designs is a fundamental issue in many scientific fields such as medicine, biology, pharmaceutical studies, engineering, economics, psychology, and social sciences. This book presents advanced methods and related R codes to perform complex multivariate analyses
Prediction and Control of Cutting Tool Vibration in CNC Lathe with ANOVA and ANN
Directory of Open Access Journals (Sweden)
S. S. Abuthakeer
2011-06-01
Full Text Available Machining is a complex process in which many variables can deleteriously affect the desired results. Among them, cutting tool vibration is the most critical phenomenon, influencing the dimensional precision of the machined components, the functional behavior of the machine tools, and the life of the cutting tool. In a machining operation, cutting tool vibrations are mainly influenced by cutting parameters such as cutting speed, depth of cut and tool feed rate. In this work, the cutting tool vibrations are controlled using a damping pad made of neoprene. Experiments were conducted on a CNC lathe in which the tool holder was supported with and without the damping pad. The cutting tool vibration signals were collected through a data acquisition system supported by LabVIEW software. To increase the reliability of the experiments, a full factorial experimental design was used. The experimental data collected were tested with analysis of variance (ANOVA) to understand the influences of the cutting parameters, and empirical models were developed using ANOVA. Experimental studies and data analysis were performed to validate the proposed damping system. A multilayer perceptron neural network model was constructed with a feed-forward back-propagation algorithm using the acquired data. On completion of the experimental tests, the ANN was used to validate the results obtained and also to predict the behavior of the system under any cutting condition within the operating range. The on-site tests show that the proposed system reduces the vibration of the cutting tool to a great extent.
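A hedged sketch of the ANOVA step for such a full factorial machining experiment, using statsmodels. The factor names (`speed`, `feed`, `doc`), their levels, and the simulated vibration response `vib` are placeholders standing in for the paper's cutting speed, feed rate and depth-of-cut data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
# Full factorial: 3 speeds x 2 feeds x 2 depths of cut, 3 replicates each.
grid = [(s, f, d) for s in (100, 150, 200) for f in (0.1, 0.2) for d in (0.5, 1.0)]
df = pd.DataFrame(grid * 3, columns=["speed", "feed", "doc"])
df["vib"] = (0.01 * df.speed + 5.0 * df.feed + 2.0 * df.doc
             + rng.normal(0, 0.3, len(df)))    # placeholder vibration amplitude

model = smf.ols("vib ~ C(speed) + C(feed) + C(doc)", data=df).fit()
print(anova_lm(model, typ=2))                  # F and p for each cutting parameter
```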
Small Area Variance Estimation for the Siuslaw NF in Oregon and Some Results
S. Lin; D. Boes; H.T. Schreuder
2006-01-01
The results of a small area prediction study for the Siuslaw National Forest in Oregon are presented. Predictions were made for total basal area, number of trees and mortality per ha on a 0.85 mile grid using data on a 1.7 mile grid and additional ancillary information from TM. A reliable method of estimating prediction errors for individual plot predictions called the...
Soh, BaoLin P.; Lee, Warwick B.; Wong, Jill; Sim, Llewellyn; Hillis, Stephen L.; Tapia, Kriscia A.; Brennan, Patrick C.
2016-03-01
Aim: To compare the performance of Australian and Singaporean breast readers interpreting a single test-set consisting of mammographic examinations collected from the Australian population. Background: In the teleradiology era, breast readers interpret mammographic examinations from different populations. The question arises whether two groups of readers with similar training backgrounds demonstrate the same level of performance when presented with a population familiar to only one of the groups. Methods: Fifty-three Australian and 15 Singaporean breast radiologists participated in this study. All radiologists were trained in mammogram interpretation and had medians of 9 and 15 years of experience in reading mammograms, respectively. Each reader interpreted the same BREAST test-set consisting of sixty de-identified mammographic examinations arising from an Australian population. Performance parameters including JAFROC, ROC, case sensitivity as well as specificity were compared between Australian and Singaporean readers using a Mann-Whitney U test. Results: A significant difference (P=0.036) was demonstrated between the JAFROC scores of the Australian and Singaporean breast radiologists. No other significant differences were observed. Conclusion: JAFROC scores for Australian radiologists were higher than those obtained by their Singaporean counterparts. Whilst it is tempting to suggest this is due to reader expertise, this may be a simplistic explanation considering the very similar training and audit backgrounds of the two populations of radiologists. The influence of reading images that are different from those that radiologists normally encounter cannot be ruled out and requires further investigation, particularly in the light of increasing international outsourcing of radiologic reporting.
ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.
Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.
2011-01-01
ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.
Application of one-way ANOVA in completely randomized experiments
Wahid, Zaharah; Izwan Latiff, Ahmad; Ahmad, Kartini
2017-12-01
This paper describes an application of the statistical technique one-way ANOVA in completely randomized experiments with three replicates. The technique was applied to a single factor with four levels and multiple observations at each level. The aim of this study is to investigate the relationship between the chemical oxygen demand index and on-site location. Two different approaches are employed for the analyses: the critical-value approach and the p-value approach. The paper also presents the key assumptions the data must satisfy for the technique to yield valid results. Pairwise comparisons by the Tukey method are also considered and discussed, to determine where the significant differences among the means lie once the ANOVA has been performed. The results revealed a statistically significant relationship between the chemical oxygen demand index and on-site location.
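The sketch below reproduces the two decision routes mentioned above (critical value and p-value) plus Tukey pairwise comparisons, on simulated stand-ins for the four sampling locations; none of the numbers come from the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
# Four hypothetical sites, three replicates each (placeholder COD-index values).
sites = {s: rng.normal(mu, 2.0, 3) for s, mu in zip("ABCD", (20, 21, 26, 27))}

F, p = stats.f_oneway(*sites.values())
df1 = len(sites) - 1
df2 = sum(len(v) for v in sites.values()) - len(sites)
F_crit = stats.f.ppf(0.95, df1, df2)           # critical-value route
print(f"F = {F:.2f}, F_crit = {F_crit:.2f}, p = {p:.4f}")

# Tukey HSD locates which site means differ once the ANOVA is significant.
y = np.concatenate(list(sites.values()))
g = np.repeat(list(sites.keys()), [len(v) for v in sites.values()])
print(pairwise_tukeyhsd(y, g))
```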
Cautionary Note on Reporting Eta-Squared Values from Multifactor ANOVA Designs
Pierce, Charles A.; Block, Richard A.; Aguinis, Herman
2004-01-01
The authors provide a cautionary note on reporting accurate eta-squared values from multifactor analysis of variance (ANOVA) designs. They reinforce the distinction between classical and partial eta-squared as measures of strength of association. They provide examples from articles published in premier psychology journals in which the authors…
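For concreteness, the distinction reads as follows when computed from one invented two-factor ANOVA table: classical eta-squared divides an effect's sum of squares by the total, while partial eta-squared divides by effect-plus-error only. All sums of squares below are made-up round numbers.

```python
# Invented sums of squares for a two-factor design with interaction.
ss = {"A": 40.0, "B": 25.0, "A:B": 10.0, "error": 125.0}
ss_total = sum(ss.values())

for effect in ("A", "B", "A:B"):
    eta2 = ss[effect] / ss_total                       # classical eta-squared
    p_eta2 = ss[effect] / (ss[effect] + ss["error"])   # partial eta-squared
    print(f"{effect:4s} eta2 = {eta2:.3f}  partial eta2 = {p_eta2:.3f}")
# In a multifactor design partial eta-squared is never smaller than classical
# eta-squared, so mislabeling one as the other inflates reported effect sizes.
```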
Use of "t"-Test and ANOVA in Career-Technical Education Research
Rojewski, Jay W.; Lee, In Heok; Gemici, Sinan
2012-01-01
Use of t-tests and analysis of variance (ANOVA) procedures in published research from three scholarly journals in career and technical education (CTE) during a recent 5-year period was examined. Information on post hoc analyses, reporting of effect size, alpha adjustments to account for multiple tests, power, and examination of assumptions…
An ANOVA approach for statistical comparisons of brain networks.
Fraiman, Daniel; Fraiman, Ricardo
2018-03-16
The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool to resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep in the days before the scan is a relevant variable that must be controlled for. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks or social networks.
National Research Council Canada - National Science Library
Curran, Thomas; Schimpff, Joshua J
2008-01-01
The variance analysis between budgeted (projected) and actual financial results was performed on financial data collected on the E-2C aircraft program from Fleet Readiness Center Southwest (FRCSW...
ANOVA-Based Approach for Efficient Customer Recognition: Dealing with Common Names
Saberi, Morteza; Saberi, Zahra
2015-01-01
Part 2: Artificial Intelligence for Knowledge Management; International audience; This study proposes an analysis of variance (ANOVA) technique that focuses on the efficient recognition of customers with common names. The continuous improvement of information and communications technologies (ICT) has led customers to have new expectations of, and concerns about, their related organizations. These new expectations bring various difficulties for organizations' help desks in meeting their customers' needs....
Batch variation between branchial cell cultures: An analysis of variance
DEFF Research Database (Denmark)
Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.
2003-01-01
We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed...... and introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when...... the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results when we do not know a priori that something went wrong. The ANOVA is a very useful...
International Nuclear Information System (INIS)
Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.
2011-01-01
Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z-bar and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z-bar, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z-bar = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z-bar = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic
ANOVA-HDMR structure of the higher order nodal diffusion solution
International Nuclear Information System (INIS)
Bokov, P. M.; Prinsloo, R. H.; Tomasevic, D. I.
2013-01-01
Nodal diffusion methods still represent a standard in global reactor calculations, but employ some ad-hoc approximations (such as the quadratic leakage approximation) which limit their accuracy in cases where reference quality solutions are sought. In this work we solve the nodal diffusion equations utilizing the so-called higher-order nodal methods to generate reference quality solutions and to decompose the obtained solutions via a technique known as High Dimensional Model Representation (HDMR). This representation and associated decomposition of the solution provides a new formulation of the transverse leakage term. The HDMR structure is investigated via the technique of Analysis of Variance (ANOVA), which indicates why the existing class of transversely-integrated nodal methods prove to be so successful. Furthermore, the analysis leads to a potential solution method for generating reference quality solutions at a much reduced calculational cost, by applying the ANOVA technique to the full higher order solution. (authors)
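A toy sketch of the first-order ANOVA/HDMR split used in such an analysis, applied to an arbitrary smooth 2-D function on a uniform grid: the function is decomposed into a mean, two one-dimensional main effects, and a residual interaction, and each component's share of the total variance is reported. Nothing here is specific to the nodal diffusion equations; the test function is invented.

```python
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n)
X1, X2 = np.meshgrid(x, x, indexing="ij")
f = np.exp(-X1) * (1 + 0.2 * X2) + 0.05 * np.sin(6 * X1 * X2)  # arbitrary test function

f0 = f.mean()                                   # constant term
f1 = f.mean(axis=1) - f0                        # main effect of x1
f2 = f.mean(axis=0) - f0                        # main effect of x2
f12 = f - f0 - f1[:, None] - f2[None, :]        # residual interaction term

# The components are orthogonal under the uniform measure, so their variances
# partition the total variance.
var = f.var()
for name, comp_var in [("f1", f1.var()), ("f2", f2.var()), ("f12", f12.var())]:
    print(f"{name:3s}: {comp_var / var:6.1%} of total variance")
```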
Directory of Open Access Journals (Sweden)
Athanasios Chasiotis
2014-04-01
Full Text Available We investigated the effect of the childhood context variables number of siblings (studies 1 and 2) and parental SES (study 2) on implicit parenting motivation across six cultural samples, including Africa (2x Cameroon), Asia (PR China), Europe (2x Germany), and Latin America (Costa Rica). Implicit parenting motivation was assessed using an instrument measuring implicit motives (OMT, Operant Multimotive Test; Kuhl and Scheffer, 2001). Replicating and extending results from previous studies, regression analyses and structural equation models show that the number of siblings and parental SES explain a large amount of cultural variance, ranging from 64% to 82% of the cultural variance observed in implicit parenting motivation. Results are discussed within the framework of evolutionary developmental psychology.
Global testing under sparse alternatives: ANOVA, multiple comparisons and the higher criticism
Arias-Castro, Ery; Candès, Emmanuel J.; Plan, Yaniv
2011-01-01
Testing for the significance of a subset of regression coefficients in a linear model, a staple of statistical analysis, goes back at least to the work of Fisher, who introduced the analysis of variance (ANOVA). We study this problem under the assumption that the coefficient vector is sparse, a common situation in modern high-dimensional settings. Suppose we have $p$ covariates and that under the alternative, the response only depends upon the order of $p^{1-\alpha}$ of those, $0\le\alpha\le1$...
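A small sketch of the higher-criticism statistic the paper studies (in the form popularized by Donoho and Jin), applied to simulated p-values under a sparse alternative; the truncation fraction `alpha0 = 0.5` and the signal strength are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

def higher_criticism(pvals, alpha0=0.5):
    """HC statistic: max standardized deviation of sorted p-values from uniform."""
    n = len(pvals)
    p = np.sort(pvals)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return hc[: int(alpha0 * n)].max()

rng = np.random.default_rng(4)
n = 10_000
z = rng.normal(0, 1, n)
z[: int(n ** 0.6)] += 2.5                      # sparse alternative: ~n^{1-alpha} signals
print("HC, sparse signal:", higher_criticism(stats.norm.sf(z)))
print("HC, global null  :", higher_criticism(stats.norm.sf(rng.normal(0, 1, n))))
```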
DEFF Research Database (Denmark)
Khan, Nasim Ahmed; Spencer, Horace Jack; Nikiphorou, Elena
2017-01-01
Objective: To assess intercentre variability in the ACR core set measures, DAS28 based on three variables (DAS28v3) and Routine Assessment of Patient Index Data 3 in a multinational study. Methods: Seven thousand and twenty-three patients were recruited (84 centres; 30 countries) using a standard...... built to adjust for the remaining ACR core set measure (for each ACR core set measure or each composite index), socio-demographics and medical characteristics. ANOVA and analysis of covariance models yielded similar results, and ANOVA tables were used to present variance attributable to recruiting...... centre. Results: The proportion of variances attributable to recruiting centre was lower for patient reported outcomes (PROs: pain, HAQ, patient global) compared with objective measures (joint counts, ESR, physician global) in all models. In the full model, variance in PROs attributable to recruiting...
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)
2016-09-15
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
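A minimal sketch of the balanced one-factor random-effects computation the note describes, on simulated setup errors whose true systematic (between-patient) and random (within-patient) variance components are known; the patient and fraction counts are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 30, 10                                    # patients, fractions per patient
mu_i = rng.normal(0.0, 1.0, m)                   # per-patient systematic offsets (var 1.0)
y = mu_i[:, None] + rng.normal(0.0, np.sqrt(2.0), (m, n))  # within-patient var 2.0

ms_between = n * y.mean(axis=1).var(ddof=1)      # mean square for the patient factor
ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (m * (n - 1))

sigma2_random = ms_within                        # random (within-patient) component
sigma2_systematic = (ms_between - ms_within) / n  # systematic (between-patient) component
print(f"systematic: {sigma2_systematic:.2f}, random: {sigma2_random:.2f}")
```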
Directory of Open Access Journals (Sweden)
Mohammad Manir Hossain Mollah
Full Text Available Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large
DEFF Research Database (Denmark)
Shojaee Nasirabadi, Parizad; Conseil, Helene; Mohanty, Sankhya
2016-01-01
Electronic systems are exposed to harsh environmental conditions, such as high humidity, in many applications. Moisture transfer into electronic enclosures and condensation can cause several problems, such as material degradation and corrosion. Therefore, it is important to control the moisture content...... and the relative humidity inside electronic enclosures. In this work, moisture transfer into a typical polycarbonate electronic enclosure with a cylindrical opening is studied. The effects of four influential parameters, namely the initial relative humidity inside the enclosure, the radius and length of the opening...... and temperature, are studied. A set of experiments was done based on a fractional factorial design in order to estimate the time constant for moisture transfer into the enclosure by fitting the experimental data to an analytical quasi-steady-state model. According to the statistical analysis, temperature
Hidden multiplicity in exploratory multiway ANOVA: Prevalence and remedies
Cramer, A.O.J.; van Ravenzwaaij, D.; Matzke, D.; Steingroever, H.; Wetzels, R.; Grasman, R.P.P.P.; Waldorp, L.J.; Wagenmakers, E.-J.
2016-01-01
Many psychologists do not realize that exploratory use of the popular multiway analysis of variance harbors a multiple-comparison problem. In the case of two factors, three separate null hypotheses are subject to test (i.e., two main effects and one interaction). Consequently, the probability of at
Local variances in biomonitoring
International Nuclear Information System (INIS)
Wolterbeek, H.Th; Verburg, T.G.
2001-01-01
The present study was undertaken to explore possibilities for judging survey quality on the basis of a limited and restricted number of a priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which implicitly and conceptually underlies any survey performed. (author)
Braatveit, Kirsten J; Torsheim, Torbjørn; Hove, Oddbjørn
2018-01-01
To investigate the direct effect of different childhood difficulties on adult intelligence quotient (IQ) and their possible indirect effect through the mediating pathways of education and severity of substance use. Ninety in-patients aged 19-64. The participants had abstained from substance use for at least 6 weeks and had different substance use profiles. Substance use disorder (SUD) and psychiatric illnesses were diagnosed according to the International Classification of Diseases 10th edition criteria. IQ was measured with the Wechsler Adult Intelligence Scale, 4th edition. Childhood difficulties, severity of substance use and level of education were assessed through a self-report questionnaire. Mean full-scale IQ for the studied population was 87.3. Learning and attention-deficit/hyperactivity difficulties in childhood were directly related to adult IQ. Education had a mediating effect between childhood learning difficulties/conduct problems and the verbal comprehension index. There was no significant difference in IQ due to the specific substance used or the severity of substance use. IQ variance in in-treatment individuals with SUD was related to childhood functioning, alone or through the mediator of education. Substance-related factors did not contribute to IQ variance. The results fit a normal theory of IQ development with commonly known risk factors and no disturbing effect of substance use. © 2018 S. Karger AG, Basel.
Downside Variance Risk Premium
Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric
2015-01-01
We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
Analysis of Variance in Statistical Image Processing
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
MCNP variance reduction overview
International Nuclear Information System (INIS)
Hendricks, J.S.; Booth, T.E.
1985-01-01
The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code
Directory of Open Access Journals (Sweden)
Lazic Stanley E
2008-07-01
Full Text Available Abstract Background Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables such as dose, time, or age can be treated as a continuous, rather than a categorical, variable during analysis – even if subjects were randomly assigned to treatment groups. While this is not common, there are a number of advantages to such an approach, including greater statistical power due to increased precision, a simpler and more informative interpretation of the results, greater parsimony, and the possibility of transforming the predictor variable. Results An example is given from an experiment where rats were randomly assigned to receive either 0, 60, 180, or 240 mg/L of fluoxetine in their drinking water, with performance on the forced swim test as the outcome measure. Dose was treated as either a categorical or continuous variable during analysis, with the latter analysis leading to a more powerful test (p = 0.021 vs. p = 0.159). This will be true in general, and the reasons for this are discussed. Conclusion There are many advantages to treating variables as continuous numeric variables if the data allow this, and this should be employed more often in experimental biology. Failure to use the optimal analysis runs the risk of missing significant effects or relationships.
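The sketch below mirrors the example in the abstract: the same randomized dose groups analysed once with dose as a categorical ANOVA factor and once as a continuous predictor. The doses match the abstract; the response values (`immobility`) and group size are simulated placeholders, not the study's forced-swim data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(6)
doses = np.repeat([0, 60, 180, 240], 10)        # mg/L, 10 rats per group (assumed)
df = pd.DataFrame({"dose": doses,
                   "immobility": 100 - 0.05 * doses + rng.normal(0, 15, len(doses))})

cat = smf.ols("immobility ~ C(dose)", data=df).fit()   # one-way ANOVA: 3 df for dose
lin = smf.ols("immobility ~ dose", data=df).fit()      # linear trend: 1 df for dose
print(anova_lm(cat, typ=2))
print(lin.summary().tables[1])  # a single slope, hence more power if the trend is linear
```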
Estimation of measurement variances
International Nuclear Information System (INIS)
Anon.
1981-01-01
In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
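As a pocket illustration of point (2), a random error variance can be estimated by pooling within-item variances across replicate measurements; the items and replicate counts below are invented.

```python
import numpy as np

replicates = [np.array([10.1, 10.3, 10.2]),     # item 1 measured 3 times
              np.array([4.9, 5.2]),             # item 2 measured twice
              np.array([7.8, 8.1, 8.0, 7.9])]   # item 3 measured 4 times

# Pool within-item sums of squares over their degrees of freedom.
ss = sum(((r - r.mean()) ** 2).sum() for r in replicates)
dof = sum(len(r) - 1 for r in replicates)
print(f"pooled random-error variance: {ss / dof:.4f} (dof = {dof})")
```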
Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA
Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.
2018-03-01
Comparison with a reference method is one of the necessary requirements for the validation of non-standard methods. This comparison was made using an experimental design technique with two-way ANOVA. In the ANOVA, the results obtained using the EDXRF method to be validated were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for a comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium and vanadium. For every element, the F-test did not reject the null hypothesis (H0). As a result, there is no significant difference between the methods compared. Therefore, according to this study, it is concluded that the EDXRF method satisfies this method-comparison requirement.
Estimating linear effects in ANOVA designs: the easy way.
Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana
2012-09-01
Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
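A short sketch of the approach the note advocates, on simulated data: compute one linear-contrast score per participant and test the scores against zero, which yields both a per-unit slope estimate and a standard repeated-measures test. The condition levels, the true slope of -10 ms, and all data are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subj, levels = 20, np.array([1, 2, 3, 4])     # e.g. four numerical distances
# Simulated RTs: true slope of -10 ms per distance unit, plus noise.
rt = 500 - 10 * levels + rng.normal(0, 20, (n_subj, len(levels)))

c = levels - levels.mean()                      # centered linear contrast weights
scores = rt @ c                                 # one linear-trend score per subject
t, p = stats.ttest_1samp(scores, 0.0)           # test the linear effect across subjects
slope = scores.mean() / (c @ c)                 # recovers the per-unit slope
print(f"slope = {slope:.1f} ms/unit, t({n_subj - 1}) = {t:.2f}, p = {p:.4f}")
```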
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
Li, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which can limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is better suited to solving inverse problems. The new algorithm was tested with different examples and demonstrated
Directory of Open Access Journals (Sweden)
Kirankumar B. Balavalad
2017-04-01
Full Text Available Piezoresistive (PZR) pressure sensors have gained importance because of their robust construction, high sensitivity and good linearity. The conventional PZR pressure sensor consists of 4 piezoresistors placed on a diaphragm and connected in the form of a Wheatstone bridge. These sensors convert the stress applied to them into a change in resistance, which is quantified into voltage using the Wheatstone bridge mechanism. It is observed from the literature that the dimensions of the piezoresistors are crucial to the performance of the piezoresistive pressure sensor. This paper presents a novel mechanism for finding the best combinations and the effect of the individual piezoresistor dimensions, viz. length, width and thickness, using DoE and ANOVA (analysis of variance), following the Taguchi experimentation approach. The paper presents a unique method to find the optimum combination of piezoresistor dimensions and also clearly illustrates the effect of the dimensions on the output of the sensor. The optimum combinations and the output response of the sensor are predicted using DoE, and a validation simulation is done. The result of the validation simulation is compared with the predicted value of the sensor response, i.e., V. The predicted value of V is 1.074 V and the validation simulation gave a response for V of 1.19 V. This validates that the model (DoE and ANOVA) is adequate in describing V in terms of the variables defined.
Directory of Open Access Journals (Sweden)
Morimoto Aya
2010-05-01
Full Text Available Abstract Objective To evaluate glycemic variability associated with two different premixed insulin analogue formulations when used in a twice-daily regimen. Patients and Methods Subjects comprised type 2 diabetic patients aged 20-79 years, treated with twice-daily premixed insulin or insulin analogue formulations. All subjects were hospitalized for 6 days and randomized to receive either Humalog Mix 25 (Mix 25) or Humalog Mix 50 (Mix 50). They were then crossed over to the other arm between day 3 and day 4 of the study. Continuous glucose monitoring (CGM) was performed on all subjects to examine the differences in glycemic variability. Results Eleven type 2 diabetic patients were enrolled. No significant difference was found in 24-hour mean glucose values and their SDs, pre-meal glucose values, increases from pre-meal to peak glucose values, or time to peak glucose levels between either group. However, the mean glucose values observed during 0-8 hrs were significantly lower with Mix 25 compared to Mix 50 (128 vs. 147 mg/dL; p = 0.024). Conclusions The twice-daily Mix 25 regimen provided superior overnight glycemic control compared to the Mix 50 regimen in Japanese patients with type 2 diabetes. However, both twice-daily regimens with either Mix 25 or Mix 50 provided inadequate post-lunch glycemic control. Trial Registration Current Controlled Trials UMIN000001327
Biomarker Detection in Association Studies: Modeling SNPs Simultaneously via Logistic ANOVA
Jung, Yoonsuh; Huang, Jianhua Z.; Hu, Jianhua
2014-01-01
In genome-wide association studies, the primary task is to detect biomarkers in the form of single nucleotide polymorphisms (SNPs) that have nontrivial associations with a disease phenotype and some other important clinical/environmental factors. However, the extremely large number of SNPs compared to the sample size inhibits application of classical methods such as multiple logistic regression. Currently the most commonly used approach is still to analyze one SNP at a time. In this paper, we propose to consider the genotypes of the SNPs simultaneously via a logistic analysis of variance (ANOVA) model, which expresses the logit-transformed mean of SNP genotypes as the summation of the SNP effects, effects of the disease phenotype and/or other clinical variables, and the interaction effects. We use a reduced-rank representation of the interaction-effect matrix for dimensionality reduction, and employ the L1-penalty in a penalized likelihood framework to filter out the SNPs that have no associations. We develop a Majorization-Minimization algorithm for computational implementation. In addition, we propose a modified BIC criterion to select the penalty parameters and determine the rank number. The proposed method is applied to a Multiple Sclerosis data set and simulated data sets and shows promise in biomarker detection.
Constrained statistical inference: sample-size tables for ANOVA and regression
Directory of Open Access Journals (Sweden)
Leonard eVanbrabant
2015-01-01
Full Text Available Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an (order-)constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).
Estimation of measurement variances
International Nuclear Information System (INIS)
Jaech, J.L.
1984-01-01
The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variance to characterize the measurement error structure, with biases varying over time, is presented
Analysis of half diallel mating designs I: a practical analysis procedure for ANOVA approximation.
G.R. Johnson; J.N. King
1998-01-01
Procedures to analyze half-diallel mating designs using the SAS statistical package are presented. The procedure requires two runs of PROC and VARCOMP and results in estimates of additive and non-additive genetic variation. The procedures described can be modified to work on most statistical software packages which can compute variance component estimates. The...
Generic framework for high-dimensional fixed-effects ANOVA
Smilde, A.K.; Timmerman, M.E.; Hendriks, M.M.W.B.; Jansen, J.J.; Hoefsloot, H.C.J.
2012-01-01
In functional genomics it is more rule than exception that experimental designs are used to generate the data. The samples of the resulting data sets are thus organized according to this design and for each sample many biochemical compounds are measured, e.g. typically thousands of gene-expressions
AnovArray: a set of SAS macros for the analysis of variance of gene expression data
Directory of Open Access Journals (Sweden)
Renard Jean-Paul
2005-06-01
Full Text Available Abstract Background Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility to evaluate multiple sources of variation in an experiment. Results AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, and (3) to propose two models for a gene's variance estimation and to perform a correction for multiple comparisons. Conclusion AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.
Backfitting in Smoothing Spline ANOVA, with Application to Historical Global Temperature Data
Luo, Zhen
In the attempt to estimate the temperature history of the earth using surface observations, various biases can exist. An important source of bias is the incompleteness of sampling over both time and space. A few methods have been proposed to deal with this problem. Although they can correct some biases resulting from incomplete sampling, they ignore some other significant biases. In this dissertation, a smoothing spline ANOVA approach, a multivariate function estimation method, is proposed to deal simultaneously with various biases resulting from incomplete sampling. Beyond that, an advantage of this method is that the various components of the estimated temperature history can be obtained with a limited amount of information stored. The method can also be used for detecting erroneous observations in the data base. The method is illustrated through an example of modeling winter surface air temperature as a function of year and location. Extensions to more complicated models are discussed. The linear system associated with the smoothing spline ANOVA estimates is too large to be solved by full matrix decomposition methods. A computational procedure combining the backfitting (Gauss-Seidel) algorithm and the iterative imputation algorithm is proposed. This procedure takes advantage of the tensor product structure in the data to make the computation feasible in an environment of limited memory. Various related issues are discussed, e.g., the computation of confidence intervals and techniques to speed up the convergence of the backfitting algorithm, such as collapsing and successive over-relaxation.
WASP (Write a Scientific Paper) using Excel 9: Analysis of variance.
Grech, Victor
2018-06-01
Analysis of variance (ANOVA) may be required by researchers as an inferential statistical test when more than two means require comparison. This paper explains how to perform ANOVA in Microsoft Excel. Copyright © 2018 Elsevier B.V. All rights reserved.
Restricted Variance Interaction Effects
DEFF Research Database (Denmark)
Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.
2018-01-01
Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...
Forty-one samples of skim milk powder (SMP) and non-fat dry milk (NFDM) from 8 suppliers, 13 production sites, and 3 processing temperatures were analyzed by NIR diffuse reflectance spectrometry over a period of three days. NIR reflectance spectra (1700-2500 nm) were converted to pseudo-absorbance ...
Evolution of Genetic Variance during Adaptive Radiation.
Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel
2018-04-01
Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.
Efficient Cardinality/Mean-Variance Portfolios
Brito, R. Pedro; Vicente, Luís Nunes
2014-01-01
International audience; We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allow us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...
Local variances in biomonitoring
International Nuclear Information System (INIS)
Wolterbeek, H.T.
1999-01-01
The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
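A sketch of one ingredient of such a study: bootstrapping the local (within-site) variance from multiple samples taken at a single sampling site. The sample values are invented, not from the survey.

```python
import numpy as np

rng = np.random.default_rng(2)
site = rng.lognormal(mean=2.0, sigma=0.3, size=12)   # e.g. 12 bark samples, one element

boot = np.array([np.var(rng.choice(site, size=site.size, replace=True), ddof=1)
                 for _ in range(10_000)])            # resample with replacement
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"local variance = {np.var(site, ddof=1):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```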
Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA
Taylor, Laura; Doehler, Kirsten
2015-01-01
This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
Expected Stock Returns and Variance Risk Premia
DEFF Research Database (Denmark)
Bollerslev, Tim; Zhou, Hao
risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...
ANOVA IN MARKETING RESEARCH OF CONSUMER BEHAVIOR OF DIFFERENT CATEGORIES IN GEORGIAN MARKET
Directory of Open Access Journals (Sweden)
NUGZAR TODUA
2015-03-01
Full Text Available Consumer behavior research was conducted on bank services and (non-alcoholic) soft drinks. Based on four different currencies and ten services, analyses were made of bank clients' distribution by bank services and currencies, percentage distribution by bank services, and percentage distribution of bank services by currencies. Similar results were obtained for ten soft drinks with five characteristics: consumers' quantities split by types of soft drinks and attributes; attributes' percentage split by types of soft drinks; and types of soft drinks' percentage split by attributes. Using ANOVA on the marketing research outcomes, it is concluded that the bank clients' total quantities, i.e., the populations' unknown mean scores, do not differ from each other. In the soft drinks case, the consumers' total quantities, i.e., the populations' unknown mean scores, vary by characteristics
Haverkamp, Nicolas; Beauducel, André
2017-01-01
We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodical approaches of repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations without any between-group effect or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and for repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction) were computed. To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered as well as measurement occasions of m = 3, 6, and 9. With respect to rANOVA, the results support the use of rANOVA with the Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The
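A sketch of the Greenhouse-Geisser epsilon that underlies the corrections compared above, estimated from the sample covariance matrix of m measurement occasions. The simulated data (compound symmetry, which satisfies sphericity) are illustrative, not the study's populations.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 40, 6
cov = 0.8 * np.ones((m, m)) + 0.2 * np.eye(m)        # compound-symmetry example
X = rng.multivariate_normal(np.zeros(m), cov, size=n)

S = np.cov(X, rowvar=False)
C = np.eye(m) - np.ones((m, m)) / m                  # double-centering matrix
Sc = C @ S @ C
eps = np.trace(Sc) ** 2 / ((m - 1) * np.trace(Sc @ Sc))
print(f"Greenhouse-Geisser epsilon: {eps:.3f} (near 1.0 when sphericity holds)")
```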
Spectral Ambiguity of Allan Variance
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
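A minimal sketch of the (non-overlapped) Allan variance of fractional frequency data y at averaging time tau = m * tau0, the quantity whose spectral ambiguity the paper studies. White noise is used as the test signal.

```python
import numpy as np

def allan_variance(y, m):
    # average the series in blocks of m samples, then take half the mean
    # squared difference of adjacent block averages
    nblocks = len(y) // m
    ybar = y[:nblocks * m].reshape(nblocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(4)
y = rng.normal(size=100_000)                  # white frequency noise
for m in (1, 10, 100):
    print(m, allan_variance(y, m))            # falls roughly as 1/m for white noise
```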
International Nuclear Information System (INIS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-01-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
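A sketch of the variance-based (ANOVA/Sobol') sensitivity indices that the PDD approach targets, here estimated by brute-force Monte Carlo pick-freeze on a toy model rather than by the paper's sparse PDD regression. The model and sample sizes are invented.

```python
import numpy as np

def model(x):                                 # toy function with d = 3 inputs
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(5)
N, d = 200_000, 3
A, B = rng.uniform(-1, 1, (N, d)), rng.uniform(-1, 1, (N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy(); ABi[:, i] = B[:, i]       # freeze all columns except i
    Si = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli-style first-order estimator
    print(f"S_{i+1} ~ {Si:.3f}")
```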
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
Energy Technology Data Exchange (ETDEWEB)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Validation of consistency of Mendelian sampling variance.
Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H
2018-03-01
Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
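A sketch of the regression step of this validation idea: a weighted linear regression of within-year genetic variance estimates on birth year, with weights derived from the prediction error variances. The numbers are invented, not from the study.

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(2000, 2015)
var_est = 1.00 + 0.01 * (years - 2000) + rng.normal(0, 0.02, years.size)
pev = np.full(years.size, 0.02 ** 2)       # prediction error variance of each estimate

w = 1.0 / np.sqrt(pev)                     # np.polyfit minimizes sum (w*(y - yhat))^2
slope, intercept = np.polyfit(years, var_est, deg=1, w=w)
print(f"trend in genetic variance: {slope:+.4f} per year")   # flag bias if CI excludes 0
```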
Portfolio optimization with mean-variance model
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio compositions of the stocks differ. Moreover, investors can obtain the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
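A sketch of the core of the mean-variance machinery: the closed-form global minimum variance portfolio w = S⁻¹1 / (1ᵀS⁻¹1) for a toy covariance matrix. The study itself uses 20 FBMKLCI stocks and a target-return constraint; the numbers below are illustrative.

```python
import numpy as np

S = np.array([[0.040, 0.006, 0.012],      # illustrative weekly return covariances
              [0.006, 0.025, 0.004],
              [0.012, 0.004, 0.050]])
ones = np.ones(S.shape[0])

w = np.linalg.solve(S, ones)              # S^{-1} 1
w /= w @ ones                             # normalize so weights sum to one
print("weights:", np.round(w, 3), " portfolio variance:", w @ S @ w)
```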
Levine's guide to SPSS for analysis of variance
Braver, Sanford L; Page, Melanie
2003-01-01
A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi
DEFF Research Database (Denmark)
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander
2013-01-01
The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic...
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach produces a lower risk for each level of return earned compared to the mean-variance approach.
A Mean variance analysis of arbitrage portfolios
Fang, Shuhong
2007-03-01
Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.
Approximation errors during variance propagation
International Nuclear Information System (INIS)
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
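A sketch contrasting a first-order (Taylor/delta-method) propagation of the inputs' variances to the top event with a Monte Carlo reference, for a tiny two-component parallel system with top = 1 - (1-p1)(1-p2). The failure data are invented, and the two approximation techniques examined by the paper may differ from this one.

```python
import numpy as np

mu = np.array([0.01, 0.02])                 # mean component failure probabilities
var = np.array([0.002, 0.004]) ** 2         # input variances

# analytic gradient of f(p1, p2) = 1 - (1-p1)(1-p2) at the means
grad = np.array([1 - mu[1], 1 - mu[0]])
var_first_order = np.sum(grad ** 2 * var)   # first-order propagated variance

rng = np.random.default_rng(7)
p = rng.normal(mu, np.sqrt(var), size=(1_000_000, 2))
top = 1 - (1 - p[:, 0]) * (1 - p[:, 1])
print(var_first_order, top.var())           # close here; the gap grows with skewed inputs
```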
Group-wise ANOVA simultaneous component analysis for designed omics experiments
Saccenti, Edoardo; Smilde, Age K.; Camacho, José
2018-01-01
Introduction: Modern omics experiments pertain not only to the measurement of many variables but also follow complex experimental designs where many factors are manipulated at the same time. This data can be conveniently analyzed using multivariate tools like ANOVA-simultaneous component analysis
Investigation of flood pattern using ANOVA statistic and remote sensing in Malaysia
International Nuclear Information System (INIS)
Ya'acob, Norsuzila; Ismail, Nor Syazwani; Mustafa, Norfazira; Yusof, Azita Laily
2014-01-01
Full Text Available A flood is an overflow or inundation that comes from a river or other body of water and causes or threatens damage. In Malaysia there is no formal categorization of floods, but they are often broadly categorized as monsoonal, flash or tidal floods. This project focuses on floods caused by the monsoon. Over the last few years a number of extreme floods occurred with great economic impact, and extreme weather patterns are the main contributor to this phenomenon. In 2010, several districts in the state of Kedah and neighbouring states were hit by floods caused by extraordinary weather patterns. During this tragedy the rainfall volume was not uniform across regions, and flooding occurred when the amount of water increased rapidly and began to overflow. This is the main reason this project was carried out, and the data analysis covers August to October 2010. The investigation sought possible correlation patterns among parameters related to the flood. The ANOVA statistic was used to calculate the percentage contribution of the parameters involved, regression and correlation quantified the strength of association among flood-related parameters, and a remote sensing image was used to validate the accuracy of the calculations. According to the results, the prediction is successful, as the correlation coefficient for the flood event is 0.912, confirmed by a Terra-SAR image of 4th November 2010. The rate of change in the weather pattern contributes to the impact of the flood.
Van Hoten, Hendri; Gunawarman; Mulyadi, Ismet Hari; Kurniawan Mainil, Afdhal; Putra, Bismantoloa dan
2018-02-01
This research concerns the manufacture of bioceramic nanopowders from local materials using ball milling for biomedical applications. Source materials for the manufacture of medicines are plants, animal tissues, microbial structures and engineered biomaterials, and raw medicinal materials take the form of powders before mixing. The aim here is to find sources of biomedical materials that can be reduced to nanoscale powders for use as raw material for medicine. One bioceramic material that can serve as such a raw material is chicken eggshell. This research develops methods for manufacturing nanopowder from chicken eggshells by ball milling, using the Taguchi method and ANOVA. Eggshells were milled with milling rates of 150, 200 and 250 rpm, milling times of 1, 2 and 3 hours, and grinding-ball-to-eggshell-powder weight ratios (BPR) of 1:6, 1:8 and 1:10. Before milling, the eggshells were crushed and calcined at a temperature of 900°C. After milling, the fine eggshell powder was characterized by SEM to determine its size. The Taguchi design analysis gives optimum parameters of 250 rpm milling rate, 3 hours milling time and a BPR of 1:6, with an average eggshell powder size of 1.305 μm. Milling speed, milling time and ball-to-powder weight ratio contribute 60.82%, 30.76% and 6.64%, respectively, with an error of 1.78%.
Variance-based sensitivity indices for models with dependent inputs
International Nuclear Information System (INIS)
Mara, Thierry A.; Tarantola, Stefano
2012-01-01
Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
Genetic variants influencing phenotypic variance heterogeneity.
Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa
2018-03-01
Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
DEFF Research Database (Denmark)
Casas, Isabel; Mao, Xiuping; Veiga, Helena
This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...
Assessing a learning process with functional ANOVA estimators of EEG power spectral densities.
Gutiérrez, David; Ramírez-Moreno, Mauricio A
2016-04-01
We propose to assess the process of learning a task using electroencephalographic (EEG) measurements. In particular, we quantify changes in brain activity associated with the progression of the learning experience through functional analysis-of-variance (FANOVA) estimators of the EEG power spectral density (PSD). Such functional estimators provide a sense of the effect of training on the EEG dynamics. For that purpose, we implemented an experiment to monitor the process of learning to type using the Colemak keyboard layout during a twelve-lesson training. Hence, our aim is to identify statistically significant changes in the PSD of various EEG rhythms at different stages and difficulty levels of the learning process. Those changes are taken into account only when a probabilistic measure of the cognitive state ensures the high engagement of the volunteer in the training. Based on this, a series of statistical tests are performed in order to determine the personalized frequencies and sensors at which changes in PSD occur, then the FANOVA estimates are computed and analyzed. Our experimental results showed a significant decrease in the power of [Formula: see text] and [Formula: see text] rhythms for ten volunteers during the learning process, and such decrease happens regardless of the difficulty of the lesson. These results are in agreement with previous reports of changes in PSD being associated with feature binding and memory encoding.
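A sketch of the first step of such an analysis: estimating the EEG power spectral density with Welch's method and extracting band power. The signal here is synthetic, and the sampling rate, channel and band choices are placeholder assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(8)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)   # 10 Hz "alpha" + noise

f, psd = welch(eeg, fs=fs, nperseg=1024)      # Welch PSD estimate
alpha = (f >= 8) & (f <= 13)
print("alpha-band power:", np.trapz(psd[alpha], f[alpha]))
```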
INFLUENCE OF TECHNOLOGICAL PARAMETERS ON AGROTEXTILES WATER ABSORBENCY USING ANOVA MODEL
Directory of Open Access Journals (Sweden)
LUPU Iuliana G.
2016-05-01
Full Text Available Agrotextiles are nowadays extensively used in horticulture, farming and other agricultural activities. Agriculture and textiles are among the largest industries in the world, providing basic needs such as food and clothing. Agrotextiles play a significant role in helping to control the environment for crop protection, eliminate variations in climate and weather, and generate optimum conditions for plant growth. Water absorptive capacity is a very important property of needle-punched nonwovens used as irrigation substrates in horticulture. Nonwovens used as watering substrates distribute water uniformly and act as a slight water buffer owing to their absorbent capacity. The paper analyzes the influence of needling process parameters on the water absorptive capacity of needle-punched nonwovens using an ANOVA model. The model allows the identification of optimal process parameters in a shorter time and with lower material expense than experimental research alone. The needle board frequency and needle penetration depth were used as independent variables, and the water absorptive capacity as the dependent variable, in the ANOVA regression model. Based on the employed ANOVA model we established that the needling parameters significantly influence water absorptive capacity. The greater the needle penetration depth and needle board frequency, the higher the compactness of the fabric; a less porous structure has a lower water absorptive capacity.
International Nuclear Information System (INIS)
Kanna, S.; Kumaraswamidhs, L. A.; Kumaran, S. Senthil
2016-01-01
The aim of the present work is to optimize friction welding of tube to tube plate using an external tool (FWTPET) with clearance fit, joining a commercial aluminum tube to an Al 2025 tube plate. Conventional friction welding is suitable only for symmetrical joints, either tube to tube or rod to rod, but in this research, with the help of an external tool, unsymmetrical tube-to-tube-plate joints have also been welded. In this investigation, welding parameters such as tool rotating speed (rpm), projection of tube (mm) and depth of cut (mm) are set according to the Taguchi L9 orthogonal array. Two conditions were considered in the experiment: condition 1, a flat plate with a plain tube without holes [WOH] on the circumference of its surface, and condition 2, a flat plate with a plain tube with holes [WH] on the circumference of its surface. The Taguchi L9 orthogonal array was utilized to find the control factors most significant for joint strength, and the most influential process parameter was determined using statistical analysis of variance (ANOVA). Finally, the results for the two conditions were compared by means of percentage of contribution and regression analysis. A general regression equation was formulated, better strength was obtained, and the result was validated by a confirmation test. The optimal welded joint strengths for the tube without holes and the tube with holes were observed to be 319.485 MPa and 264.825 MPa, respectively.
Gaaz, Tayser Sumer; Sulong, Abu Bakar; Kadhum, Abdul Amir H.; Nassir, Mohamed H.; Al-Amiery, Ahmed A.
The variation of the results of the mechanical properties of halloysite nanotubes (HNTs) reinforced thermoplastic polyurethane (TPU) at different HNTs loadings was implemented as a tool for analysis. The preparation of HNTs-TPU nanocomposites was performed under four controlled parameters of mixing temperature, mixing speed, mixing time, and HNTs loading at three levels each to satisfy Taguchi method orthogonal array L9 aiming to optimize these parameters for the best measurements of tensile strength, Young's modulus, and tensile strain (known as responses). The maximum variation of the experimental results for each response was determined and analysed based on the optimized results predicted by Taguchi method and ANOVA. It was found that the maximum absolute variations of the three mentioned responses are 69%, 352%, and 126%, respectively. The analysis has shown that the preparation of the optimized tensile strength requires 1 wt.% HNTs loading (excluding 2 wt.% and 3 wt.%), mixing temperature of 190 °C (excluding 200 °C and 210 °C), and mixing speed of 30 rpm (excluding 40 rpm and 50 rpm). In addition, the analysis has determined that the mixing time at 20 min has no effect on the preparation. The mentioned analysis was fortified by ANOVA, images of FESEM, and DSC results. Seemingly, the agglomeration and distribution of HNTs in the nanocomposite play an important role in the process. The outcome of the analysis could be considered as a very important step towards the reliability of Taguchi method.
Means and Variances without Calculus
Kinney, John J.
2005-01-01
This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
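A sketch of the article's idea: approximate a continuous density on a grid and read off the mean and variance with sums instead of integrals. Here a standard normal is used, where the exact answers are 0 and 1.

```python
import numpy as np

x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]
p = np.exp(-x ** 2 / 2) * dx          # unnormalized N(0,1) probability slices
p /= p.sum()                          # discrete probabilities summing to one

mean = np.sum(p * x)
var = np.sum(p * (x - mean) ** 2)
print(f"mean ~ {mean:.6f}, variance ~ {var:.6f}")
```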
Visualizing Experimental Designs for Balanced ANOVA Models using Lisp-Stat
Directory of Open Access Journals (Sweden)
Philip W. Iversen
2004-12-01
Full Text Available The structure, or Hasse, diagram described by Taylor and Hilton (1981, American Statistician) provides a visual display of the relationships between factors for balanced complete experimental designs. Using the Hasse diagram, rules exist for determining the appropriate linear model, ANOVA table, expected mean squares, and F-tests in the case of balanced designs. This procedure has been implemented in Lisp-Stat using a software representation of the experimental design. The user can interact with the Hasse diagram to add, change, or delete factors and see the effect on the proposed analysis. The system has potential uses in teaching and consulting.
Estimating quadratic variation using realized variance
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we...... have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd....
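A minimal sketch of the realized variance (RV) estimator of quadratic variation: the sum of squared high-frequency returns over a day, on simulated intraday data. The sampling frequency and volatility level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
M = 288                                          # e.g. 5-minute returns over a day
sigma = 0.01                                     # daily volatility (assumed)
r = rng.normal(0, sigma / np.sqrt(M), size=M)    # intraday log-returns

rv = np.sum(r ** 2)                              # realized variance
print(f"RV = {rv:.6e}, true integrated variance = {sigma**2:.6e}")
```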
Cramer, Angélique O.J.; van Ravenzwaaij, Don; Matzke, Dora; Steingroever, Helen; Wetzels, Ruud; Grasman, Raoul P.P.P.; Waldorp, Lourens J.; Wagenmakers, Eric-Jan
Many psychologists do not realize that exploratory use of the popular multiway analysis of variance harbors a multiple-comparison problem. In the case of two factors, three separate null hypotheses are subject to test (i.e., two main effects and one interaction). Consequently, the probability of at
Revision: Variance Inflation in Regression
Directory of Open Access Journals (Sweden)
D. R. Jensen
2013-01-01
the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
Modelling volatility by variance decomposition
DEFF Research Database (Denmark)
Amado, Cristina; Teräsvirta, Timo
In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...
Gini estimation under infinite variance
A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)
2018-01-01
We study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
Variance based OFDM frame synchronization
Directory of Open Access Journals (Sweden)
Z. Fedra
2012-04-01
Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed times, so a modified early-late loop is used for the frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without having a high influence on system performance. The verification of the proposed algorithm's functionality has been performed in a development environment using universal software radio peripheral (USRP) hardware.
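A loose sketch of the variance-based detection idea (not the paper's early-late loop): slide a window across the received samples and locate the jump in windowed variance of the sample power where the higher-power frame begins. Signal levels, window length and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
noise = (rng.normal(size=400) + 1j * rng.normal(size=400)) * 0.1
frame = rng.normal(size=600) + 1j * rng.normal(size=600)       # OFDM-like samples
rx = np.concatenate([noise, frame])                            # frame starts at 400

N = 64                                          # detection window length
power = np.abs(rx) ** 2
c1 = np.concatenate(([0.0], np.cumsum(power)))
c2 = np.concatenate(([0.0], np.cumsum(power ** 2)))
mean_w = (c1[N:] - c1[:-N]) / N
win_var = (c2[N:] - c2[:-N]) / N - mean_w ** 2  # variance over each window

thresh = 5 * win_var[:200].mean()               # noise-only reference level
print("first window overlapping the frame:", int(np.argmax(win_var > thresh)))
```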
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
Li, Yang; Pirvu, Traian A
2011-01-01
This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.
The impact of sample non-normality on ANOVA and alternative methods.
Lantz, Björn
2013-05-01
In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
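A sketch of the paper's recommendation in practice: reach for the Kruskal-Wallis test rather than ANOVA when the groups are clearly non-normal. The skewed lognormal samples below are invented for illustration.

```python
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(11)
g1, g2, g3 = (rng.lognormal(mean=m, sigma=1.0, size=30) for m in (0.0, 0.0, 0.5))

print("ANOVA          p =", f_oneway(g1, g2, g3).pvalue)
print("Kruskal-Wallis p =", kruskal(g1, g2, g3).pvalue)
```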
Discrete and continuous time dynamic mean-variance analysis
Reiss, Ariane
1999-01-01
Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...
Confidence Interval Approximation For Treatment Variance In ...
African Journals Online (AJOL)
In a random effects model with a single factor, variation is partitioned into two components: residual error variance and treatment variance. While a confidence interval can be constructed for the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...
Shieh, Gwowen; Jan, Show-Li
2015-01-01
The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…
Discussion on variance reduction technique for shielding
Energy Technology Data Exchange (ETDEWEB)
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As a task of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the compound system of SS316 and water has been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required for determining the Weight Window parameters, and the variance reduction by the Weight Window method of the MCNP code proved limiting and complicated. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance assignment: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction can be achieved. (K.I.)
Minimum Variance Portfolios in the Brazilian Equity Market
Directory of Open Access Journals (Sweden)
Alexandre Rubesam
2013-03-01
Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
Direct encoding of orientation variance in the visual system.
Norman, Liam J; Heywood, Charles A; Kentridge, Robert W
2015-01-01
Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but with early visual areas spared intact retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.
Speed Variance and Its Influence on Accidents.
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2010-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...
Variance function estimation for immunoassays
International Nuclear Information System (INIS)
Raab, G.M.; Thompson, R.; McKenzie, I.
1980-01-01
A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
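A rough analogue of the variance-function fit can be written in a few lines: generate many small replicate sets, relate within-set variances to set means through a power-law model Var ≈ a·meanᵇ, and fit on the log scale. This is a simplified stand-in for the modified likelihood method of the paper, and all parameter values are invented.

```python
# Fit a variance-mean relationship Var ~ a * mean**b from many small
# sets of replicate responses, then flag the sets with the largest
# actual-to-fitted SD ratios (echoing the program's outlier listing).
import numpy as np

rng = np.random.default_rng(3)
true_b = 1.8
means = rng.uniform(5, 500, size=60)                 # one mean per replicate set
sets = [rng.normal(m, np.sqrt(0.05 * m ** true_b), size=4) for m in means]

m_hat = np.array([s.mean() for s in sets])
v_hat = np.array([s.var(ddof=1) for s in sets])      # within-set variances
b, log_a = np.polyfit(np.log(m_hat), np.log(v_hat), 1)
print(f"fitted power b = {b:.2f} (true {true_b})")

resid = np.log(v_hat) - (log_a + b * np.log(m_hat))
print("largest actual/fitted SD ratios:", np.sort(np.exp(resid / 2))[-3:])
```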
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, regular common-value shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
Comparison of variance estimators for meta-analysis of instrumental variable estimates
Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two
Variance-based Salt Body Reconstruction
Ovcharenko, Oleg
2017-05-26
Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
Minimum variance Monte Carlo importance sampling with parametric dependence
International Nuclear Information System (INIS)
Ragheb, M.M.H.; Halton, J.; Maynard, C.W.
1981-01-01
An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension, to biasing or importance sampling calculations, of a theory on the application of Monte Carlo for the calculation of functional dependences introduced by Frolov and Chentsov; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)
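The single-stage idea, scanning the estimator variance over a grid of importance-function parameters and keeping the minimizer, can be sketched for a simple tail probability. The exponential target, tilt family and sample sizes below are illustrative assumptions, not the paper's setup.

```python
# Scan the empirical variance of an importance-sampled estimator over a
# grid of tilt parameters and keep the minimizer. Target: I = P[X > 4]
# for X ~ Exp(1), estimated by sampling from Exp(theta) with weights.
import numpy as np

rng = np.random.default_rng(7)
n, thresh = 50_000, 4.0
true = np.exp(-thresh)

results = []
for th in np.linspace(0.1, 1.0, 19):
    x = rng.exponential(1.0 / th, size=n)            # proposal Exp(theta)
    w = np.exp(-x) / (th * np.exp(-th * x))          # weight f(x) / g(x)
    est = (x > thresh) * w
    results.append((est.var(ddof=1), est.mean(), th))

var, est, th = min(results)                          # minimum-variance parameter
print(f"best theta={th:.2f}  estimate={est:.5f}  true={true:.5f}")
```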
Integrating mean and variance heterogeneities to identify differentially expressed genes.
Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen
2016-12-06
…genome-wide significant MVDE genes. Our results indicate a tremendous potential gain from integrating informative variance heterogeneity after adjusting for global confounders and background data structure. The proposed informative integration test better summarizes the impacts of condition change on the expression distributions of susceptible genes than do the existing competitors. Therefore, particular attention should be paid to explicitly exploiting the variance heterogeneity induced by condition change in functional genomics analysis.
Variance Function Partially Linear Single-Index Models.
Lian, Heng; Liang, Hua; Carroll, Raymond J
2015-01-01
We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.
Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M
2008-07-23
UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.
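A minimal numerical sketch of the ANOVA-PCA idea, under the assumption (standard in Harnly-style ANOVA-PCA) that the centered data matrix is split into factor-level mean matrices plus a residual, whose sums of squares give the variance shares; a synthetic matrix stands in for the broccoli fingerprints.

```python
# ANOVA-PCA sketch: centered data = cultivar means + treatment means + residual
# (balanced, crossed design); variance shares from sums of squares, then PCA
# (via SVD) on a factor matrix with the residual added back.
import numpy as np

rng = np.random.default_rng(0)
cultivar = np.repeat([0, 1], 35)                       # 2 cultivars x 35 samples
treatment = np.tile(np.repeat(np.arange(7), 5), 2)     # 7 treatments x 5 reps
X = (rng.normal(0, 0.02, (70, 160))                    # 160 "wavelengths"
     + 0.30 * cultivar[:, None]
     + 0.05 * treatment[:, None])

Xc = X - X.mean(axis=0)                                # remove grand mean

def level_means(mat, factor):
    M = np.zeros_like(mat)
    for lev in np.unique(factor):
        M[factor == lev] = mat[factor == lev].mean(axis=0)
    return M

M_cult = level_means(Xc, cultivar)
M_trt = level_means(Xc - M_cult, treatment)
E = Xc - M_cult - M_trt                                # residual (repeatability)

total = (Xc ** 2).sum()
for name, M in [("cultivar", M_cult), ("treatment", M_trt), ("residual", E)]:
    print(f"{name:9s}: {100 * (M ** 2).sum() / total:5.1f}% of total SS")

U, s, Vt = np.linalg.svd(M_cult + E, full_matrices=False)
scores = U[:, :2] * s[:2]                              # PCA scores, factor + error
```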
ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER
SENGUPTA, Jati K.; PARK, Hyung S.
1994-01-01
The hypothesis that skewness and asymmetry have no significant impact on the mean-variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts about whether common market portfolios such as the S&P 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...
Cumulative prospect theory and mean variance analysis. A rigorous comparison
Hens, Thorsten; Mayer, Janos
2012-01-01
We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...
Influence of Family Structure on Variance Decomposition
DEFF Research Database (Denmark)
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups...
The phenotypic variance gradient - a novel concept.
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-11-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
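The proposed plot reduces to regressing log(variance) on log(mean) across environments; a slope near 2 indicates a constant coefficient of variation, i.e., pure scaling of the variance with the mean. A sketch with invented data:

```python
# "Phenotypic variance gradient" sketch: regress log(variance) on log(mean)
# across environments. A slope of 2 means the coefficient of variation is
# constant, so apparent variance differences are pure mean scaling.
import numpy as np

rng = np.random.default_rng(5)
env_means = np.array([5.0, 8.0, 12.0, 18.0, 27.0])   # trait mean per environment
samples = [rng.normal(m, 0.1 * m, size=50) for m in env_means]  # constant CV

log_m = np.log([s.mean() for s in samples])
log_v = np.log([s.var(ddof=1) for s in samples])
slope, intercept = np.polyfit(log_m, log_v, 1)
print(f"variance gradient (slope): {slope:.2f}  (2.0 = pure mean scaling)")
```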
Phenotypic variance explained by local ancestry in admixed African Americans.
Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N
2015-01-01
We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.
Allowable variance set on left ventricular function parameter
International Nuclear Information System (INIS)
Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin
2010-01-01
Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60%, 100%) were set before acquisition for every patient, and the acquisitions were made simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS), and the EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference between the three groups. Conclusion: When arrhythmia patients undergo gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)
Directory of Open Access Journals (Sweden)
Tayser Sumer Gaaz
2016-11-01
…Taguchi and ANOVA approaches. Evidently, the mHNTs play a very important role in the resulting product.
The value of travel time variance
DEFF Research Database (Denmark)
Fosgerau, Mogens; Engelson, Leonid
2011-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability...... that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending...... on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time....
Least-squares variance component estimation
Teunissen, P.J.G.; Amiri-Simkooei, A.R.
2007-01-01
Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight
Nonlinear Epigenetic Variance: Review and Simulations
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Variance estimation for generalized Cavalieri estimators
Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen
2011-01-01
The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.
No-migration variance petition
International Nuclear Information System (INIS)
1990-03-01
Volume IV contains the following attachments: TRU mixed waste characterization database; hazardous constituents of Rocky Flats transuranic waste; summary of waste components in TRU waste sampling program at INEL; total volatile organic compound (VOC) analyses at Rocky Flats Plant; total metals analyses from Rocky Flats Plant; results of toxicity characteristic leaching procedure (TCLP) analyses; results of extraction procedure (EP) toxicity data analyses; summary of headspace gas analysis in Rocky Flats Plant (RFP) sampling program FY 1988; waste drum gas generation sampling program at Rocky Flats Plant during FY 1988; TRU waste sampling program -- volume one; TRU waste sampling program -- volume two; summary of headspace gas analyses in TRU waste sampling program; and summary of volatile organic compound (VOC) analyses in TRU waste sampling program
Energy Technology Data Exchange (ETDEWEB)
Migaszewski, Zdzislaw M. [Pedagogical University, Institute of Chemistry, Geochemistry and the Environment Div., ul. Checinska 5, 25-020 Kielce (Poland)]. E-mail: zmig@pu.kielce.pl; Galuszka, Agnieszka [Pedagogical University, Institute of Chemistry, Geochemistry and the Environment Div., ul. Checinska 5, 25-020 Kielce (Poland); Paslaski, Piotr [Central Chemical Laboratory of the Polish Geological Institute, ul. Rakowiecka 4, 00-975 Warsaw (Poland)
2005-01-01
This report presents an assessment of chemical variability in natural ecosystems of Wigierski National Park (NE Poland) derived from the calculation of geochemical baselines using a barbell cluster ANOVA design. This method enabled us to obtain statistically valid information with a minimum number of samples collected. Results of summary statistics are presented for elemental concentrations in the soil horizons-O (Ol + Ofh), -A and -B, 1- and 2-year old Pinus sylvestris L. (Scots pine) needles, pine bark and Hypogymnia physodes (L.) Nyl. (lichen) thalli, as well as pH and TOC. The scope of this study also encompassed S and C stable isotope determinations and SEM examinations on Scots pine needles. The variability for S and trace metals in soils and plant bioindicators is primarily governed by parent material lithology and to a lesser extent by anthropogenic factors. This fact enabled us to study concentrations that are close to regional background levels. - The barbell cluster ANOVA design allowed the number of samples collected to be reduced to a minimum.
Liu, Yan; Salvendy, Gavriel
2009-05-01
This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation, ANOVA, linear regression, factor analysis and linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement error, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to progress in theory development and cumulative knowledge in the ergonomics field.
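The attenuation effect on correlation has a closed form: with reliabilities rel_x and rel_y, the observed correlation is approximately r_true·√(rel_x·rel_y). A quick simulation check, with all numbers invented:

```python
# Attenuation of a correlation by random measurement error:
# r_obs ~= r_true * sqrt(rel_x * rel_y), where rel is the share of
# observed variance that is true-score variance (reliability).
import numpy as np

rng = np.random.default_rng(11)
n, r_true = 100_000, 0.6
true_x = rng.normal(size=n)
true_y = r_true * true_x + np.sqrt(1 - r_true ** 2) * rng.normal(size=n)

rel_x, rel_y = 0.7, 0.8                        # reliabilities
x = true_x + rng.normal(scale=np.sqrt(1 / rel_x - 1), size=n)
y = true_y + rng.normal(scale=np.sqrt(1 / rel_y - 1), size=n)

print("observed :", np.corrcoef(x, y)[0, 1])
print("predicted:", r_true * np.sqrt(rel_x * rel_y))
```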
Grammatical and lexical variance in English
Quirk, Randolph
2014-01-01
Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.
Dynamic Mean-Variance Asset Allocation
Basak, Suleyman; Chabakauri, Georgy
2009-01-01
Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...
CMB-S4 and the hemispherical variance anomaly
O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.
2017-09-01
Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
The Variance Composition of Firm Growth Rates
Directory of Open Access Journals (Sweden)
Luiz Artur Ledur Brito
2009-04-01
Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
Directory of Open Access Journals (Sweden)
João Batista Duarte
2001-09-01
Full Text Available This work compares, by simulation, the estimates of variance components produced by the ANOVA (analysis of variance), ML (maximum likelihood), REML (restricted maximum likelihood) and MIVQUE(0) (minimum variance quadratic unbiased estimator) methods in the augmented block design with additional treatments (progenies stemming from one or more origins, or crosses). Results showed the relative superiority of the MIVQUE(0) estimation. The ANOVA method, although unbiased, gave the least precise estimates. The ML and REML methods, especially ML, tended to underestimate the experimental error variance and to overestimate the genotypic variances, in particular in the smaller experiments. Biases for the REML estimation became negligible when progenies were derived from a single cross and the experiments were of larger size, with ratios > 0.5. The ML method, however, provided the worst estimates of genotypic variances when the progenies came from different crosses and the experiments were small (n < 120 observations).
[Analysis of variance of repeated data measured by water maze with SPSS].
Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang
2007-01-01
To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who adopt repeated-measures designs. The repeated-measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS are used, with pairwise comparisons among different groups and different measurement times. First, Mauchly's test of sphericity should be used to judge whether there are correlations among the repeatedly measured data (…). The SPSS statistical package is available to fulfil this process.
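Outside SPSS, the same omnibus repeated-measures step can be run, for example, with statsmodels' AnovaRM; the sketch below uses invented water-maze-like data, and Mauchly's sphericity test itself is not part of AnovaRM.

```python
# Repeated-measures ANOVA analogous to the SPSS GLM procedure, via
# statsmodels. Twelve subjects measured on four consecutive days.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
subjects = np.repeat(np.arange(12), 4)
day = np.tile([1, 2, 3, 4], 12)                  # repeated water-maze days
latency = 60 - 8 * day + rng.normal(0, 5, size=48)
df = pd.DataFrame({"subject": subjects, "day": day, "latency": latency})

res = AnovaRM(df, depvar="latency", subject="subject", within=["day"]).fit()
print(res)
```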
Genetic and environmental variance in content dimensions of the MMPI.
Rose, R J
1988-08-01
To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than those of twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrixes of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.
Pricing perpetual American options under multiscale stochastic elasticity of variance
International Nuclear Information System (INIS)
Yoon, Ji-Hun
2015-01-01
Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow-scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk
Variance estimation for sensitivity analysis of poverty and inequality measures
Directory of Open Access Journals (Sweden)
Christian Dudel
2017-04-01
Full Text Available Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows variance estimates of the results of the sensitivity analysis to be derived. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.
Asymptotic variance of grey-scale surface area estimators
DEFF Research Database (Denmark)
Svane, Anne Marie
Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...... in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude....
Vertical velocity variances and Reynold stresses at Brookhaven
DEFF Research Database (Denmark)
Busch, Niels E.; Brown, R.M.; Frizzola, J.A.
1970-01-01
Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind v...
A mean-variance frontier in discrete and continuous time
Bekker, Paul A.
2004-01-01
The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation
Integrating Variances into an Analytical Database
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Decomposition of Variance for Spatial Cox Processes.
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-03-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.
Variance in binary stellar population synthesis
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Variance components for body weight in Japanese quails (Coturnix japonica)
Directory of Open Access Journals (Sweden)
RO Resende
2005-03-01
Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH and at 7 (BW07, 14 (BW14, 21 (BW21 and 28 days of age (BW28 of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model. Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
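The burn-in/thinning arithmetic in the abstract ((80,000 − 30,000)/100 = 500 retained samples) is ordinary Gibbs-chain post-processing, sketched below on a synthetic chain standing in for the MTGSAM output.

```python
# Post-process a Gibbs chain: drop a burn-in, thin the remainder, and
# report posterior means. The chain here is synthetic.
import numpy as np

rng = np.random.default_rng(9)
chain = 30.0 + 0.01 * np.cumsum(rng.normal(0, 0.1, 80_000))  # fake sampler draws

burn_in, thin = 30_000, 100
kept = chain[burn_in::thin]                  # (80000 - 30000) / 100 = 500 samples
print(len(kept), "samples; posterior mean =", round(kept.mean(), 3))
```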
2010-07-01
...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...
78 FR 14122 - Revocation of Permanent Variances
2013-03-04
... Douglas Fir planking had to have at least a 1,900 fiber stress and 1,900,000 modulus of elasticity, while the Yellow Pine planking had to have at least 2,500 fiber stress and 2,000,000 modulus of elasticity... the permanent variances, and affected employees, to submit written data, views, and arguments...
Variance Risk Premia on Stocks and Bonds
DEFF Research Database (Denmark)
Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea
Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts...
Biological Variance in Agricultural Products. Theoretical Considerations
Tijskens, L.M.M.; Konopacki, P.
2003-01-01
The food that we eat is uniform neither in shape or appearance nor in internal composition or content. Since technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were
Variance Swap Replication: Discrete or Continuous?
Directory of Open Access Journals (Sweden)
Fabien Le Floc’h
2018-02-01
Full Text Available The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
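For concreteness, the discrete-strike replication the paper starts from weights out-of-the-money option prices by ΔK/K²; on a toy flat-volatility Black-Scholes strip the recovered strike should land near σ², with the gap reflecting exactly the discretization and truncation effects at issue. The split of puts and calls at S₀ and the strike grid are illustrative assumptions.

```python
# Discrete-strike variance swap replication:
# K_var ~= (2 / T) * e^{rT} * sum_i (dK_i / K_i**2) * Q(K_i),
# with Q the out-of-the-money option price at strike K_i.
import numpy as np
from scipy.stats import norm

S0, r, T, sigma = 100.0, 0.0, 1.0, 0.2

def bs(K, call):
    # Black-Scholes price of a European option with flat volatility
    d1 = (np.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if call:
        return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

strikes = np.arange(50.0, 200.0, 2.5)          # finite, discrete strike grid
dK = np.gradient(strikes)
otm = np.array([bs(K, call=(K >= S0)) for K in strikes])
k_var = (2.0 / T) * np.exp(r * T) * np.sum(dK / strikes**2 * otm)
print("replicated variance:", k_var, " vs sigma^2 =", sigma**2)
```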
Zero-intelligence realized variance estimation
Gatheral, J.; Oomen, R.C.A.
2010-01-01
Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
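One of the standard VRTs covered in this literature, antithetic variates, fits in a few lines: pair each uniform draw U with 1 − U so the two negatively correlated evaluations partially cancel. A toy estimate of E[exp(U)] = e − 1:

```python
# Antithetic variates: average each draw with its mirrored counterpart.
# For a monotone integrand the pair is negatively correlated, so the
# pair average has far lower variance than a crude draw.
import numpy as np

rng = np.random.default_rng(4)
u = rng.random(100_000)

plain = np.exp(u)                              # crude Monte Carlo
anti = (np.exp(u) + np.exp(1 - u)) / 2         # antithetic pair average

print("true           :", np.e - 1)
print("crude      mean:", plain.mean(), " var:", plain.var())
print("antithetic mean:", anti.mean(), " var:", anti.var())
```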
A note on minimum-variance theory and beyond
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]
2004-04-30
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals, Poisson processes are in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng, et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture on the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows an inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
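Under the stated model, signal generation and the rectify-and-smooth approximation can be sketched directly; the inverse-gamma parameters, window length and the √(π/2) moment correction below are illustrative choices, and the smoothed proxy is only a rough stand-in for the paper's marginal-likelihood estimator.

```python
# Generate an EMG-like signal: Gaussian white noise whose variance is a
# draw from an inverse gamma distribution, then recover a rough variance
# proxy from the rectified-and-smoothed signal (E|x| = sigma * sqrt(2/pi)).
import numpy as np
from scipy.stats import invgamma
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(8)
n, alpha, beta = 20_000, 3.0, 2.0
variances = invgamma.rvs(alpha, scale=beta, size=n, random_state=rng)
emg = rng.normal(0.0, np.sqrt(variances))          # N(0, sigma_t^2) samples

smoothed = uniform_filter1d(np.abs(emg), size=200) # rectify + smooth
var_proxy = (smoothed * np.sqrt(np.pi / 2)) ** 2   # rough per-sample variance
print("mean variance (model):", beta / (alpha - 1))
print("mean variance (proxy):", var_proxy.mean())
```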
Variance in exposed perturbations impairs retention of visuomotor adaptation.
Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel
2017-11-01
Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of
Continuous-Time Mean-Variance Portfolio Selection under the CEV Process
Directory of Open Access Journals (Sweden)
Hui-qiang Ma
2014-01-01
Full Text Available We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.
Hydrograph variances over different timescales in hydropower production networks
Zmijewski, Nicholas; Wörman, Anders
2016-08-01
The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression for the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and of the management of production and reservoirs. We show that the power spectra of the time series involved follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and of possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds on future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over periods of white noise as a result of current production objectives.
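The spectral bookkeeping rests on Parseval's relation: the series variance equals the sum of the mean-removed power spectrum, so variance can be attributed band by band. A check on a synthetic daily discharge series (not Dalälven data):

```python
# By Parseval's relation, variance = sum over k >= 1 of |X_k|^2 / N^2
# for the DFT X of a length-N series (k = 0 carries the mean).
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(3650)                                   # ten years, daily
q = 100 + 30 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 10, t.size)

Q = np.fft.fft(q)
psd = np.abs(Q) ** 2 / t.size ** 2                    # per-frequency variance share
print("variance from spectrum:", psd[1:].sum())       # drop the mean term (k = 0)
print("variance directly     :", q.var())
```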
Variance in parametric images: direct estimation from parametric projections
International Nuclear Information System (INIS)
Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.
2000-01-01
Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)
Decomposition of variance in terms of conditional means
Directory of Open Access Journals (Sweden)
Alessandro Figà Talamanca
2013-05-01
Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to a correct or incorrect answer to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
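As a concrete illustration of the underlying identity (not the authors' ordering algorithm), the following sketch decomposes the variance of a simulated numerical variable over one qualitative character into a between-group part built from conditional means and a within-group part:

```python
import numpy as np

# Law of total variance: Var(Y) = Var(E[Y|G]) + E[Var(Y|G)].
# Data and group labels are made up for illustration.
rng = np.random.default_rng(0)
groups = rng.integers(0, 3, size=1000)          # qualitative character, 3 levels
y = 10 + 2.0 * groups + rng.normal(0, 1, 1000)  # numerical variable

levels = np.unique(groups)
freqs = np.array([(groups == g).mean() for g in levels])
cond_means = np.array([y[groups == g].mean() for g in levels])
cond_vars = np.array([y[groups == g].var() for g in levels])

between = np.sum(freqs * (cond_means - y.mean()) ** 2)  # Var(E[Y|G])
within = np.sum(freqs * cond_vars)                      # E[Var(Y|G)]
print(y.var(), between + within)                        # the two agree exactly
```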
NLO error propagation exercise: statistical results
International Nuclear Information System (INIS)
Pack, D.J.; Downing, D.J.
1985-09-01
Error propagation is the extrapolation and cumulation of uncertainty (variance) in total amounts of special nuclear material, for example, uranium or 235 U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235 U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235 U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
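The Taylor-series step can be illustrated with a toy calculation. This is a hypothetical sketch, not the NLO FORTRAN code: it propagates assumed measurement variances through a product such as net weight x uranium concentration x 235 U enrichment:

```python
import numpy as np

# First-order Taylor-series variance propagation for f = w * c * e.
# All measured values and variances below are illustrative assumptions.
w, c, e = 100.0, 0.85, 0.93               # weight (kg), concentration, enrichment
var_w, var_c, var_e = 0.04, 1e-4, 4e-6    # assumed measurement variances

m = w * c * e                             # point estimate of 235 U mass
# For uncorrelated errors: Var(f) ~ sum_i (df/dx_i)^2 Var(x_i)
grad = np.array([c * e, w * e, w * c])    # partial derivatives of f
var_m = grad @ (np.array([var_w, var_c, var_e]) * grad)
print(m, np.sqrt(var_m))                  # mass and its propagated std deviation
```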
R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.
Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil
2011-01-01
We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets ( p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.
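The motivation for variance stabilization can be shown generically. The sketch below is not the MVR package API; it only illustrates, for Poisson-like counts where the variance tracks the mean, how a square-root (Anscombe-style) transform makes the variance roughly constant across mean levels:

```python
import numpy as np

# For Poisson data, Var(X) = E[X], so raw variance grows with the mean.
# The Anscombe transform 2*sqrt(x + 3/8) stabilizes it to roughly 1.
rng = np.random.default_rng(1)
for m in (2.0, 10.0, 50.0):
    x = rng.poisson(m, 5000)
    stabilized = 2.0 * np.sqrt(x + 3.0 / 8.0)
    print(m, x.var(), stabilized.var())   # raw variance ~ m; stabilized ~ 1
```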
Realized Variance and Market Microstructure Noise
DEFF Research Database (Denmark)
Hansen, Peter R.; Lunde, Asger
2006-01-01
We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient price.
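A minimal sketch of the realized variance estimator on simulated prices (illustrative parameters, not the paper's kernel estimator) shows how i.i.d. noise inflates RV at the highest sampling frequency:

```python
import numpy as np

# RV = sum of squared intraday log returns. With microstructure noise,
# RV at the highest frequency is biased upward; sparse sampling (or the
# kernel-based estimators studied in the paper) mitigates the bias.
rng = np.random.default_rng(2)
n = 390                                              # one price per minute
efficient = np.cumsum(rng.normal(0, 0.001, n))       # latent efficient log-price
observed = efficient + rng.normal(0, 0.0005, n)      # plus i.i.d. noise

def realized_variance(logp, step=1):
    r = np.diff(logp[::step])
    return np.sum(r ** 2)

print(realized_variance(efficient))      # close to the integrated variance
print(realized_variance(observed))       # inflated by the noise
print(realized_variance(observed, 5))    # sparse sampling reduces the bias
```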
The Theory of Variances in Equilibrium Reconstruction
International Nuclear Information System (INIS)
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-01
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Fundamentals of exploratory analysis of variance
Hoaglin, David C; Tukey, John W
2009-01-01
The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.
Variance analysis refines overhead cost control.
Cooper, J C; Suver, J D
1992-02-01
Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
Institute of Scientific and Technical Information of China (English)
Li Shu; Zhuo Jiashou; Ren Qingwen
2000-01-01
In this paper, an optimal criterion is presented for an adaptive Kalman filter in a control system with unknown variances of stochastic vibration, by constructing a function of the noise variances and minimizing that function. We solve for the model and measurement variances using the DFP optimization method, to guarantee that the results of the Kalman filter are optimal. Finally, the control of vibration can be implemented by the LQG method.
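A much-simplified stand-in for the idea (not the authors' DFP criterion): the sketch below tunes the unknown noise variances of a scalar Kalman filter by minimizing the negative log-likelihood of the filter innovations, with a crude grid search in place of the DFP quasi-Newton step; all values are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)
n, q_true, r_true = 500, 0.05, 0.5
x = np.cumsum(rng.normal(0, np.sqrt(q_true), n))   # random-walk state
z = x + rng.normal(0, np.sqrt(r_true), n)          # noisy measurements

def innovation_nll(q, r):
    # scalar Kalman filter; score (q, r) by innovation log-likelihood
    xhat, p, nll = 0.0, 1.0, 0.0
    for zk in z:
        p = p + q                      # predict (state transition F = 1)
        s = p + r                      # innovation variance
        innov = zk - xhat
        nll += 0.5 * (np.log(s) + innov ** 2 / s)
        k = p / s                      # Kalman gain
        xhat += k * innov              # measurement update
        p = (1.0 - k) * p
    return nll

qs, rs = np.linspace(0.01, 0.20, 20), np.linspace(0.10, 1.00, 19)
best = min((innovation_nll(q, r), q, r) for q in qs for r in rs)
print(best)   # minimal NLL and a (q, r) estimate near (0.05, 0.5)
```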
The Genealogical Consequences of Fecundity Variance Polymorphism
Taylor, Jesse E.
2009-01-01
The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628
A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik
2014-01-01
… the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean-variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative, but it can be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computationally demanding methods such as the certainty equivalence method, and as an individual control strategy when …
Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans
Raju, C.; Vidya, R.
2016-06-01
In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.
Optimal control of LQG problem with an explicit trade-off between mean and variance
Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang
2011-12-01
For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function on the expectation and the variance of the conventional performance index is considered. The utility function is viewed as an overall objective of the system and can perform the optimal trade-off between the mean and the variance of performance index. The nonlinear utility function is first converted into an auxiliary parameters optimisation problem about the expectation and the variance. Then an optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is designed by nonlinear mathematical programming. Finally, simulation results are given to verify the algorithm's effectiveness obtained in this article.
A class of multi-period semi-variance portfolio for petroleum exploration and development
Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei
2012-10-01
Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multi-period. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated for the first time. Besides, a hybrid genetic algorithm, which makes use of the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
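The substitution at the heart of the model is easy to state in code. This sketch (illustrative returns, not the article's multi-period model) contrasts variance with semi-variance, which penalizes only downside deviations:

```python
import numpy as np

# Semi-variance: mean of squared deviations below the mean only.
returns = np.array([0.12, -0.05, 0.08, -0.11, 0.04, 0.09, -0.02])
mu = returns.mean()

variance = np.mean((returns - mu) ** 2)
semi_variance = np.mean(np.minimum(returns - mu, 0.0) ** 2)
print(variance, semi_variance)   # semi-variance ignores upside dispersion
```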
A mean-variance frontier in discrete and continuous time
Bekker, Paul A.
2004-01-01
The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solution.
Mean-Variance Analysis in a Multiperiod Setting
Frauendorfer, Karl; Siede, Heiko
1997-01-01
Similar to the classical Markowitz approach it is possible to apply a mean-variance criterion to a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show equivalent structural properties as...
Improved estimation of the variance in Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Hoogenboom, J. Eduard
2008-01-01
Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
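A toy illustration of cycle statistics (simulated k_eff values; the VoV below is the standard moment-based estimator, not necessarily the paper's unbiased one):

```python
import numpy as np

# Estimate mean k_eff, the variance of that mean, and the variance of
# the variance (VoV) from a small number of active cycles.
rng = np.random.default_rng(4)
k_cycles = rng.normal(1.002, 0.003, size=30)     # k_eff per active cycle
n = k_cycles.size

k_mean = k_cycles.mean()
var_mean = k_cycles.var(ddof=1) / n              # variance of the mean k_eff

# VoV via second and fourth central moments; equals
# sum(d^4)/(sum(d^2))^2 - 1/n, the usual relative-VoV formula.
d = k_cycles - k_mean
m2, m4 = np.mean(d ** 2), np.mean(d ** 4)
vov = (m4 - m2 ** 2) / (n * m2 ** 2)
print(k_mean, np.sqrt(var_mean), vov)
```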
Visual SLAM Using Variance Grid Maps
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors and (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle, and an elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance …
Markov bridges, bisection and variance reduction
DEFF Research Database (Denmark)
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented …
PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS
Directory of Open Access Journals (Sweden)
Daniel Menezes Cavalcante
2016-07-01
Full Text Available Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio performance is superior to market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
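A minimal sketch of the minimum variance strategy on simulated returns, assuming the textbook closed form with only a budget constraint (the study's exact constraints, e.g., on short selling, may differ):

```python
import numpy as np

# Minimum-variance weights: w = Sigma^-1 1 / (1' Sigma^-1 1), which
# minimizes w' Sigma w subject to the weights summing to one.
rng = np.random.default_rng(5)
monthly_returns = rng.normal(0.01, 0.05, size=(120, 6))  # 120 months, 6 stocks

sigma = np.cov(monthly_returns, rowvar=False)
ones = np.ones(sigma.shape[0])
w = np.linalg.solve(sigma, ones)
w /= ones @ w                                            # normalize to sum to 1

print(np.round(w, 3), w @ sigma @ w)                     # weights, portfolio var
```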
Response variance in functional maps: neural darwinism revisited.
Directory of Open Access Journals (Sweden)
Hirokazu Takahashi
Full Text Available The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena
2006-10-01
Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.
A general transform for variance reduction in Monte Carlo simulations
International Nuclear Information System (INIS)
Becker, T.L.; Larsen, E.W.
2011-01-01
This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/Deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)
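Source biasing, one of the techniques the General Transform subsumes, can be demonstrated on a toy tail-probability problem; the sketch below is generic importance sampling, not the hybrid method itself:

```python
import numpy as np

# Estimate P(X > 4) for X ~ N(0,1): sample from a shifted density N(4,1)
# and correct with statistical weights (ratio of true to biased density).
rng = np.random.default_rng(6)
n = 10_000

x = rng.normal(0.0, 1.0, n)
analog = (x > 4.0).astype(float)          # analog MC: almost no tallies

y = rng.normal(4.0, 1.0, n)
weights = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - 4.0)**2)
biased = (y > 4.0) * weights              # biased MC with weight correction

for est in (analog, biased):
    # both target the same probability; the biased estimator's standard
    # error is orders of magnitude smaller
    print(est.mean(), est.std(ddof=1) / np.sqrt(n))
```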
Variance-based sensitivity analysis for wastewater treatment plant modelling.
Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B
2014-02-01
Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes.
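The quantity at the core of variance-based GSA is the first-order index S_i = Var(E[Y|X_i]) / Var(Y). The sketch below estimates it by simple binning on a made-up model (Extended-FAST itself uses frequency-encoded sampling; this only shows the underlying variance decomposition):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
x1, x2, x3 = rng.uniform(-1, 1, (3, n))
y = 4.0 * x1 + 1.0 * x2 + 0.1 * x3 + 0.5 * x1 * x2   # toy "model"

def first_order_index(x, y, bins=50):
    # Var(E[Y|X]) estimated from conditional means over quantile bins
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond_mean = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond_mean / y.var()

# x1 dominates; the x1*x2 interaction shows up in no first-order index
print([round(first_order_index(x, y), 3) for x in (x1, x2, x3)])
```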
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step to advance the field. This is, first, because the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
A zero-variance-based scheme for variance reduction in Monte Carlo criticality
Energy Technology Data Exchange (ETDEWEB)
Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)
2006-07-01
A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
A zero-variance-based scheme for variance reduction in Monte Carlo criticality
International Nuclear Information System (INIS)
Christoforou, S.; Hoogenboom, J. E.
2006-01-01
A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
Energy Technology Data Exchange (ETDEWEB)
Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)
1969-04-15
One of the main difficulties in Monte Carlo computations is the estimation of the variance of the results. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the obtained results to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious environments: a body with constant cross section, without absorption, where all shocks are elastic and isotropic; and a body with variable cross section (presenting a very pronounced peak and hole), with anisotropy for high-energy elastic shocks, and with the possibility of inelastic shocks (this body presents all the features that can appear in a real case).
Power Estimation in Multivariate Analysis of Variance
Directory of Open Access Journals (Sweden)
Jean François Allaire
2007-09-01
Full Text Available Power is often overlooked in designing multivariate studies, for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
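The procedure translates directly into a few lines of SciPy. The degrees of freedom and noncentrality value below are illustrative assumptions, and each MANOVA statistic has its own F approximation:

```python
from scipy import stats

# Power = P(noncentral F > critical F under the null).
alpha = 0.05
df1, df2 = 6, 174        # hypothesis and error df of the F approximation
nc = 12.0                # noncentrality parameter from an assumed effect size

f_crit = stats.f.ppf(1 - alpha, df1, df2)      # critical F value
power = stats.ncf.sf(f_crit, df1, df2, nc)     # survival function of ncF
print(f_crit, power)
```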
Variance Risk Premia on Stocks and Bonds
DEFF Research Database (Denmark)
Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea
We study equity (EVRP) and Treasury variance risk premia (TVRP) jointly and document a number of findings: First, relative to their volatility, TVRP are comparable in magnitude to EVRP. Second, while there is mild positive co-movement between EVRP and TVRP unconditionally, time series estimates of correlation display distinct spikes in both directions and have been notably volatile since the financial crisis. Third, (i) short maturity TVRP predict excess returns on short maturity bonds; (ii) long maturity TVRP and EVRP predict excess returns on long maturity bonds; and (iii) while EVRP predict equity returns for horizons up to 6 months, long maturity TVRP contain robust information for long-run equity returns. Finally, exploiting the dynamics of real and nominal Treasuries we document that short maturity break-even rates are a powerful determinant of the joint dynamics of EVRP, TVRP and their co-movement.
A proxy for variance in dense matching over homogeneous terrain
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added to the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, and resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low …
International Nuclear Information System (INIS)
Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars
2012-01-01
Highlights: ► Statistical analysis of variance is of considerable value in identifying key indicators for policy update. ► Variance in residential electricity use is partly explained by household features. ► Variance in residential electricity use is partly explained by building properties. ► Household behavior has a profound impact on individual electricity use. -- Abstract: Improved means of controlling electricity consumption plays an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes as well as to estimate the impact of household features and building properties in this respect, using independent samples t-tests and one-way as well as univariate independent samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.
Hybrid biasing approaches for global variance reduction
International Nuclear Information System (INIS)
Wu, Zeyun; Abdel-Khalik, Hany S.
2013-01-01
A new variant of a Monte Carlo-deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purposes. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► A hybrid Monte Carlo-deterministic method based on a Gaussian process model is introduced. ► The method employs a deterministic model to calculate response correlations. ► The method employs correlations to bias Monte Carlo transport. ► The method is compared to the FW-CADIS methodology in the SCALE code. ► An order of magnitude speed-up is achieved for a PWR core model.
Estimation of noise-free variance to measure heterogeneity.
Directory of Open Access Journals (Sweden)
Tilo Winkler
Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a squared coefficient of variation (CV^2). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CV_r^2) for comparison with our estimate of noise-free or 'true' heterogeneity (CV_t^2). We found that CV_t^2 was only 5.4% higher than CV_r^2. Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CV_t^2 was 0.10 (range: 0.03-0.30), while the mean CV^2 including noise was 0.24 (range: 0.10-0.59). CV_t^2 was on average 41.5% of the CV^2 measured including noise (range: 17.8-71.2%). The reproducibility of CV_t^2 was evaluated using three repeated PET scans from five subjects. Individual CV_t^2 were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CV_t^2 in PET scans, and may be useful for similar statistical problems in experimental data.
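The extrapolation idea can be sketched numerically: the observed variance of n-sample averages is linear in 1/n, and the intercept of a linear fit estimates the noise-free variance (all data below are simulated, not PET counts):

```python
import numpy as np

# Var(observed, n) = Var(true) + Var(noise)/n, so regressing observed
# variance on 1/n recovers the noise-free variance as the intercept.
rng = np.random.default_rng(8)
true_signal = rng.normal(10.0, 2.0, size=5000)   # spatial heterogeneity
var_noise = 9.0

ns = np.array([1, 2, 4, 8, 16])
observed_var = []
for n in ns:
    noisy_avg = true_signal + rng.normal(0, np.sqrt(var_noise / n), 5000)
    observed_var.append(noisy_avg.var())

slope, intercept = np.polyfit(1.0 / ns, observed_var, 1)
print(true_signal.var(), intercept)   # intercept ~ noise-free variance
```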
Directory of Open Access Journals (Sweden)
Adelson Paulo Araújo
2003-01-01
Full Text Available Plant growth analysis presents difficulties related to the statistical comparison of growth rates, and the analysis of variance of primary data could guide the interpretation of results. The objective of this work was to evaluate the analysis of variance of data from distinct harvests of an experiment, focusing especially on the homogeneity of variances and the choice of an adequate ANOVA model. Data from five experiments covering different crops and growth conditions were used. From the total number of variables, 19% were originally homoscedastic, 60% became homoscedastic after logarithmic transformation, and 21% remained heteroscedastic after transformation. Data transformation did not affect the F test in one experiment, whereas in the other experiments transformation modified the F test, usually reducing the number of significant effects. Even when transformation had not altered the F test, mean comparisons led to divergent interpretations. The mixed ANOVA model, considering harvest as a random effect, reduced the number of significant effects of every factor which had the F test modified by this model. Examples illustrated that analysis of variance of primary variables provides a tool for identifying significant differences in growth rates. The analysis of variance imposes restrictions on experimental design, thereby eliminating some advantages of the functional growth analysis.
On the noise variance of a digital mammography system
International Nuclear Information System (INIS)
Burgess, Arthur
2004-01-01
A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel …
76 FR 78698 - Proposed Revocation of Permanent Variances
2011-12-19
DEPARTMENT OF LABOR, Occupational Safety and Health Administration [Docket No. OSHA-2011-0054]. Proposed Revocation of Permanent Variances. AGENCY: Occupational Safety and Health Administration (OSHA). ... The Occupational Safety and Health Administration ("OSHA" or "the Agency") granted permanent variances to 24 companies engaged in the ...
variance components and genetic parameters for live weight
African Journals Online (AJOL)
admin
Against this background the present study estimated the (co)variance … Starting values for the (co)variance components of two-trait models were … Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal effects …
Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov
2016-01-01
Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
Analysis of force variance for a continuous miner drum using the Design of Experiments method
Energy Technology Data Exchange (ETDEWEB)
S. Somanchi; V.J. Kecojevic; C.J. Bise [Pennsylvania State University, University Park, PA (United States)
2006-06-15
Continuous miners (CMs) are excavating machines designed to extract a variety of minerals by underground mining. The variance in force experienced by the cutting drum is a very important aspect that must be considered during drum design. A uniform variance essentially means that an equal load is applied on the individual cutting bits and this, in turn, enables better cutting action, greater efficiency, and longer bit and machine life. There are certain input parameters used in the drum design whose exact relationships with force variance are not clearly understood. This paper determines (1) the factors that have a significant effect on the force variance of the drum and (2) the values that can be assigned to these factors to minimize the force variance. A computer program, Continuous Miner Drum (CMD), was developed in collaboration with Kennametal, Inc. to facilitate the mechanical design of CM drums. CMD also facilitated data collection for determining significant factors affecting force variance. Six input parameters, including centre pitch, outer pitch, balance angle, shift angle, set angle and relative angle, were tested at two levels. Trials were configured using the Design of Experiments (DoE) method, where a 2^6 full-factorial experimental design was selected to investigate the effect of these factors on force variance. Results from the analysis show that all parameters except balance angle, as well as their interactions, significantly affect the force variance.
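A generic sketch of a two-level full-factorial analysis of this kind, with a made-up response standing in for the force variance computed by the drum-design program:

```python
import itertools
import numpy as np

# Build the 2^6 design matrix in coded units (-1/+1) and estimate main
# effects; the factor names match the paper, the effects are assumed.
factors = ["centre_pitch", "outer_pitch", "balance_angle",
           "shift_angle", "set_angle", "relative_angle"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

rng = np.random.default_rng(9)
true_effects = np.array([3.0, 2.0, 0.1, 1.5, 0.8, 0.7])  # illustrative only
response = design @ true_effects + rng.normal(0, 0.5, len(design))

# main effect = mean(response at +1) - mean(response at -1)
for name, col in zip(factors, design.T):
    effect = response[col == 1].mean() - response[col == -1].mean()
    print(f"{name}: {effect:+.2f}")
```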
Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).
Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars
2013-10-17
Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in the degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) to quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon, and (2) to test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than that of the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and from a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro …
The Distribution of the Sample Minimum-Variance Frontier
Raymond Kan; Daniel R. Smith
2008-01-01
In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us to …
Directory of Open Access Journals (Sweden)
Salabura Piotr
2017-01-01
Full Text Available The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose the properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.
Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability
DEFF Research Database (Denmark)
Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco
We develop a joint framework linking the physical variance and its risk-neutral expectation, implying variance risk premia that are persistent, appropriately reacting to changes in the level and variability of the variance, and naturally satisfying the sign constraint. Using option market data and realized variances … events and only marginally by the premium associated with normal price fluctuations.
Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging
Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-01-01
One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfactory, due to its high level of sidelobes and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV), is proposed to enhance the image quality compared to …
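For reference, the non-adaptive DAS baseline is simple to sketch; the geometry, sampling parameters, and RF data below are all placeholders, and adaptive methods such as MV replace the uniform averaging with data-dependent weights:

```python
import numpy as np

c, fs = 1540.0, 40e6                     # speed of sound (m/s), sampling rate
elements = np.linspace(-0.01, 0.01, 16)  # 16-element linear array (m)
n_samples = 2048
rng = np.random.default_rng(10)
rf = rng.normal(0, 0.01, (len(elements), n_samples))  # placeholder RF data

def das_pixel(rf, x, z):
    # delay each element's signal by its (one-way, PA) travel time and sum
    value = 0.0
    for i, xe in enumerate(elements):
        t = np.hypot(x - xe, z) / c
        s = int(round(t * fs))
        if s < n_samples:
            value += rf[i, s]
    return value / len(elements)          # uniform weights = DAS

print(das_pixel(rf, 0.0, 0.02))           # beamformed amplitude at (0, 2 cm)
```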
Diffusion-Based Trajectory Observers with Variance Constraints
DEFF Research Database (Denmark)
Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo
Diffusion-based trajectory observers have been recently proposed as a simple and efficient framework to solve diverse smoothing problems in underwater navigation, for instance, to obtain estimates of the trajectories of an underwater vehicle given position fixes from an acoustic positioning system. The observer gain sets the degree of smoothing and is determined by resorting to trial and error. This paper presents a methodology to choose the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented …
Variational Variance Reduction for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2001-01-01
A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions.
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using a weighted least squares method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and the individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are conducted in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
Gene set analysis using variance component tests.
Huang, Yen-Tsung; Lin, Xihong
2013-06-28
Gene set analyses have become increasingly important in genomic research, as many complex diseases are driven jointly by alterations of numerous genes. Genes often coordinate as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects obtained by assuming a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulated data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulation and a diabetes microarray dataset.
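To make the permutation step concrete, here is a generic permutation-based gene-set test in the same spirit as TEGS but deliberately simplified: no working covariance matrix is modelled, and the statistic is a plain sum of squared per-gene mean differences between exposure groups:

```python
# Simplified permutation gene-set test (not the exact TEGS statistic).
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 25                           # 40 samples, 25 genes in the set
x = rng.integers(0, 2, size=n)          # binary exposure status
expr = rng.standard_normal((n, p))
expr[x == 1] += 0.4                     # shared shift -> correlated gene effects

def set_stat(x, expr):
    # Sum of squared per-gene mean differences between exposure groups.
    d = expr[x == 1].mean(axis=0) - expr[x == 0].mean(axis=0)
    return (d ** 2).sum()

obs = set_stat(x, expr)
perm = np.array([set_stat(rng.permutation(x), expr) for _ in range(2000)])
print("permutation p-value:", (1 + (perm >= obs).sum()) / (1 + perm.size))
```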
An Empirical Temperature Variance Source Model in Heated Jets
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only the Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
Hidden temporal order unveiled in stock market volatility variance
Directory of Open Access Journals (Sweden)
Y. Shapira
2011-06-01
Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behavior. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of daily returns, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.
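The shuffle comparison at the heart of this result is easy to sketch: compute the spread of segment variances for a series with volatility clustering and for a shuffled copy (synthetic data, not the market series used in the paper):

```python
# Shuffle test: temporal order inflates the spread of segment variances.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# Placeholder "returns" with volatility clustering (AR(1) in the scale).
scale = np.empty(n); scale[0] = 1.0
for t in range(1, n):
    scale[t] = 0.98 * scale[t - 1] + 0.02 + 0.05 * rng.standard_normal()
returns = np.abs(scale) * rng.standard_normal(n)

def segment_var_spread(x, seg_len=250):
    segs = x[: len(x) // seg_len * seg_len].reshape(-1, seg_len)
    return segs.var(axis=1).var()   # variance of the segment variances

print("original:", segment_var_spread(returns))
print("shuffled:", segment_var_spread(rng.permutation(returns)))
# Shuffling destroys the temporal structure, collapsing the spread.
```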
Concentration variance decay during magma mixing: a volcanic chronometer.
Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B
2015-09-21
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process, and its decay (CVD) with time is an inevitable consequence of the progress of magma mixing. In order to calibrate this petrological/volcanological clock, we have performed a time series of high-temperature magma mixing experiments. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration, the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique, we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in the future, in order to constrain typical "mixing to eruption" time lapses so that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
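A minimal sketch of the CVD clock, assuming invented variance-versus-time data rather than the paper's experimental calibration: fit an exponential decay and invert it to recover a mixing-to-eruption time:

```python
# Exponential concentration-variance decay: calibrate, then invert the clock.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.array([0.0, 10.0, 20.0, 40.0, 80.0])          # mixing time [min] (assumed)
var = np.exp(-0.05 * t) * (1 + 0.03 * rng.standard_normal(t.size))

model = lambda t, v0, k: v0 * np.exp(-k * t)
(v0, k), _ = curve_fit(model, t, var, p0=(1.0, 0.1))

var_observed = 0.2                                   # variance measured in a product
t_mixing = np.log(v0 / var_observed) / k             # invert the calibrated decay
print(f"decay rate k = {k:.3f} 1/min, inferred mixing time = {t_mixing:.1f} min")
```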
Explicit formulas for the variance of discounted life-cycle cost
International Nuclear Information System (INIS)
Noortwijk, Jan M. van
2003-01-01
In life-cycle costing analyses, optimal design is usually achieved by minimising the expected value of the discounted costs. As well as the expected value, the corresponding variance may be useful for estimating, for example, the uncertainty bounds of the calculated discounted costs. However, general explicit formulas for calculating the variance of the discounted costs over an unbounded time horizon are not yet available. In this paper, explicit formulas for this variance are presented. They can be easily implemented in software to optimise structural design and maintenance management. The use of the mathematical results is illustrated with some examples
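Although the paper's contribution is closed-form formulas, a Monte Carlo sketch shows the quantity in question; the discount rate, cost and renewal distribution below are all assumptions:

```python
# Monte Carlo estimate of the mean and variance of discounted renewal costs.
import numpy as np

rng = np.random.default_rng(4)
r, horizon, cost = 0.04, 200.0, 10.0      # discount rate, years, cost per renewal

def discounted_cost():
    t, total = 0.0, 0.0
    while True:
        t += rng.exponential(scale=7.0)   # renewal interval ~ Exp(mean 7 yr)
        if t > horizon:
            return total
        total += cost * np.exp(-r * t)    # continuously discounted cost

samples = np.array([discounted_cost() for _ in range(20000)])
print("mean:", samples.mean(), "variance:", samples.var(ddof=1))
```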
Thermospheric mass density model error variance as a function of time scale
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
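The core spectral step can be sketched as follows, with a synthetic residual series standing in for the accelerometer- and orbit-derived data-minus-model residuals:

```python
# Power spectral density of model residuals, with a power-law fit.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
n = 8192                                   # hourly residuals (assumed cadence)
resid = np.cumsum(rng.standard_normal(n))  # random walk ~ power-law spectrum

f, psd = welch(resid, fs=1.0, nperseg=1024)    # fs = 1 sample per hour
mask = f > 0
slope, intercept = np.polyfit(np.log(f[mask]), np.log(psd[mask]), 1)
print(f"fitted spectral slope: {slope:.2f} (approximately a power-law process)")
```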
Regional sensitivity analysis using revised mean and variance ratio functions
International Nuclear Information System (INIS)
Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen
2014-01-01
The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index. It studies how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and it measures the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
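Since the abstract cites the Ishigami test function, here is a minimal single-loop Monte Carlo sketch of a classical (unrevised) variance ratio when one input's range is reduced:

```python
# Variance ratio for the Ishigami function with a restricted input range.
import numpy as np

rng = np.random.default_rng(6)
a, b, n = 7.0, 0.1, 200000

def ishigami(x1, x2, x3):
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

X = rng.uniform(-np.pi, np.pi, size=(3, n))
y = ishigami(*X)

inside = np.abs(X[0]) < np.pi / 2          # reduce the range of input x1
print("variance ratio V(y | x1 restricted) / V(y):", y[inside].var() / y.var())
```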
Worldwide variance in the potential utilization of Gamma Knife radiosurgery.
Hamilton, Travis; Dade Lunsford, L
2016-12-01
OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.
Waste Isolation Pilot Plant no-migration variance petition
International Nuclear Information System (INIS)
1990-01-01
Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful "no-migration" demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989.
Deterministic mean-variance-optimal consumption and investment
DEFF Research Database (Denmark)
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...
Variance decomposition-based sensitivity analysis via neural networks
International Nuclear Information System (INIS)
Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo
2003-01-01
This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.
Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.
Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan
2013-01-01
Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.
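A toy version of a variance-heterogeneity scan, using a Brown-Forsythe (median-centered Levene) test at each marker; the published VGWAS machinery is considerably richer:

```python
# Per-marker test of unequal expression variance between genotype classes.
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(7)
n, n_markers = 200, 50
geno = rng.integers(0, 2, size=(n, n_markers))    # haploid genotypes (0/1)
expr = rng.standard_normal(n)
expr[geno[:, 0] == 1] *= 2.0                      # marker 0 controls the variance

pvals = [levene(expr[g == 0], expr[g == 1], center="median")[1]
         for g in geno.T]
print("smallest p-value at marker:", int(np.argmin(pvals)), min(pvals))
```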
Assessment of ulnar variance: a radiological investigation in a Dutch population
Energy Technology Data Exchange (ETDEWEB)
Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)
2001-11-01
Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)
Genetic control of residual variance of yearling weight in Nellore beef cattle.
Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R
2017-04-01
There is evidence for genetic variability in the residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of two statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, the accuracy of EBV for residual variance, and cross-validation to assess the predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity for selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for the two-step approach and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting...
Directory of Open Access Journals (Sweden)
N.S. Mohan
2010-09-01
Full Text Available Polymer-based composite materials possess superior properties such as a high strength-to-weight ratio, a high stiffness-to-weight ratio and good corrosion resistance, and are therefore attractive for high-performance applications in the aerospace, defense and sporting goods industries. Drilling is one of the indispensable methods for building products with composite panels. Surface quality and dimensional accuracy play an important role in the performance of a machined component. In machining processes, however, the quality of the component is greatly influenced by the cutting conditions, tool geometry, tool material, machining process, chip formation, workpiece material, tool wear and vibration during cutting. Drilling tests were conducted on glass fiber reinforced plastic (GFRP) composite laminates using an instrumented CNC milling center. A series of experiments was conducted using a TRIAC VMC CNC machining center to correlate the cutting parameters and material parameters with the cutting thrust, torque and surface roughness. The measured results were collected and analyzed with the help of the commercial software packages MINITAB14 and Taly Profile. The surface roughness of the drilled holes was measured using a Rank Taylor Hobson Surtronic 3+ instrument. The method can be useful in predicting thrust, torque and surface roughness as functions of the process variables. The main objective is to optimize the process parameters to achieve low cutting thrust, low torque and good surface roughness. From the analysis it is evident that, among all the significant parameters, speed and drill size have a significant influence on the cutting thrust, while drill size and specimen thickness influence the torque and surface roughness. It was also found that feed rate does not have a significant influence on the characteristic output of the drilling process.
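A minimal one-way ANOVA of a single drilling factor, with invented thrust readings; the study's actual design crosses several factors:

```python
# One-way ANOVA: does mean thrust differ across spindle speed levels?
import numpy as np
from scipy.stats import f_oneway

thrust_speed_low  = np.array([55.1, 57.3, 54.8, 56.0])   # N, hypothetical
thrust_speed_mid  = np.array([49.9, 51.2, 50.4, 48.7])
thrust_speed_high = np.array([44.0, 45.6, 43.1, 44.9])

F, p = f_oneway(thrust_speed_low, thrust_speed_mid, thrust_speed_high)
print(f"F = {F:.2f}, p = {p:.4f}")   # small p => speed affects mean thrust
```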
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Angulo-Molina, Aracely
2017-04-01
In this paper a new methodology is presented to detect and differentiate melanoma cells from normal cells through the averaged variances of 1D signatures calculated with a binary mask. The sample images were obtained from histological sections of a mouse melanoma tumor of 4 [Formula: see text] in thickness and contrasted with normal cells. The results show that melanoma cells present a well-defined range of averaged variance values obtained from the signatures under the four conditions used.
Variance swap payoffs, risk premia and extreme market conditions
DEFF Research Database (Denmark)
Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco
This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic....... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios....
Towards a mathematical foundation of minimum-variance theory
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)
2002-08-30
The minimum-variance theory, which accounts for arm and eye movements with noisy signal inputs, was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and obtain analytical solutions. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)
International Nuclear Information System (INIS)
Al-Hadeethi, Farqad; Al-Nimr, Moh'd; Al-Safadi, Mohammad
2015-01-01
The performance of a PEM (proton exchange membrane) fuel cell was experimentally investigated at three temperatures (30, 50 and 70 °C), four flow rates (5, 10, 15 and 20 ml/min) and two flow patterns (co-current and counter-current) in order to generate two correlations using multiple regression analysis together with ANOVA. Results revealed that increasing the temperature for the co-current and counter-current flow patterns increases hydrogen and oxygen diffusivities, improves water management and increases membrane conductivity. The derived mathematical correlations and three-dimensional mapping (i.e., response surfaces) for the co-current and counter-current flow patterns showed that there is a clear interaction among the variables studied (temperatures and flow rates). - Highlights: • Mathematical correlations for the performance of the PEM fuel cell are generated using multiple regression analysis together with ANOVA. • 3D mapping is used to diagnose the optimum performance of the PEM fuel cell at the given operating conditions. • Results revealed that increasing the flow rate has a direct influence on the consumption of oxygen. • Results confirmed that increasing the temperature in co-current and counter-current flow patterns increases the performance of the PEM fuel cell.
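A sketch of the regression-plus-ANOVA workflow described above, with invented voltage data and hypothetical variable names:

```python
# Fit cell voltage against temperature and flow rate with an interaction,
# then read off the ANOVA table of the regression terms.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
temp = np.repeat([30, 50, 70], 8)                 # deg C
flow = np.tile([5, 10, 15, 20], 6)                # ml/min
volt = (0.5 + 0.004 * temp + 0.01 * flow + 0.0001 * temp * flow
        + 0.02 * rng.standard_normal(temp.size))  # invented response
df = pd.DataFrame({"temp": temp, "flow": flow, "volt": volt})

fit = smf.ols("volt ~ temp * flow", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))              # ANOVA of the fitted model
```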
RR-Interval variance of electrocardiogram for atrial fibrillation detection
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, known as the RR interval. The irregularity can be represented using the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we obtain good atrial fibrillation detection performance.
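A minimal sliding-window detector in the spirit of the paper; the window length and variance threshold below are assumptions, not the authors' tuned values:

```python
# Flag atrial fibrillation when recent RR-interval variance is high.
import numpy as np

def af_flags(rr, window=20, threshold=0.01):
    """rr: RR intervals in seconds; returns one boolean flag per window end."""
    flags = []
    for i in range(window, len(rr) + 1):
        flags.append(np.var(rr[i - window:i]) > threshold)
    return np.array(flags)

rng = np.random.default_rng(9)
rr_normal = 0.8 + 0.02 * rng.standard_normal(100)   # regular rhythm
rr_af     = 0.8 + 0.20 * rng.standard_normal(100)   # irregular rhythm
rr = np.concatenate([rr_normal, rr_af])
print("fraction of windows flagged:", af_flags(rr).mean())
```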
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Energy Technology Data Exchange (ETDEWEB)
Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.
Yang, Ye; Christensen, Ole F; Sorensen, Daniel
2011-02-01
Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
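To see the scale effect the authors investigate, here is a small Box-Cox illustration on synthetic family-structured data: the mean-variance relationship across groups weakens after transformation:

```python
# Box-Cox transformation weakening a mean-variance relationship.
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(10)
family_means = rng.uniform(1.0, 3.0, size=50)
data = [rng.lognormal(mean=np.log(m), sigma=0.3, size=30) for m in family_means]

y_all, lam = boxcox(np.concatenate(data))        # ML estimate of lambda
print("Box-Cox lambda:", round(lam, 2))

def mean_var_corr(groups):
    m = np.array([g.mean() for g in groups])
    v = np.array([g.var(ddof=1) for g in groups])
    return np.corrcoef(m, v)[0, 1]

transformed = np.split(y_all, 50)
print("mean-variance corr, raw:        ", round(mean_var_corr(data), 2))
print("mean-variance corr, transformed:", round(mean_var_corr(transformed), 2))
```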
Variance analysis of forecasted streamflow maxima in a wet temperate climate
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling method, bias correction, extreme value method, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers must strictly adhere to the rules of extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as on hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events in the wet temperate region studied. The variance of the maxima projections was dominated by climate model factors and extreme value analyses.
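For the extreme value step, a minimal annual-maxima sketch: fit a GEV distribution to synthetic streamflow maxima and read off 2-, 20- and 100-year return levels:

```python
# GEV fit to annual streamflow maxima and return-level estimates.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)
annual_maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0,
                               size=60, random_state=rng)   # m^3/s, synthetic

c, loc, scale = genextreme.fit(annual_maxima)
for T in (2, 20, 100):
    level = genextreme.isf(1.0 / T, c, loc, scale)   # exceeded once per T years
    print(f"{T:3d}-year flow: {level:7.1f} m^3/s")
```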
Global Distributions of Temperature Variances At Different Stratospheric Altitudes From Gps/met Data
Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.
The GPS/MET measurements at altitudes of 5-35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second-order polynomial approximations in 5-7 km thick layers centered at 10, 20 and 30 km. Temperature deviations from the averaged values and their variances obtained for each profile are averaged for each month of the year during the GPS/MET experiment. Global distributions of temperature variances have an inhomogeneous structure. The locations and latitude distributions of the maxima and minima of the variances depend on altitude and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of the GPS/MET data analysis.
Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.
Weaver, Bruce; Black, Ryan A
2015-06-01
Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.
Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes
DEFF Research Database (Denmark)
Højgaard, Bjarne; Vigna, Elena
We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean...... as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate...... that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting....
The variance of the locally measured Hubble parameter explained with different estimators
DEFF Research Database (Denmark)
Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob
2017-01-01
We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N......-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend...... to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light...
Directory of Open Access Journals (Sweden)
Carlos Gervasoni
2010-01-01
Full Text Available This paper presents an expert-based operationalization strategy to measure the degree of democracy in the Argentine provinces. Starting with a mainstream and “thick” definition of regime type, I assess each of its aspects using a subjective or perception-based approach that taps the knowledge of experts on the politics of each province. I present and justify the methodological design of the resulting Survey of Experts on Provincial Politics (SEPP) and conduct a preliminary analysis of its results. Some aspects of the provincial regimes appear to be clearly democratic, while others are mixed or even leaning towards authoritarianism. Moreover, some show little interprovincial variance, while others vary considerably from province to province. An analysis of the central tendency and dispersion of the survey items allows for a general description of the Argentine provincial regimes. Inclusion is the most democratic dimension, while the effectiveness of institutional constraints on the power of the Executive is the most deficient. Electoral contestation is generally free of traditional forms of fraud, but incumbents often command far more campaign resources and media attention than do their challengers. Physical repression is rare, but opponents in some provinces face subtler forms of punishment. While the survey does not uncover any clear cases of subnational authoritarianism, stricto sensu, provincial regimes do vary significantly from basically democratic to clearly hybrid.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction
Directory of Open Access Journals (Sweden)
Ling Huang
2017-02-01
Full Text Available The ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and with interpolations based on spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU; 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method produces estimates whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the...
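An ordinary-kriging sketch with a fixed exponential covariance plus a nugget for observational noise; unlike the paper, the variance components here are assumed rather than estimated:

```python
# Ordinary kriging of TEC observations at scattered stations (toy data).
import numpy as np

rng = np.random.default_rng(12)
xy = rng.uniform(0, 10, size=(30, 2))                     # station coordinates
tec = 28.0 + np.sin(xy[:, 0]) + rng.normal(0, 0.5, 30)    # TECU observations

sill, corr_len, nugget = 4.0, 5.0, 0.25                   # assumed parameters
cov = lambda h: sill * np.exp(-h / corr_len)              # exponential covariance

def krige(x0):
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    K = cov(d) + nugget * np.eye(len(xy))                 # signal + noise
    k = cov(np.linalg.norm(xy - x0, axis=1))
    # Ordinary kriging: add the unbiasedness constraint sum(w) = 1.
    A = np.block([[K, np.ones((len(xy), 1))],
                  [np.ones((1, len(xy))), np.zeros((1, 1))]])
    w = np.linalg.solve(A, np.append(k, 1.0))[:-1]
    return w @ tec

print("interpolated TEC at (5, 5):", round(krige(np.array([5.0, 5.0])), 1), "TECU")
```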
Mixed emotions: Sensitivity to facial variance in a crowd of faces.
Haberman, Jason; Lee, Pegan; Whitney, David
2015-01-01
The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimating the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that, for both dichotomous and continuous data, the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
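For concreteness, the (challenged but ubiquitous) DerSimonian-Laird moment estimator of the between-study variance, applied to an invented five-study meta-analysis:

```python
# DerSimonian-Laird estimator of the between-study variance tau^2.
import numpy as np

effect = np.array([0.30, 0.10, 0.45, 0.22, 0.05])   # study effect sizes
var_i  = np.array([0.04, 0.09, 0.05, 0.02, 0.12])   # within-study variances

w = 1.0 / var_i                                     # fixed-effect weights
mu_fe = (w * effect).sum() / w.sum()
Q = (w * (effect - mu_fe) ** 2).sum()               # Cochran's Q
k = len(effect)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
print(f"Q = {Q:.2f}, tau^2 (DL) = {tau2:.3f}")
```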
An elementary components of variance analysis for multi-center quality control
International Nuclear Information System (INIS)
Munson, P.J.; Rodbard, D.
1977-01-01
The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of the standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display of the data aids visual understanding of the data. A plot of the ranked standard deviation vs. the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.)
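The basic allocation step can be sketched for a balanced laboratories-by-replicates layout using method-of-moments (expected mean squares) estimators; the data are invented:

```python
# Method-of-moments variance components for a balanced one-way layout.
import numpy as np

rng = np.random.default_rng(13)
labs, reps = 8, 5
true_lab = rng.normal(0, 1.0, size=(labs, 1))            # between-lab effects
y = 10.0 + true_lab + rng.normal(0, 0.5, size=(labs, reps))

lab_means, grand = y.mean(axis=1), y.mean()
msb = reps * ((lab_means - grand) ** 2).sum() / (labs - 1)          # between MS
msw = ((y - lab_means[:, None]) ** 2).sum() / (labs * (reps - 1))   # within MS

sigma2_within = msw
sigma2_between = max(0.0, (msb - msw) / reps)   # E[MSB] = sigma_w^2 + n*sigma_b^2
print(f"within-lab variance:  {sigma2_within:.3f}")
print(f"between-lab variance: {sigma2_between:.3f}")
```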
Static models, recursive estimators and the zero-variance approach
Rubino, Gerardo
2016-01-07
When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called the zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, namely the Bounded Relative Error property. Some examples illustrate the results.
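A toy rare-event example in the same territory: importance sampling with a shifted proposal for a normal tail probability. This is plain importance sampling, not the zero-variance approximation itself, which would require the (unknown) optimal change of measure:

```python
# Importance sampling for the rare event P(X > 4), X ~ N(0, 1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(14)
n, shift = 100000, 4.0

x = rng.normal(loc=shift, size=n)                  # sample from N(shift, 1)
weights = norm.pdf(x) / norm.pdf(x, loc=shift)     # likelihood ratios
est = np.mean((x > 4.0) * weights)

print("IS estimate:", est, " exact:", norm.sf(4.0))
print("relative error:", np.std((x > 4.0) * weights, ddof=1) / np.sqrt(n) / est)
```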
Interdependence of NAFTA capital markets: A minimum variance portfolio approach
Directory of Open Access Journals (Sweden)
López-Herrera Francisco
2014-01-01
Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model
Deng, Guang-Feng; Lin, Woo-Tsong
This work presents Ant Colony Optimization (ACO), initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now, so the use of heuristic algorithms is imperative. Numerical solutions are obtained for five analyses of weekly price data for the period March 1992 to September 1997 for the following indices: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
Minimum variance linear unbiased estimators of loss and inventory
International Nuclear Information System (INIS)
Stewart, K.B.
1977-01-01
The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data from a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program, could be used with a desk calculator, and can be applied recursively from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs
Chen, Jie; Hu, Jiangnan
2017-06-01
Industry 4.0 and lean production have become the focus of manufacturing. A current issue is to analyse the performance of assembly line balancing. This study focuses on distinguishing the factors that influence assembly line balancing. The one-way ANOVA method is applied to explore how significant each of the distinguished factors is, and a regression model is built to find the key points. The maximal task time (tmax), the quantity of tasks (n), and the degree of convergence of the precedence graph (conv) are critical for the performance of assembly line balancing. These conclusions should benefit lean production in manufacturing.
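For reference, a one-way ANOVA such as the one applied here can be run in a few lines; the sketch below uses SciPy on hypothetical line-balancing performance measurements grouped by three levels of a single factor (the factor names from the study, such as tmax, serve only as labels).

```python
from scipy import stats

# Hypothetical balancing-performance scores at three levels of one factor (e.g., tmax)
level_low = [0.81, 0.79, 0.84, 0.80, 0.83]
level_mid = [0.76, 0.74, 0.78, 0.77, 0.75]
level_high = [0.69, 0.71, 0.68, 0.72, 0.70]

f_stat, p_value = stats.f_oneway(level_low, level_mid, level_high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p: the factor level affects performance
```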
ANALISIS PORTOFOLIO RESAMPLED EFFICIENT FRONTIER BERDASARKAN OPTIMASI MEAN-VARIANCE
Abdurakhman, Abdurakhman
2008-01-01
Appropriate asset allocation decisions in portfolio investment can maximize returns and/or minimize risk. The method most often used in portfolio optimization is the Markowitz mean-variance method. In practice, this method has the weakness of not being very stable: a small change in the estimated input parameters causes a large change in the portfolio composition. For this reason, a portfolio optimization method has been developed that can overcome the instability of the mean-variance method ...
Capturing option anomalies with a variance-dependent pricing kernel
Christoffersen, P.; Heston, S.; Jacobs, K.
2013-01-01
We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic.
Realized range-based estimation of integrated variance
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
2007-01-01
We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
Diagnostic checking in linear processes with infinite variance
Krämer, Walter; Runde, Ralf
1998-01-01
We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.
Evaluation of Mean and Variance Integrals without Integration
Joarder, A. H.; Omar, M. H.
2007-01-01
The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Because the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving the mean and variance through differential…
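The abstract is truncated, but assuming the technique is differentiation (for instance, of the moment generating function), a standard illustration for the exponential distribution runs as follows; this is a generic textbook derivation, not necessarily the note's own.

```latex
% Mean and variance of the exponential distribution via differentiation of the MGF,
% avoiding integration by parts.
\[
M(t) = \mathbb{E}\!\left[e^{tX}\right] = \frac{\lambda}{\lambda - t}, \qquad t < \lambda,
\]
\[
\mathbb{E}[X] = M'(0) = \frac{1}{\lambda}, \qquad
\mathbb{E}[X^2] = M''(0) = \frac{2}{\lambda^2}, \qquad
\operatorname{Var}(X) = \mathbb{E}[X^2] - \mathbb{E}[X]^2 = \frac{1}{\lambda^2}.
\]
```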
Adjustment of heterogenous variances and a calving year effect in ...
African Journals Online (AJOL)
Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have a lower mean and lower variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.
Beyond the Mean: Sensitivities of the Variance of Population Growth.
Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad
2013-03-01
Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, and the probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about the overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.
Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.
Diaz, S Anaid; Viney, Mark
2014-06-01
Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that, therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recently wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.
On the Endogeneity of the Mean-Variance Efficient Frontier.
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
42 CFR 456.522 - Content of request for variance.
2010-10-01
... 42 Public Health 4, 2010-10-01. Content of request for variance. Section 456.522, Public Health, CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...
29 CFR 1905.5 - Effect of variances.
2010-07-01
... Regulations Relating to Labor (Continued), OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR, ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970, General, § 1905.5 Effect of variances. All variances... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...
29 CFR 1904.38 - Variances from the recordkeeping rule.
2010-07-01
..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...
Gender Variance and Educational Psychology: Implications for Practice
Yavuz, Carrie
2016-01-01
Gender variance appears to be increasingly visible in both the media and everyday life, yet within the educational psychology literature it remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment, and education authority/council means that they are…
On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.
Thompson, William Hedley; Fransson, Peter
2016-12-01
Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
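As a pointer to how the two transformations discussed here are computed in practice, the sketch below applies a Fisher transformation followed by a Box-Cox transformation to a hypothetical sliding-window connectivity time series; it is a schematic of the combined strategy, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical dynamic functional connectivity time series:
# sliding-window Pearson correlations between two fMRI signals
r_t = np.clip(0.4 + 0.2 * np.sin(np.linspace(0, 8, 200))
              + 0.1 * rng.standard_normal(200), -0.99, 0.99)

z_t = np.arctanh(r_t)              # Fisher transformation (variance stabilization)

# Box-Cox requires strictly positive inputs, so shift the Fisher-transformed series
shifted = z_t - z_t.min() + 1e-3
bc_t, lam = stats.boxcox(shifted)  # Box-Cox transformation, lambda fitted by ML
print(lam)
```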
Origin and consequences of the relationship between protein mean and variance.
Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David
2014-01-01
Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law-like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law-like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.
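A power-law relationship like σ² ∝ μ^k is typically estimated as the slope of a straight line on a log-log scale; the sketch below fits such an exponent to hypothetical (mean, variance) pairs, as a generic illustration of the relationship described above rather than the authors' stochastic model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-gene protein means and cell-to-cell variances
mu = 10 ** rng.uniform(1, 4, size=500)                        # mean expression levels
sigma2 = mu ** 1.69 * 10 ** (0.2 * rng.standard_normal(500))  # noisy power law

# log(sigma^2) = k * log(mu) + c  =>  the slope k is the power-law exponent
k, c = np.polyfit(np.log10(mu), np.log10(sigma2), deg=1)
print(f"estimated exponent: {k:.2f}")  # should be close to 1.69
```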
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R²-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence interval; the F-test, lack-of-fit test and residual normal probability plots implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
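Sobol' variance decomposition of the kind applied here is available in open-source tooling; the sketch below computes first- and total-order indices for a toy two-factor model with the SALib package. The factor names and the response function are hypothetical stand-ins for the fitted regression models above.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical two-factor problem standing in for the fitted regression model
problem = {
    "num_vars": 2,
    "names": ["blend_ratio", "heating_rate"],
    "bounds": [[0.0, 1.0], [5.0, 40.0]],
}

X = saltelli.sample(problem, 1024, calc_second_order=True)

# Toy response surface playing the role of the best-fit regression model
Y = 3.0 * X[:, 0] + 0.05 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]

res = sobol.analyze(problem, Y, calc_second_order=True)
print(res["S1"], res["ST"])  # first-order and total-order sensitivity indices
```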
Feynman variance-to-mean method
International Nuclear Information System (INIS)
Dowdy, E.J.; Hansen, G.E.; Robba, A.A.
1985-01-01
The Feynman and other fluctuation techniques have been shown to be useful for determining the multiplication of subcritical systems. The moments of the counting distribution from neutron detectors are analyzed to yield the multiplication value. The authors present the methodology and some selected applications, results, and comparisons with Monte Carlo calculations.
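The Feynman method reduces to computing the excess of the variance-to-mean ratio of gated neutron counts over that of a Poisson process; a minimal sketch of the statistic, on hypothetical count data, follows.

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y statistic: excess variance-to-mean ratio of gated counts.

    For a Poisson (non-multiplying) source, Y = 0; correlated fission
    chains in a multiplying system give Y > 0.
    """
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0

# Hypothetical detector counts collected in fixed time gates
rng = np.random.default_rng(2)
poisson_counts = rng.poisson(20.0, size=10_000)
print(feynman_y(poisson_counts))  # close to 0 for a pure Poisson source
```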
MINIMUM VARIANCE BETA ESTIMATION WITH DYNAMIC CONSTRAINTS,
...developed (at AFETR) and is being used to isolate the primary error sources in the beta estimation task. This computer program is additionally used to... determine what success in beta estimation can be achieved with foreseeable instrumentation accuracies. Results are included that illustrate the effects on...
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
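The Dk statistic described above is simple to compute from a relationship matrix: the average self-relationship (diagonal) minus the average relationship over all pairs. A minimal sketch on a hypothetical genomic relationship matrix:

```python
import numpy as np

def dk_statistic(K):
    """Dk = average self-relationship minus average (self- and across-) relationship."""
    K = np.asarray(K, dtype=float)
    return np.mean(np.diag(K)) - np.mean(K)

# Hypothetical 4x4 relationship matrix for a reference population
K = np.array([[1.00, 0.10, 0.05, 0.02],
              [0.10, 1.05, 0.20, 0.08],
              [0.05, 0.20, 0.98, 0.12],
              [0.02, 0.08, 0.12, 1.02]])

sigma2_g = 0.40                    # estimated genetic variance from the mixed model
print(sigma2_g * dk_statistic(K))  # expected genetic variance in the reference population
```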
Variance computations for functionals of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
Estimating High-Frequency Based (Co-) Variances: A Unified Approach
DEFF Research Database (Denmark)
Voev, Valeri; Nolte, Ingmar
We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...
Syntactic Variance and Priming Effects in Translation
DEFF Research Database (Denmark)
Bangalore, Srinivas; Behrens, Bergljot; Carl, Michael
2016-01-01
The present work investigates the relationship between syntactic variation and priming in translation. It is based on the claim that languages share a common cognitive network of neural activity. When the source and target languages are solicited in a translation context, this shared network can...... lead to facilitation effects, so-called priming effects. We suggest that priming is a default setting in translation, a special case of language use where source and target languages are constantly co-activated. Such priming effects are not restricted to lexical elements, but do also occur...... on the syntactic level. We tested these hypotheses with translation data from the TPR database, more specifically for three language pairs (English-German, English-Danish, and English-Spanish). Our results show that response times are shorter when syntactic structures are shared. The model explains this through...
Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances
Deng, Wei Q; Asma, Senay; Paré, Guillaume
2014-01-01
Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene–gene and gene–environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity. PMID:23921533
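Within a single study, Levene's test for variance heterogeneity across genotype groups is a one-liner in SciPy; the sketch below contrasts trait variances across three hypothetical genotype groups (the meta-analytic combination across studies described above is not shown).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical log-BMI values for the three genotypes of a SNP
aa = rng.normal(3.2, 0.10, size=400)   # homozygous reference
ab = rng.normal(3.2, 0.13, size=300)   # heterozygous
bb = rng.normal(3.2, 0.18, size=100)   # homozygous alternate

# center='median' gives the robust Brown-Forsythe variant of Levene's test
w_stat, p_value = stats.levene(aa, ab, bb, center="median")
print(f"W = {w_stat:.2f}, p = {p_value:.2e}")
```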
Prediction-error variance in Bayesian model updating: a comparative study
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
International Nuclear Information System (INIS)
Chen Guanghong; Zambelli, Joseph; Li Ke; Bevins, Nicholas; Qi Zhihua
2011-01-01
Purpose: The noise variance versus spatial resolution relationship in differential phase contrast (DPC) projection imaging and computed tomography (CT) is derived and compared to conventional absorption-based x-ray projection imaging and CT. Methods: The scaling law for DPC-CT is theoretically derived and subsequently validated with phantom results from an experimental Talbot-Lau interferometer system. Results: For the DPC imaging method, the noise variance in the differential projection images follows the same inverse-square law with spatial resolution as in conventional absorption-based x-ray imaging projections. However, both in theory and in experimental results, in DPC-CT the noise variance scales with spatial resolution following an inverse linear relationship at fixed slice thickness. Conclusions: The scaling law in DPC-CT implies a lesser noise, and therefore dose, penalty for moving to higher spatial resolutions when compared to conventional absorption-based CT in order to maintain the same contrast-to-noise ratio.
Nordin, Norfarah; Samsudin, Mohd Ali; Hadi Harun, Abdul
2017-01-01
This research aimed to investigate whether an online problem based learning (PBL) approach to teaching the renewable energy topic improves students' behaviour towards energy conservation. A renewable energy online problem based learning (REePBaL) instruction package was developed based on the theory of constructivism and an adaptation of the online learning model. This study employed a single-group quasi-experimental design to ascertain the change in students' behaviour towards energy conservation after the intervention. The study involved 48 secondary school students in a Malaysian public school. A repeated-measures ANOVA was employed to compare scores of students' behaviour towards energy conservation before and after the intervention. Based on the findings, students' behaviour towards energy conservation improved after the intervention.
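A repeated-measures ANOVA of the pre/post kind used here can be run with statsmodels; the sketch below builds a hypothetical long-format table of behaviour scores for subjects measured at two time points (the column and factor names are invented for illustration).

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(4)
n = 48  # subjects, as in the study

# Hypothetical pre/post behaviour scores in long format
pre = rng.normal(3.1, 0.5, size=n)
post = pre + rng.normal(0.4, 0.3, size=n)  # improvement after the intervention
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "time": ["pre", "post"] * n,
    "score": np.column_stack([pre, post]).ravel(),
})

res = AnovaRM(df, depvar="score", subject="subject", within=["time"]).fit()
print(res)
```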
Directory of Open Access Journals (Sweden)
Gumieniczek Anna
2018-03-01
Full Text Available It is well known that drugs can directly react with excipients. In addition, excipients can be a source of impurities that either directly react with drugs or catalyze their degradation. Thus, binary mixtures of three diuretics, torasemide, furosemide and amiloride, with different excipients, i.e. citric acid anhydrous, povidone K25 (PVP), magnesium stearate (Mg stearate), lactose, D-mannitol, glycine, calcium hydrogen phosphate anhydrous (CaHPO4) and starch, were examined to detect interactions. High temperature and humidity or UV/VIS irradiation were applied as stressing conditions. Differential scanning calorimetry (DSC), FT-IR and NIR were used to adequately collect information. In addition, chemometric assessments of NIR signals with principal component analysis (PCA) and ANOVA were applied.
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
DEFF Research Database (Denmark)
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric...... evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation...... for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....
Host nutrition alters the variance in parasite transmission potential.
Vale, Pedro F; Choisy, Marc; Little, Tom J
2013-04-23
The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
Advanced methods of analysis variance on scenarios of nuclear prospective
International Nuclear Information System (INIS)
Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.
2011-01-01
Traditional techniques for the propagation of variance are not very reliable when uncertainties reach relative values of around 100%; for this reason, less conventional methods are used, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
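As one classical example of the variance reduction techniques referred to above, the sketch below applies antithetic variates to a plain Monte Carlo average; this is a generic textbook device on a toy integrand, not one of the homogenization-specific estimators studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

f = lambda u: np.exp(u)  # toy integrand; E[f(U)] = e - 1 for U ~ Uniform(0, 1)

# Plain Monte Carlo with 2n function evaluations
u = rng.uniform(size=2 * n)
plain = f(u)

# Antithetic variates: pair each draw v with 1 - v and average the pair
v = rng.uniform(size=n)
anti = 0.5 * (f(v) + f(1.0 - v))   # also 2n evaluations, n pair averages

print(plain.mean(), plain.var(ddof=1) / (2 * n))  # estimate, variance of the mean
print(anti.mean(), anti.var(ddof=1) / n)          # same cost, smaller variance
```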
Heritability, variance components and genetic advance of some ...
African Journals Online (AJOL)
Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.
Variance estimation in the analysis of microarray data
Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.
2009-01-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing
2017-01-01
Several talent development programs in youth soccer have implemented motor diagnostics measuring performance factors. However, the predictive value of such tests for adult success is a controversial topic in talent research. This prospective cohort study evaluated the long-term predictive value of 1) motor tests and 2) players’ speed abilities (SA) and technical skills (TS) in early adolescence. The sample consisted of 14,178 U12 players from the German talent development program. Five tests (sprint, agility, dribbling, ball control, shooting) were conducted and players’ height, weight as well as relative age were assessed at nationwide diagnostics between 2004 and 2006. In the 2014/15 season, the players were then categorized as professional (n = 89), semi-professional (n = 913), or non-professional players (n = 13,176), indicating their adult performance level (APL). The motor tests’ prognostic relevance was determined using ANOVAs. Players’ future success was predicted by a logistic regression threshold model. This structural equation model comprised a measurement model with the motor tests and two correlated latent factors, SA and TS, with simultaneous consideration for the manifest covariates height, weight and relative age. Each motor predictor and anthropometric characteristic discriminated significantly between the APL (p < .001; η2 ≤ .02). The threshold model significantly predicted the APL (R2 = 24.8%), and in early adolescence the factor TS (p < .001) seems to have a stronger effect on adult performance than SA (p < .05). Both approaches (ANOVA, SEM) verified the diagnostics’ predictive validity over a long-term period (≈ 9 years). However, because of the limited effect sizes, the motor tests’ prognostic relevance remains ambiguous. A challenge for future research lies in the integration of different (e.g., person-oriented or multilevel) multivariate approaches that expand beyond the “traditional” topic of single tests’ predictive
Röring, Johan
2017-01-01
Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...
Towards the ultimate variance-conserving convection scheme
International Nuclear Information System (INIS)
Os, J.J.A.M. van; Uittenbogaard, R.E.
2004-01-01
In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287
Problems of variance reduction in the simulation of random variables
International Nuclear Information System (INIS)
Lessi, O.
1987-01-01
The definition of the uniform linear generator is given, and some of the most commonly used tests to evaluate the uniformity and the independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is taken into account. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with a reduced variance are introduced.
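One standard way to construct a reduced-variance estimator of a moment W, in the spirit of the techniques surveyed here, is importance sampling; the following generic sketch estimates a small tail probability with and without a shifted sampling density (the example and densities are illustrative only).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 100_000
threshold = 4.0  # W = P(X > 4) for X ~ N(0, 1), a rare event

# Crude Monte Carlo: almost no samples hit the event
x = rng.standard_normal(n)
crude = (x > threshold).astype(float)

# Importance sampling: draw from N(4, 1) and reweight by the likelihood ratio
y = rng.normal(threshold, 1.0, size=n)
w = stats.norm.pdf(y) / stats.norm.pdf(y, loc=threshold)
isamp = (y > threshold) * w

exact = stats.norm.sf(threshold)
print(exact, crude.mean(), isamp.mean())
print(crude.var(ddof=1) / n, isamp.var(ddof=1) / n)  # variance drops by orders of magnitude
```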
Global Variance Risk Premium and Forex Return Predictability
Aloosh, Arash
2014-01-01
In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
2008-12-01
slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the... equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the... emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems
Temperature variance study in Monte-Carlo photon transport theory
International Nuclear Information System (INIS)
Giorla, J.
1985-10-01
We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case.
Mean-Variance Optimization in Markov Decision Processes
Mannor, Shie; Tsitsiklis, John N.
2011-01-01
We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.
The asymptotic variance of departures in critically loaded queues
Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.
2011-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} var D(t)/t = λ(1 − 2/π)(c_a² +
Variance of a potential of mean force obtained using the weighted histogram analysis method.
Cukier, Robert I
2013-11-27
A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.
An elementary components of variance analysis for multi-centre quality control
International Nuclear Information System (INIS)
Munson, P.J.; Rodbard, D.
1978-01-01
The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation''. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution, violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, ''Studentizing'' or other transformation. Non-metric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean
Using variance structure to quantify responses to perturbation in fish catches
Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.
2017-01-01
We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.
Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''
International Nuclear Information System (INIS)
Nasseri, K.K.
1987-07-01
Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined
Variance estimation in the analysis of microarray data
Wang, Yuedong
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
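To make the two variance-mean models concrete, the sketch below fits the quadratic model var = b0 + b1·μ + b2·μ² to per-gene sample means and variances. Note that this is the naive, errors-in-variables-prone fit that the paper improves upon, shown here only to fix ideas.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical microarray intensities: 2000 genes, 3 replicates each
true_mu = 10 ** rng.uniform(1, 3, size=2000)
true_sd = 0.1 * true_mu                      # constant coefficient of variation
data = true_mu[:, None] + true_sd[:, None] * rng.standard_normal((2000, 3))

m = data.mean(axis=1)                        # per-gene sample means
v = data.var(axis=1, ddof=1)                 # per-gene sample variances (few df)

# Naive quadratic variance-mean fit: var ~ b0 + b1*m + b2*m^2
b2, b1, b0 = np.polyfit(m, v, deg=2)
print(b0, b1, b2)  # biased because m estimates mu with error (errors-in-variables)
```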
Why risk is not variance: an expository note.
Cox, Louis Anthony Tony
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
Approximate zero-variance Monte Carlo estimation of Markovian unreliability
International Nuclear Information System (INIS)
Delcoux, J.L.; Labeau, P.E.; Devooght, J.
1997-01-01
Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to solve increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform simultaneously estimations of diverse quantities. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propose here a method to reduce simultaneously the variance of several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the method we propose is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)
A versatile omnibus test for detecting mean and variance heterogeneity.
Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J
2014-01-01
Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait.
Mean-Variance Portfolio Selection with Margin Requirements
Directory of Open Access Journals (Sweden)
Yuan Zhou
2013-01-01
Full Text Available We study the continuous-time mean-variance portfolio selection problem in the situation where investors must pay margin for short selling. The problem is essentially a nonlinear stochastic optimal control problem because the coefficients of the positive and negative parts of the control variables are different. We cannot apply the results of the stochastic linear-quadratic (LQ) problem, and the solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation is not smooth. Li et al. (2002) studied the case when short selling is prohibited; therefore they only needed to consider the positive part of the control variables, whereas we need to handle both the positive and negative parts. The main difficulty is that the positive part and the negative part are not independent, so the previous results are not directly applicable. By decomposing the problem into several subproblems we derive the solutions of the HJB equation in two disjoint regions and then prove that it is the viscosity solution of the HJB equation. Finally, we formulate the solution of the optimal portfolio problem and the efficient frontier. We also present two examples showing how different margin rates affect the optimal solutions and the efficient frontier.
29 CFR 4204.21 - Requests to PBGC for variances and exemptions.
2010-07-01
... WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Procedures for Individual and Class... parties. When a contributing employer withdraws from a plan as a result of related sales of assets involving several purchasers, or withdraws from more than one plan as a result of a single sale, the...
Energy Technology Data Exchange (ETDEWEB)
Negash, A. W.; Mwambi, H.; Zewotir, T.; Eweke, G.
2014-06-01
The most common procedure for analyzing multi-environmental trials is based on the assumption that the residual error variance is homogenous across all locations considered. However, this may often be unrealistic, and therefore limit the accuracy of variety evaluation or the reliability of variety recommendations. The objectives of this study were to show the advantages of mixed models with spatial variance-covariance structures, and the direct implications of model choice on the inference of varietal performance, ranking and testing based on two multi-environmental data sets from realistic national trials. A model comparison with a χ²-test for the trials in the two data sets (wheat data set BW00RVTI and barley data set BW01RVII) suggested that selected spatial variance-covariance structures fitted the data significantly better than the ANOVA model. The forms of optimally-fitted spatial variance-covariance structure, ranking and consistency ratio test were not the same from one trial (location) to the other. Linear mixed models with single-stage analysis including a spatial variance-covariance structure with a group factor of location in the random model also improved the real estimation of the genotype effect and its ranking. The model also improved varietal performance estimation because of its capacity to handle additional sources of variation, location and genotype-by-location (environment) interaction variation, and to accommodate local stationary trends. (Author)
Allowing variance may enlarge the safe operating space for exploited ecosystems.
Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten
2015-11-17
Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.
Variability of indoor and outdoor VOC measurements: An analysis using variance components
International Nuclear Information System (INIS)
Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.
2012-01-01
This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can be characterized using multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory.
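The apportionment idea can be sketched with a balanced one-way random-effects layout, a simplified analogue of the study's nested city/residence/season design (all data below are simulated):

    import numpy as np

    def one_way_varcomp(groups):
        # Method-of-moments variance components for a balanced one-way
        # random-effects layout: total variance = between-group + within-group.
        k, n = len(groups), len(groups[0])
        means = np.array([g.mean() for g in groups])
        msb = n * np.var(means, ddof=1)                     # between-group mean square
        msw = np.mean([np.var(g, ddof=1) for g in groups])  # within-group mean square
        return max((msb - msw) / n, 0.0), msw               # (sigma2_between, sigma2_within)

    rng = np.random.default_rng(0)
    # Simulated "residences", each with its own mean concentration
    data = [rng.normal(loc=mu, scale=1.0, size=8) for mu in rng.normal(0, 2, size=20)]
    s2_b, s2_w = one_way_varcomp(data)
    print("between:", s2_b, "within:", s2_w, "share between:", s2_b / (s2_b + s2_w))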
Ivarsdottir, Erna V; Steinthorsdottir, Valgerdur; Daneshpour, Maryam S; Thorleifsson, Gudmar; Sulem, Patrick; Holm, Hilma; Sigurdsson, Snaevar; Hreidarsson, Astradur B; Sigurdsson, Gunnar; Bjarnason, Ragnar; Thorsson, Arni V; Benediktsson, Rafn; Eyjolfsson, Gudmundur; Sigurdardottir, Olof; Olafsson, Isleifur; Zeinali, Sirous; Azizi, Fereidoun; Thorsteinsdottir, Unnur; Gudbjartsson, Daniel F; Stefansson, Kari
2017-09-01
Sequence variants that affect mean fasting glucose levels do not necessarily affect risk for type 2 diabetes (T2D). We assessed the effects of 36 reported glucose-associated sequence variants on between- and within-subject variance in fasting glucose levels in 69,142 Icelanders. The variant in TCF7L2 that increases fasting glucose levels increases between-subject variance (5.7% per allele, P = 4.2 × 10⁻¹⁰), whereas variants in GCK and G6PC2 that increase fasting glucose levels decrease between-subject variance (7.5% per allele, P = 4.9 × 10⁻¹¹ and 7.3% per allele, P = 7.5 × 10⁻¹⁸, respectively). Variants that increase the mean and between-subject variance in fasting glucose levels tend to increase T2D risk, whereas those that increase the mean but reduce the variance do not (r² = 0.61). The variants that increase between-subject variance increase fasting glucose heritability estimates. Intuitively, our results show that increasing both the mean and the variance of glucose levels is more likely to cause pathologically high glucose levels than an increase in the mean offset by a decrease in variance.
Improving computational efficiency of Monte Carlo simulations with variance reduction
International Nuclear Information System (INIS)
Turner, A.; Davis, A.
2013-01-01
CCFE performs Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep-penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows (WW). The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting and hence a long history. When running on parallel clusters, a long history can have a detrimental effect on parallel efficiency: if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown that the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep-penetration effects. (authors)
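A toy sketch of the splitting/roulette mechanics behind weight windows, and of capping the amount of splitting, which is the spirit of the de-optimisation described above (this is not MCNP's actual algorithm or the CCFE adaptation):

    import random

    def apply_weight_window(weight, w_low, w_high, max_split=10):
        # Toy weight-window check for one arriving particle. An extreme
        # arriving weight would normally split into ~weight / w_high copies;
        # capping the number of splits mimics de-optimising the window so a
        # single history cannot grow without bound.
        if weight > w_high:
            n = min(int(weight / w_high) + 1, max_split)
            return [weight / n] * n          # split into n lighter particles
        if weight < w_low:
            if random.random() < weight / w_low:
                return [w_low]               # Russian roulette: survive, reweight
            return []                        # killed
        return [weight]                      # inside the window: unchanged

    print(apply_weight_window(250.0, 0.5, 2.0))  # 10 particles of weight 25, not 126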
A pattern recognition approach to transistor array parameter variance
da F. Costa, Luciano; Silva, Filipi N.; Comin, Cesar H.
2018-06-01
The properties of semiconductor devices, including bipolar junction transistors (BJTs), are known to vary substantially in terms of their parameters. In this work, an experimental approach incorporating pattern recognition concepts and methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), was used to investigate the variation among BJTs belonging to integrated circuits known as transistor arrays. It was shown that a good deal of the device variance can be captured using only two PCA axes. It was also verified that, although substantially small variation of parameters is observed for BJTs from the same array, larger variation arises between BJTs from distinct arrays, suggesting the consideration of device characteristics in more critical analog designs. As a consequence of its supervised nature, LDA was able to provide a substantial separation of the BJTs into clusters corresponding to each transistor array. In addition, the LDA mapping into two dimensions revealed a clear relationship between the considered measurements. Interestingly, a specific mapping suggested by the PCA, involving the total harmonic distortion variation expressed in terms of the average voltage gain, yielded an even better separation between the transistor array clusters. All in all, this work yielded interesting results from both semiconductor engineering and pattern recognition perspectives.
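How much variance two PCA axes capture can be checked in a few lines; the parameter matrix below is simulated stand-in data, not the paper's measurements:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    # Hypothetical parameter matrix: rows = BJT devices, columns = measured
    # parameters; the mixing matrix induces the correlations PCA exploits.
    X = rng.normal(size=(120, 6)) @ rng.normal(size=(6, 6))
    X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize before PCA

    pca = PCA(n_components=2).fit(X)
    print("variance captured by two axes:", pca.explained_variance_ratio_.sum())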
Coupled bias-variance tradeoff for cross-pose face recognition.
Li, Annan; Shan, Shiguang; Gao, Wen
2012-01-01
Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. We then propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be made more stable against a pose difference. Based on this idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff yields considerable improvements in recognition performance.
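A generic numerical illustration of the bias-variance tradeoff that ridge regression trades on (simulated data; this is not the paper's coupled, pose-specific formulation): stronger penalties shrink the variance of the fitted coefficients at the cost of bias.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(2)
    X = rng.normal(size=(40, 5))
    beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # "true" coefficients (assumed)

    for alpha in (0.01, 1.0, 100.0):
        coefs = []
        for _ in range(200):  # refit on repeated noisy training sets
            y = X @ beta + rng.normal(scale=2.0, size=40)
            coefs.append(Ridge(alpha=alpha).fit(X, y).coef_)
        coefs = np.array(coefs)
        bias2 = float(((coefs.mean(axis=0) - beta) ** 2).sum())
        var = float(coefs.var(axis=0).sum())
        print(f"alpha={alpha}: bias^2={bias2:.3f}, variance={var:.3f}")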
Genetic factors explain half of all variance in serum eosinophil cationic protein
DEFF Research Database (Denmark)
Elmose, Camilla; Sverrild, Asger; van der Sluis, Sophie
2014-01-01
with variation in serum ECP and to determine the relative proportion of the variation in ECP due to genetic and non-genetic factors, in an adult twin sample. METHODS: A sample of 575 twins, selected through a proband with self-reported asthma, had serum ECP, lung function, airway responsiveness to methacholine......, exhaled nitric oxide, and skin test reactivity, measured. Linear regression analysis and variance component models were used to study factors associated with variation in ECP and the relative genetic influence on ECP levels. RESULTS: Sex (regression coefficient = -0.107, P ... was statistically non-significant (r = -0.11, P = 0.50). CONCLUSION: Around half of all variance in serum ECP is explained by genetic factors. Serum ECP is influenced by sex, BMI, and airway responsiveness. Serum ECP and airway responsiveness seem not to share genetic variance....
Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.
Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong
2018-03-01
Clinical pathways (CPs) are popular healthcare management tools to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. The flexible semantics of the business process model and notation (BPMN) language has been shown to be useful for the modeling and analysis of complex protocols. However, in practical cases one may want to exploit the fact that CPs often have the form of task-time matrices. This paper presents a new method that parses complex BPMN models and aligns traces to the models heuristically. A case study on variance analysis is undertaken, using a CP from practice and two large sets of patient data from an electronic medical record (EMR) database. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data is feasible, which was not the case for existing analysis techniques. We also provide meaningful insights for further improvement.
The effect of sex on the mean and variance of fitness in facultatively sexual rotifers.
Becks, L; Agrawal, A F
2011-03-01
The evolution of sex is a classic problem in evolutionary biology. While this topic has been the focus of much theoretical work, there is a serious dearth of empirical data. A simple yet fundamental question is how sex affects the mean and variance in fitness. Despite its importance to the theory, this type of data is available for only a handful of taxa. Here, we report two experiments in which we measure the effect of sex on the mean and variance in fitness in the monogonont rotifer, Brachionus calyciflorus. Compared to asexually derived offspring, we find that sexual offspring have lower mean fitness and less genetic variance in fitness. These results indicate that, at least in the laboratory, there are both short- and long-term disadvantages associated with sexual reproduction. We briefly review the other available data and highlight the need for future work. © 2010 The Authors. Journal of Evolutionary Biology © 2010 European Society For Evolutionary Biology.
Variance stabilization for computing and comparing grand mean waveforms in MEG and EEG.
Matysiak, Artur; Kordecki, Wojciech; Sielużycki, Cezary; Zacharias, Norman; Heil, Peter; König, Reinhard
2013-07-01
Grand means of time-varying signals (waveforms) across subjects in magnetoencephalography (MEG) and electroencephalography (EEG) are commonly computed as arithmetic averages and compared between conditions, for example, by subtraction. However, the prerequisite for these operations, homogeneity of the variance of the waveforms in time, and for most common parametric statistical tests also between conditions, is rarely met. We suggest that the heteroscedasticity observed instead results because waveforms may differ by factors and additive terms and follow a mixed model. We propose to apply the asinh-transformation to stabilize the variance in such cases. We demonstrate the homogeneous variance and the normal distributions of data achieved by this transformation using simulated waveforms, and we apply it to real MEG data and show its benefits. The asinh-transformation is thus an essential and useful processing step prior to computing and comparing grand mean waveforms in MEG and EEG. Copyright © 2013 Society for Psychophysiological Research.
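A minimal numeric sketch of the asinh transformation; here the scale parameter is matched to the simulated noise model, whereas in practice the mixed-model parameters would be estimated from the data, as the paper describes:

    import numpy as np

    def asinh_transform(x, a=0.0, b=1.0):
        # Variance-stabilizing asinh: ~log for large values, ~linear near zero.
        return np.arcsinh((x - a) / b)

    rng = np.random.default_rng(3)
    means = np.linspace(0, 50, 6)
    # Simulated waveform amplitudes whose spread grows with the mean
    raw = [m + rng.normal(scale=1 + 0.3 * m, size=1000) for m in means]
    for m, w in zip(means, raw):
        t = asinh_transform(w, b=1 / 0.3)  # scale matched to the simulated noise model
        print(f"mean {m:5.1f}: raw var {np.var(w):8.2f} -> transformed var {np.var(t):.3f}")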
Variance-to-mean method generalized by linear difference filter technique
International Nuclear Information System (INIS)
Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji
1998-01-01
The conventional variance-to-mean method (Feynman-α method) suffers seriously from divergence of the variance under transient conditions such as a reactor power drift. Strictly speaking, the use of the Feynman-α method is therefore restricted to a steady state. To apply the method to more practical situations, it is desirable to overcome this difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae that take the filtering into account. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that a higher-order filter becomes necessary as the rate of power variation increases
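A simplified reading of the filtering idea (the paper derives the exact formulae): the variance of a d-th order difference of independent Poisson-like gate counts is inflated by the central binomial coefficient C(2d, d), so dividing it out restores a drift-resistant Feynman-Y estimate:

    import numpy as np
    from math import comb

    def feynman_y(counts):
        # Conventional variance-to-mean ratio minus one (Feynman-Y).
        return np.var(counts, ddof=1) / np.mean(counts) - 1.0

    def feynman_y_filtered(counts, order=1):
        # Feynman-Y on a d-th order difference of the gate counts; the
        # difference suppresses slow drift, and the filter's variance
        # inflation C(2d, d) for Poisson-like counts is divided out.
        z = np.diff(counts, n=order)
        return np.var(z, ddof=1) / (comb(2 * order, order) * np.mean(counts)) - 1.0

    rng = np.random.default_rng(4)
    drift = np.linspace(100, 160, 5000)      # simulated reactor power drift
    counts = rng.poisson(drift)
    print("raw Y:     ", feynman_y(counts))           # inflated by the drift
    print("filtered Y:", feynman_y_filtered(counts))  # near 0 for pure Poisson counts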
A Random Parameter Model for Continuous-Time Mean-Variance Asset-Liability Management
Directory of Open Access Journals (Sweden)
Hui-qiang Ma
2015-01-01
Full Text Available We consider a continuous-time mean-variance asset-liability management problem in a market with random market parameters; that is, the interest rate, appreciation rates, and volatility rates are considered to be stochastic processes. By using the theories of stochastic linear-quadratic (LQ) optimal control and backward stochastic differential equations (BSDEs), we tackle this problem and derive optimal investment strategies as well as the mean-variance efficient frontier analytically in terms of the solutions of BSDEs. We find that the efficient frontier is still a parabola in a market with random parameters. Compared with existing results, we also find that the liability does not affect the feasibility of the mean-variance portfolio selection problem. However, in an incomplete market with random parameters, the liability cannot be fully hedged.
Estimation variance bounds of importance sampling simulations in digital communication systems
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
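A minimal sketch of why the IS parameter matters for estimation variance, using a Gaussian tail probability as a stand-in for a bit-error-rate integral; the mean shift of the sampling density is the IS parameter to be tuned:

    import numpy as np

    rng = np.random.default_rng(5)
    n, thresh = 100_000, 4.0  # estimate P(X > 4) for X ~ N(0, 1)

    # Direct Monte Carlo: almost all samples miss the tail.
    x = rng.normal(size=n)
    direct = (x > thresh).astype(float)

    # Importance sampling: draw from N(thresh, 1) and reweight by the
    # likelihood ratio phi(y) / phi(y - thresh) = exp(-thresh*y + thresh^2/2).
    y = rng.normal(loc=thresh, size=n)
    weighted = (y > thresh) * np.exp(-thresh * y + thresh**2 / 2)

    for name, s in (("direct", direct), ("IS    ", weighted)):
        print(name, "estimate:", s.mean(), " estimator variance:", s.var(ddof=1) / n)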
Estimation of (co)variances for genomic regions of flexible sizes
DEFF Research Database (Denmark)
Sørensen, Lars P; Janss, Luc; Madsen, Per
2012-01-01
was used. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. CONCLUSIONS: The results show that it is possible to estimate, genome-wide and region-wise genomic (co)variances......BACKGROUND: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome) for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related...... with a common prior distribution for the marker allele substitution effects and estimation of the hyperparameters in this prior distribution from the progeny means data. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level...
Lebigre, Christophe; Arcese, Peter; Reid, Jane M
2013-07-01
Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased
Continuous-Time Mean-Variance Portfolio Selection with Random Horizon
International Nuclear Information System (INIS)
Yu, Zhiyong
2013-01-01
This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right
Continuous-Time Mean-Variance Portfolio Selection with Random Horizon
Energy Technology Data Exchange (ETDEWEB)
Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)
2013-12-15
This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
Variances as order parameter and complexity measure for random Boolean networks
International Nuclear Information System (INIS)
Luque, Bartolo; Ballesteros, Fernando J; Fernandez, Manuel
2005-01-01
Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems
Variances as order parameter and complexity measure for random Boolean networks
Energy Technology Data Exchange (ETDEWEB)
Luque, Bartolo [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Ballesteros, Fernando J [Observatori Astronomic, Universitat de Valencia, Ed. Instituts d' Investigacio, Pol. La Coma s/n, E-46980 Paterna, Valencia (Spain); Fernandez, Manuel [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain)
2005-02-04
Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems.
OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE.
Xie, Xianchao; Kou, S C; Brown, Lawrence
2016-03-01
This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results.
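The best-known parametric member of the shrinkage family is the positive-part James-Stein estimator for a vector of normal means; a sketch under the assumption of known unit noise variance (data simulated):

    import numpy as np

    def james_stein(x, sigma2=1.0):
        # Positive-part James-Stein shrinkage toward zero for a vector of
        # normal means observed with known variance sigma2.
        p = len(x)
        shrink = max(1 - (p - 2) * sigma2 / float(np.sum(x**2)), 0.0)
        return shrink * x

    rng = np.random.default_rng(6)
    theta = rng.normal(scale=0.5, size=50)   # true means (hypothetical)
    x = theta + rng.normal(size=50)          # one noisy observation per mean
    print("MSE raw:        ", np.mean((x - theta) ** 2))
    print("MSE James-Stein:", np.mean((james_stein(x) - theta) ** 2))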
Preusse, Peter; Eckermann, Stephen D.; Offermann, Dirk; Jackman, Charles H. (Technical Monitor)
2000-01-01
Gravity wave temperature fluctuations acquired by the CRISTA instrument are compared to previous estimates of zonal-mean gravity wave temperature variance inferred from the LIMS, MLS and GPS/MET satellite instruments during northern winter. Careful attention is paid to the range of vertical wavelengths resolved by each instrument. Good agreement between CRISTA data and previously published results from LIMS, MLS and GPS/MET are found. Key latitudinal features in these variances are consistent with previous findings from ground-based measurements and some simple models. We conclude that all four satellite instruments provide reliable global data on zonal-mean gravity wave temperature fluctuations throughout the middle atmosphere.
Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.
Zapko-Willmes, Alexandra; Kandler, Christian
2018-01-01
The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.
How does variance in fertility change over the demographic transition?
Hruschka, Daniel J; Burger, Oskar
2016-04-19
Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
How to apply the bivariate parametric tests Student's t and ANOVA in SPSS. A practical case
Directory of Open Access Journals (Sweden)
María-José Rubio-Hurtado
2012-07-01
Full Text Available Parametric tests are a type of statistical significance test that quantifies the association or independence between a quantitative variable and a categorical one. Parametric tests demand certain prerequisites for their application: a Normal distribution of the quantitative variable in the groups being compared, homogeneity of variances in the populations from which the groups are drawn, and a sample size n of no less than 30. When these requirements are not met, it is necessary to resort to non-parametric statistical tests. Parametric tests fall into two classes: the t test (for one sample, or for two related or independent samples) and the ANOVA test (for more than two independent samples).
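The workflow described for SPSS can be sketched equivalently in Python with scipy (groups are simulated; the first two calls check the prerequisites listed above):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    g1, g2, g3 = (rng.normal(loc=m, scale=1.0, size=40) for m in (5.0, 5.5, 6.0))

    # Prerequisites: normality and homogeneity of variances (n >= 30 per group).
    print("Shapiro p:", stats.shapiro(g1).pvalue)
    print("Levene p: ", stats.levene(g1, g2, g3).pvalue)

    # Two independent samples -> Student's t; more than two -> one-way ANOVA.
    print("t-test:", stats.ttest_ind(g1, g2))
    print("ANOVA: ", stats.f_oneway(g1, g2, g3))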
Estimation of the variance of noise in digital imaging for quality control
International Nuclear Information System (INIS)
Soro Bua, M.; Otero Martinez, C.; Vazquez Vazquez, R.; Santamarina Vazquez, F.; Lobato Busto, R.; Luna Vega, V.; Mosquera Sueiro, J.; Sanchez Garcia, M.; Pombar Camean, M.
2011-01-01
In this work, the variance of the pixel values as a function of kerma is estimated for the real, nonlinear response curve of a digital imaging system, without resorting to any approximation of the detector's behaviour. This result is compared with that obtained for the linearized version of the response curve.
CAIXA: a catalogue of AGN in the XMM-Newton archive. III. Excess variance analysis
Ponti, G.; Papadakis, I.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.
2012-01-01
Context. We report on the results of the first XMM-Newton systematic "excess variance" study of all radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study
Impact of Damping Uncertainty on SEA Model Response Variance
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
A new variance stabilizing transformation for gene expression data analysis.
Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor
2013-12-01
In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
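One member of the family, the generalized logarithm, is compact enough to state directly; in the sketch below the offset parameter a is fixed arbitrarily, whereas the paper proposes a strategy for selecting the family member from the data:

    import numpy as np

    def glog(x, a=1.0):
        # Generalized logarithm: ~log(x) for x >> a, finite and roughly
        # linear near zero, stabilizing a quadratic mean-variance relation.
        return np.log((x + np.sqrt(x**2 + a**2)) / 2.0)

    x = np.array([0.0, 0.5, 5.0, 50.0, 5000.0])
    print(glog(x))        # bounded near zero, tracks log for large x
    print(np.log(x[1:]))  # ordinary log for comparison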
Monte Carlo variance reduction approaches for non-Boltzmann tallies
International Nuclear Information System (INIS)
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solutions for the mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depth is required.
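For comparison, the long-standing exact result for species richness under rarefaction (which the paper extends to PD by weighting the branches of the phylogeny) is short enough to state in code; the stem counts below are hypothetical:

    from math import comb

    def expected_richness(counts, n):
        # Exact expected species richness in a random subsample of size n:
        # each species is present with probability 1 - C(N - Ni, n) / C(N, n).
        N = sum(counts)
        return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

    stems = [50, 20, 10, 5, 1, 1]  # hypothetical stem counts per species
    for n in (10, 25, 50):
        print(n, expected_richness(stems, n))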
Motor equivalence and structure of variance: multi-muscle postural synergies in Parkinson's disease.
Falaki, Ali; Huang, Xuemei; Lewis, Mechelle M; Latash, Mark L
2017-07-01
We explored posture-stabilizing multi-muscle synergies with two methods of analysis of multi-element, abundant systems: (1) Analysis of inter-cycle variance; and (2) Analysis of motor equivalence, both quantified within the framework of the uncontrolled manifold (UCM) hypothesis. Data collected in two earlier studies of patients with Parkinson's disease (PD) were re-analyzed. One study compared synergies in the space of muscle modes (muscle groups with parallel scaling of activation) during tasks performed by early-stage PD patients and controls. The other study explored the effects of dopaminergic medication on multi-muscle-mode synergies. Inter-cycle variance and absolute magnitude of the center of pressure displacement across consecutive cycles were quantified during voluntary whole-body sway within the UCM and orthogonal to the UCM space. The patients showed smaller indices of variance within the UCM and motor equivalence compared to controls. The indices were also smaller in the off-drug compared to on-drug condition. There were strong across-subject correlations between the inter-cycle variance within/orthogonal to the UCM and motor equivalent/non-motor equivalent displacements. This study has shown that, at least for cyclical tasks, analysis of variance and analysis of motor equivalence lead to metrics of stability that correlate with each other and show similar effects of disease and medication. These results show, for the first time, intimate links between indices of variance and motor equivalence. They suggest that analysis of motor equivalence, which requires only a handful of trials, could be used broadly in the field of motor disorders to analyze problems with action stability.
Studying Variance in the Galactic Ultra-compact Binary Population
Larson, Shane; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Variance of a product with application to uranium estimation
International Nuclear Information System (INIS)
Lowe, V.W.; Waterman, M.S.
1976-01-01
The U in a container can either be determined directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
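For independent weight and concentration estimates, the product variance has the closed form Var(XY) = σx²σy² + σx²μy² + σy²μx²; a sketch with hypothetical values, checked against simulation:

    import numpy as np

    def product_variance(mx, vx, my, vy):
        # Exact variance of a product of two independent random variables.
        return vx * vy + vx * my**2 + vy * mx**2

    mx, vx = 100.0, 4.0     # weight estimate and its variance (hypothetical)
    my, vy = 0.85, 0.0004   # concentration estimate and its variance (hypothetical)
    print("analytic: ", product_variance(mx, vx, my, vy))

    rng = np.random.default_rng(8)
    w = rng.normal(mx, np.sqrt(vx), 1_000_000)
    c = rng.normal(my, np.sqrt(vy), 1_000_000)
    print("simulated:", np.var(w * c))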
Variance squeezing and entanglement of the XX central spin model
International Nuclear Information System (INIS)
El-Orany, Faisal A A; Abdalla, M Sebawe
2011-01-01
In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider the central atom and its surroundings to be initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are observed in the behavior of all components of the system. The atomic variance can exhibit the revival-collapse phenomenon depending on the value of the detuning parameter.
Variance squeezing and entanglement of the XX central spin model
Energy Technology Data Exchange (ETDEWEB)
El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)
2011-01-21
In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider the central atom and its surroundings to be initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are observed in the behavior of all components of the system. The atomic variance can exhibit the revival-collapse phenomenon depending on the value of the detuning parameter.
Improving precision in gel electrophoresis by stepwisely decreasing variance components.
Schröder, Simone; Brandmüller, Asita; Deng, Xi; Ahmed, Aftab; Wätzig, Hermann
2009-10-15
Many methods have been developed in order to increase selectivity and sensitivity in proteome research. However, gel electrophoresis (GE), which is one of the major techniques in this area, is still known for its often unsatisfactory precision. Relative standard deviations (RSD%) of up to 60% have been reported. In this case the improvement of precision and sensitivity is absolutely essential, particularly for the quality control of biopharmaceuticals. Our work reflects the remarkable and completely irregular changes of the background signal from gel to gel. This irregularity was identified as one of the governing error sources. These background changes can be strongly reduced by using signal detection in the near-infrared (NIR) range. This particular detection method provides the most sensitive approach for conventional CCB (Colloidal Coomassie Blue) stained gels, which is reflected in a total error of just 5% (RSD%). To further investigate variance components in GE, an experimental Plackett-Burman screening design was performed. The influence of seven potential factors on precision was investigated using 10 proteins with different properties analyzed by NIR detection. The results emphasized the individuality of the proteins: completely different factors were identified as significant for each protein. However, of the seven investigated parameters, only four showed a significant effect on some proteins, namely destaining time, staining temperature, changes of detergent additives (SDS and LDS) in the sample buffer, and the age of the gels. As a result, precision can only be improved individually for each protein or protein class. Further understanding of the unique properties of proteins should enable us to improve the precision in gel electrophoresis.
National Research Council Canada - National Science Library
Bunch, Howard M
1989-01-01
This paper is a presentation of the results of a study conducted at a U.S. Navy shipyard during 1987 concerning the relationship between engineering standards and the variances that were occurring in production budget and charged manhours...
D'Acremont, Mathieu; Bossaerts, Peter
2008-12-01
When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
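The two valuation schemes contrasted above can be written side by side; the square-root utility and the risk-aversion coefficient below are illustrative choices, not the article's:

    import numpy as np

    def expected_utility(payoffs, probs, u=np.sqrt):
        # Expected utility: probability-weighted utilities, one state at a time.
        return float(np.sum(probs * u(payoffs)))

    def mean_variance_value(payoffs, probs, risk_aversion=0.01):
        # Mean-variance valuation: expected reward penalized by variance.
        m = float(np.sum(probs * payoffs))
        v = float(np.sum(probs * (payoffs - m) ** 2))
        return m - risk_aversion * v

    payoffs = np.array([0.0, 50.0, 100.0])  # hypothetical gamble
    probs = np.array([0.2, 0.5, 0.3])
    print("EU value:", expected_utility(payoffs, probs))
    print("MV value:", mean_variance_value(payoffs, probs))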
Energy Technology Data Exchange (ETDEWEB)
Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)
2011-07-01
A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)
International Nuclear Information System (INIS)
Christoforou, Stavros; Hoogenboom, J. Eduard
2011-01-01
A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
2014-01-01
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Genetic variance components for residual feed intake and feed ...
African Journals Online (AJOL)
Feeding costs are a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed efficiency aims to reduce feeding costs in beef cattle and thereby improve profitability. This study estimated genetic (co)variances between weaning weight and other production, reproduction ...
Cumulative Prospect Theory, Option Returns, and the Variance Premium
Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver
The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the
Gravity interpretation of dipping faults using the variance analysis method
International Nuclear Information System (INIS)
Essa, Khalid S
2013-01-01
A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from normalized gravity gradient data. The algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths, to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated in a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as the criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using incorrect dip angles. The technique can be applied not only to the true residuals, but also to measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and to two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
Bounds for Tail Probabilities of the Sample Variance
Directory of Open Access Journals (Sweden)
Van Zuijlen M
2009-01-01
Full Text Available We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in auditing in mind, as well as in the processing of environmental data.
Robust estimation of the noise variance from background MR data
Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.
2006-01-01
In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum
Stable limits for sums of dependent infinite variance random variables
DEFF Research Database (Denmark)
Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas
2011-01-01
The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these...
Computing the Expected Value and Variance of Geometric Measures
DEFF Research Database (Denmark)
Staals, Frank; Tsirogiannis, Constantinos
2017-01-01
distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1-ε)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and we show that our...
Estimation of the additive and dominance variances in South African ...
African Journals Online (AJOL)
The objective of this study was to estimate dominance variance for number born alive (NBA), 21- day litter weight (LWT21) and interval between parities (FI) in South African Landrace pigs. A total of 26223 NBA, 21335 LWT21 and 16370 FI records were analysed. Bayesian analysis via Gibbs sampling was used to estimate ...
A Visual Model for the Variance and Standard Deviation
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays shows how the standard deviation is the size of the average square.
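Numerically, the picture reduces to averaging the squared deviations and taking the square root of that average; a quick check with a small dataset chosen for round numbers:

    import numpy as np

    data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
    squares = (data - data.mean()) ** 2  # one square per data point
    print("variance (mean of the squares):", squares.mean())  # population variance
    print("standard deviation (side of the average square):", np.sqrt(squares.mean()))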
Multidimensional adaptive testing with a minimum error-variance criterion
van der Linden, Willem J.
1997-01-01
The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple
Asymptotics of variance of the lattice point count
Czech Academy of Sciences Publication Activity Database
Janáček, Jiří
2008-01-01
Roč. 58, č. 3 (2008), s. 751-758 ISSN 0011-4642 R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : point lattice * variance Subject RIV: BA - General Mathematics Impact factor: 0.210, year: 2008
Estimates of variance components for postweaning feed intake and ...
African Journals Online (AJOL)
Mike
2013-03-09
Mar 9, 2013 ... transformation of RFIp and RDGp to z-scores (mean = 0.0, variance = 1.0) and then ... generation pedigree (n = 9 653) used for this analysis. ..... Nkrumah, J.D., Basarab, J.A., Wang, Z., Li, C., Price, M.A., Okine, E.K., Crews Jr., ...
An observation on the variance of a predicted response in ...
African Journals Online (AJOL)
... these properties and computational simplicity. To avoid overfitting, along with the obvious advantage of having a simpler equation, it is shown that the addition of a variable to a regression equation does not reduce the variance of a predicted response. Key words: Linear regression; Partitioned matrix; Predicted response ...
An entropy approach to size and variance heterogeneity
Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.
2012-01-01
In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity
The Threat of Common Method Variance Bias to Theory Building
Reio, Thomas G., Jr.
2010-01-01
The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…
40 CFR 268.44 - Variance from a treatment standard.
2010-07-01
... complete petition may be requested as needed to send to affected states and Regional Offices. (e) The... provide an opportunity for public comment. The final decision on a variance from a treatment standard will... than) the concentrations necessary to minimize short- and long-term threats to human health and the...
Application of effective variance method for contamination monitor calibration
International Nuclear Information System (INIS)
Goncalez, O.L.; Freitas, I.S.M. de.
1990-01-01
In this report, the calibration of a thin-window Geiger-Muller type monitor for alpha surface contamination is presented. The calibration curve is obtained by the method of least-squares fitting with effective variance. The method and the approach to the calculation are briefly discussed. (author)
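The effective-variance idea, folding the abscissa error into the weights through the local slope and iterating, can be sketched for a straight-line calibration (the data below are hypothetical, not the monitor calibration of the report):

    import numpy as np

    def line_fit_effective_variance(x, y, sx, sy, n_iter=10):
        # Weighted straight-line fit with effective variances: errors in both
        # coordinates enter the weights as 1 / (sy^2 + b^2 * sx^2), and the
        # weights are updated iteratively as the slope b changes.
        b = 0.0
        for _ in range(n_iter):
            w = 1.0 / (sy**2 + (b * sx) ** 2)
            xb, yb = (w * x).sum() / w.sum(), (w * y).sum() / w.sum()
            b = (w * (x - xb) * (y - yb)).sum() / (w * (x - xb) ** 2).sum()
        return yb - b * xb, b  # intercept, slope

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # reference source activities
    y = np.array([0.1, 1.9, 4.2, 5.9, 8.1])  # monitor readings
    sx, sy = np.full(5, 0.05), np.full(5, 0.2)
    print(line_fit_effective_variance(x, y, sx, sy))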
The VIX, the Variance Premium, and Expected Returns
DEFF Research Database (Denmark)
Osterrieder, Daniela Maria; Ventosa-Santaulària, Daniel; Vera-Valdés, Eduardo
2018-01-01
. These problems are eliminated if risk is captured by the variance premium (VP) instead; it is unobservable, however. We propose a 2SLS estimator that produces consistent estimates without observing the VP. Using this method, we find a positive risk–return trade-off and long-run return predictability. Our...
Some asymptotic theory for variance function smoothing | Kibua ...
African Journals Online (AJOL)
Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. > East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22 ...
Variance-optimal hedging for processes with stationary independent increments
DEFF Research Database (Denmark)
Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.
We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...
Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...
African Journals Online (AJOL)
Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of a variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
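As an illustration of the first remedy named above, here is a short Python sketch of a maximum-likelihood Box-Cox transform applied to invented concentration-response data whose noise scales with the mean; the species data sets of the study are not reproduced, and all numbers are illustrative:

```python
import numpy as np
from scipy import stats

# Simulated concentration-response data whose spread shrinks at the
# high-effect concentrations (variance heterogeneity).
rng = np.random.default_rng(1)
conc = np.repeat([0.1, 1.0, 10.0, 100.0], 10)
mean_resp = 100.0 / (1.0 + (conc / 5.0) ** 1.2)      # logistic decline
resp = mean_resp + rng.normal(0, 0.15 * mean_resp)   # noise proportional to mean

# Box-Cox transform of the (strictly positive) response; lambda is
# chosen by maximum likelihood inside scipy.stats.boxcox.
resp_bc, lam = stats.boxcox(resp)
print(f"estimated lambda: {lam:.2f}")

# Per-group variances are far more homogeneous after the transform:
for c in np.unique(conc):
    print(c, resp[conc == c].var(), resp_bc[conc == c].var())
```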
Molecular variance of the Tunisian almond germplasm assessed by ...
African Journals Online (AJOL)
The genetic variance analysis of 82 almond (Prunus dulcis Mill.) genotypes was performed using ten genomic simple sequence repeats (SSRs). A total of 50 genotypes from Tunisia including local landraces identified while prospecting the different sites of Bizerte and Sidi Bouzid (Northern and central parts) which are the ...
Starting design for use in variance exchange algorithms | Iwundu ...
African Journals Online (AJOL)
A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...
A Hold-out method to correct PCA variance inflation
DEFF Research Database (Denmark)
Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai
2012-01-01
In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...
Heterogeneity of variance and its implications on dairy cattle breeding
African Journals Online (AJOL)
Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...
Effects of Diversification of Assets on Mean and Variance | Jayeola ...
African Journals Online (AJOL)
Diversification is a means of minimizing risk and maximizing returns by investing in a variety of assets of the portfolio. This paper is written to determine the effects of diversification of three types of assets: uncorrelated, perfectly correlated and perfectly negatively correlated assets on mean and variance. To go about this, ...
Perspective projection for variance pose face recognition from camera calibration
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the remaining poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
On zero variance Monte Carlo path-stretching schemes
International Nuclear Information System (INIS)
Lux, I.
1983-01-01
A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game
Hedging with stock index futures: downside risk versus the variance
Brouwer, F.; Nat, van der M.
1995-01-01
In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to
The variance quadtree algorithm: use for spatial sampling design
Minasny, B.; McBratney, A.B.; Walvoort, D.J.J.
2007-01-01
Spatial sampling schemes are mainly developed to determine sampling locations that can cover the variation of environmental properties in the area of interest. Here we propose the variance quadtree algorithm for sampling in an area with prior information represented as ancillary or secondary
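The record does not spell out the algorithm, but the name suggests a quadtree that keeps splitting wherever the ancillary data are most variable, then samples each leaf. A speculative Python sketch of that idea; the published split criterion and stopping rule may differ:

```python
import numpy as np

def variance_quadtree(grid, max_leaves=16):
    """Recursively split the leaf cell with the highest within-cell variance
    of the ancillary grid into four quadrants; returns leaf bounding boxes
    (row0, row1, col0, col1). One sample per leaf then covers variable
    areas more densely than quiet ones."""
    leaves = [(0, grid.shape[0], 0, grid.shape[1])]
    while len(leaves) < max_leaves:
        # pick the splittable leaf with the largest variance
        stats = [(grid[r0:r1, c0:c1].var(), i)
                 for i, (r0, r1, c0, c1) in enumerate(leaves)
                 if r1 - r0 > 1 and c1 - c0 > 1]
        if not stats:
            break
        _, i = max(stats)
        r0, r1, c0, c1 = leaves.pop(i)
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        leaves += [(r0, rm, c0, cm), (r0, rm, cm, c1),
                   (rm, r1, c0, cm), (rm, r1, cm, c1)]
    return leaves

# invented ancillary surface with a smooth gradient
ancillary = np.add.outer(np.linspace(0, 1, 64)**2, np.linspace(0, 1, 64))
print(len(variance_quadtree(ancillary)))
```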
Properties of realized variance under alternative sampling schemes
Oomen, R.C.A.
2006-01-01
This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative
Variance component and heritability estimates of early growth traits ...
African Journals Online (AJOL)
as selection criteria for meat production in sheep (Anon, 1970; Olson et al., 1976; Lasslo et al., 1985; Badenhorst et al., 1991). If these traits are to be included in a breeding programme, accurate estimates of breeding values will be needed to optimize selection programmes. This requires a knowledge of variance and co-...
Variances in consumer prices of selected food items among ...
African Journals Online (AJOL)
The study focused on the determination of variances among consumer prices of rice (local white), beans (white) and garri (yellow) in Watts, Okurikang and 8 Miles markets in the southern zone of Cross River State. A completely randomized design was used to test the research hypothesis. Comparing the consumer prices of rice, ...
Age Differences in the Variance of Personality Characteristics
Czech Academy of Sciences Publication Activity Database
Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.
2016-01-01
Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016
Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling
Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng
2016-01-01
Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath. PMID:27428974
Toward a more robust variance-based global sensitivity analysis of model outputs
Energy Technology Data Exchange (ETDEWEB)
Tong, C
2007-10-15
Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
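For orientation, the first-order index S_i = Var(E[Y|X_i]) / Var(Y) can be estimated with a plain pick-and-freeze Monte Carlo scheme. The sketch below is the textbook Sobol'/Saltelli-style estimator, not the replicated-LHS or adaptive variants the report develops, and the toy model is invented:

```python
import numpy as np

def sobol_first_order(model, dim, n=100_000, rng=None):
    """Pick-and-freeze Monte Carlo estimate of the first-order Sobol'
    indices S_i = Var(E[Y|X_i]) / Var(Y), inputs uniform on [0, 1]^dim."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    yA = model(A)
    yB = model(B)
    var_y = yA.var()
    s = np.empty(dim)
    for i in range(dim):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # freeze coordinate i at A's values
        s[i] = np.mean(yA * (model(ABi) - yB)) / var_y
    return s

# invented toy model: input 0 dominates, input 2 is inert
f = lambda X: np.sin(2*np.pi*X[:, 0]) + 0.3*np.sin(2*np.pi*X[:, 1])**2
print(sobol_first_order(f, dim=3))
```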
Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence
Cheminet, Adam; Blanquart, Guillaume
2011-11-01
Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including 2nd and 3rd statistical moments, spectra, and probability density functions of the scalar field have been analyzed. Using this technique, we performed constant density and variable density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
Ji, Luyan; Pourtois, Gilles
2018-04-20
We examined the processing capacity and the role of emotion variance in ensemble representation for multiple facial expressions shown concurrently. A standard set size manipulation was used, whereby the sets consisted of 4, 8, or 16 morphed faces each uniquely varying along a happy-angry continuum (Experiment 1) or a neutral-happy/angry continuum (Experiments 2 & 3). Across the three experiments, we reduced the amount of emotion variance in the sets to explore the boundaries of this process. Participants judged the perceived average emotion from each set on a continuous scale. We computed and compared objective and subjective difference scores, using the morph units and post-experiment ratings, respectively. Results of the subjective scores were more consistent than the objective ones across the first two experiments where the variance was relatively large, and revealed each time that increasing set size led to a poorer averaging ability, suggesting capacity limitations in establishing ensemble representations for multiple facial expressions. However, when the emotion variance in the sets was reduced in Experiment 3, both subjective and objective scores remained unaffected by set size, suggesting that the emotion averaging process was unlimited in these conditions. Collectively, these results suggest that extracting mean emotion from a set composed of multiple faces depends on both structural (attentional) and stimulus-related effects. Copyright © 2018 Elsevier Ltd. All rights reserved.
Energy and variance budgets of a diffusive staircase with implications for heat flux scaling
Hieronymus, M.; Carpenter, J. R.
2016-02-01
Diffusive convection, the mode of double-diffusive convection that occurs when both temperature and salinity increase with increasing depth, is commonplace throughout the high-latitude oceans, and diffusive staircases constitute an important heat transport process in the Arctic Ocean. Heat and buoyancy fluxes through these staircases are often estimated using flux laws deduced either from laboratory experiments, or from simplified energy or variance budgets. We have done direct numerical simulations of double-diffusive convection at a range of Rayleigh numbers and quantified the energy and variance budgets in detail. This allows us to compare the fluxes in our simulations to those derived using known flux laws and to quantify how well the simplified energy and variance budgets approximate the full budgets. The fluxes are found to agree well with earlier estimates at high Rayleigh numbers, but we find large deviations at low Rayleigh numbers. The close ties between the heat and buoyancy fluxes and the budgets of thermal variance and energy have been utilized to derive heat flux scaling laws in the field of thermal convection. The result is the so-called GL-theory, which has been found to give accurate heat flux scaling laws in a very wide parameter range. Diffusive convection has many similarities to thermal convection, and an extension of the GL-theory to diffusive convection is also presented and its predictions are compared to the results from our numerical simulations.
Directory of Open Access Journals (Sweden)
Monika eFleischhauer
2013-09-01
Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT’s stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to
Variance risk premia in CO2 markets: A political perspective
International Nuclear Information System (INIS)
Reckling, Dennis
2016-01-01
The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO2 assets, but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regards to gaining investors’ confidence in the market are being reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO2 markets. •Clear policy implications for regulators to most effectively cap the overall CO2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO2 asset returns by using variance risk premia.
Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition
Wang, H.; Alkhalifah, Tariq Ali
2017-01-01
The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause error in the source location. To overcome such a problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving in both space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which reveals that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.
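The windowed-variance step itself is inexpensive to express. A minimal Python sketch, assuming the back-propagated wavefield is held as a (time, depth, lateral) array; the array layout, window size, and toy event are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def max_variance_image(u, window=(5, 3, 3)):
    """u: back-propagated wavefield sampled as (t, z, x).
    Local variance via E[u^2] - E[u]^2 over a moving window; the imaged
    source is where the windowed variance peaks over space and time."""
    mean = uniform_filter(u, size=window)
    mean_sq = uniform_filter(u * u, size=window)
    var = mean_sq - mean**2
    image = var.max(axis=0)                 # collapse time: keep peak variance
    return image, np.unravel_index(np.argmax(image), image.shape)

u = np.random.default_rng(2).normal(size=(64, 32, 32))   # noisy background
u[30:34, 16, 16] += 5.0                                   # a localized event
img, loc = max_variance_image(u)
print(loc)
```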
Pressley, Joanna; Troyer, Todd W
2011-05-01
The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
Relative variance of the mean-squared pressure in multimode media: rehabilitating former approaches.
Monsef, Florian; Cozza, Andrea; Rodrigues, Dominique; Cellard, Patrick; Durocher, Jean-Noel
2014-11-01
The commonly accepted model for the relative variance of transmission functions in room acoustics, derived by Weaver, aims at including the effects of correlation between eigenfrequencies. This model is based on an analytical expression of the relative variance derived by means of an approximated correlation function. The relevance of the approximation used for modeling such correlation is questioned here. Weaver's model was motivated by the fact that earlier models derived by Davy and Lyon assumed independent eigenfrequencies and led to an overestimation with respect to relative variances found in practice. It is shown here that this overestimation is due to an inadequate truncation of the modal expansion, and to an improper choice of the frequency range over which ensemble averages of the eigenfrequencies is defined. An alternative definition is proposed, settling the inconsistency; predicted relative variances are found to be in good agreement with experimental data. These results rehabilitate former approaches that were based on independence assumptions between eigenfrequencies. Some former studies showed that simpler correlation models could be used to predict the statistics of some field-related physical quantity at low modal overlap. The present work confirms that this is also the case when dealing with transmission functions.
Complementary responses to mean and variance modulations in the perfect integrate-and-fire model.
Pressley, Joanna; Troyer, Todd W
2009-07-01
In the perfect integrate-and-fire model (PIF), the membrane voltage is proportional to the integral of the input current since the time of the previous spike. It has been shown that the firing rate within a noise free ensemble of PIF neurons responds instantaneously to dynamic changes in the input current, whereas in the presence of white noise, model neurons preferentially pass low frequency modulations of the mean current. Here, we prove that when the input variance is perturbed while holding the mean current constant, the PIF responds preferentially to high frequency modulations. Moreover, the linear filters for mean and variance modulations are complementary, adding exactly to one. Since changes in the rate of Poisson distributed inputs lead to proportional changes in the mean and variance, these results imply that an ensemble of PIF neurons transmits a perfect replica of the time-varying input rate for Poisson distributed input. A more general argument shows that this property holds for any signal leading to proportional changes in the mean and variance of the input current.
A load factor based mean-variance analysis for fuel diversification
Energy Technology Data Exchange (ETDEWEB)
Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)
2009-03-15
Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1) 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model in providing useful insights. (author)
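Underneath such studies sits the classical equality-constrained Markowitz program. A minimal Python sketch of the minimum-variance generation mix for a target expected cost, without the load-factor decomposition the paper adds; the fuel names and numbers are invented for illustration:

```python
import numpy as np

def min_variance_mix(Sigma, mu, target):
    """Minimum-variance weights subject to sum(w) = 1 and mu @ w = target,
    solved via the Lagrangian (KKT) linear system; no inequality
    constraints, so short 'positions' are not ruled out."""
    n = len(mu)
    ones = np.ones(n)
    # KKT system: [2*Sigma mu 1; mu^T 0 0; 1^T 0 0] [w; l1; l2] = [0; target; 1]
    K = np.zeros((n + 2, n + 2))
    K[:n, :n] = 2 * Sigma
    K[:n, n], K[n, :n] = mu, mu
    K[:n, n + 1], K[n + 1, :n] = ones, ones
    rhs = np.concatenate([np.zeros(n), [target, 1.0]])
    return np.linalg.solve(K, rhs)[:n]

# illustrative fuel cost data: mean $/MWh and covariance of cost
mu = np.array([45.0, 60.0, 30.0])            # e.g. coal, gas, nuclear
Sigma = np.array([[25.0, 5.0, 1.0],
                  [5.0, 100.0, 2.0],
                  [1.0, 2.0, 9.0]])
print(min_variance_mix(Sigma, mu, target=45.0))
```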
Stock, Amanda J; Campitelli, Brandon E; Stinchcombe, John R
2014-08-19
Clinal variation is commonly interpreted as evidence of adaptive differentiation, although clines can also be produced by stochastic forces. Understanding whether clines are adaptive therefore requires comparing clinal variation to background patterns of genetic differentiation at presumably neutral markers. Although this approach has frequently been applied to single traits at a time, we have comparatively fewer examples of how multiple correlated traits vary clinally. Here, we characterize multivariate clines in the Ivyleaf morning glory, examining how suites of traits vary with latitude, with the goal of testing for divergence in trait means that would indicate past evolutionary responses. We couple this with analysis of genetic variance in clinally varying traits in 20 populations to test whether past evolutionary responses have depleted genetic variance, or whether genetic variance declines approaching the range margin. We find evidence of clinal differentiation in five quantitative traits, with little evidence of isolation by distance at neutral loci that would suggest non-adaptive or stochastic mechanisms. Within and across populations, the traits that contribute most to population differentiation and clinal trends in the multivariate phenotype are genetically variable as well, suggesting that a lack of genetic variance will not cause absolute evolutionary constraints. Our data are broadly consistent with theoretical predictions of polygenic clines in response to shallow environmental gradients. Ecologically, our results are consistent with past findings of natural selection on flowering phenology, presumably due to season-length variation across the range. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores
Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.
2015-12-01
Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
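The per-slice variance statistic is a one-liner once the scan is an array. A minimal Python sketch, assuming the XCT volume is stored as a (depth, y, x) array of Hounsfield units; the dimensions and HU values are invented to mimic the two example slices:

```python
import numpy as np

def slice_variance(volume_hu):
    """volume_hu: XCT volume as (depth, y, x) Hounsfield units.
    Returns the per-slice variance; bioturbated or mineral-bearing
    layers show up as spikes against the undisturbed background."""
    return volume_hu.reshape(volume_hu.shape[0], -1).var(axis=1)

rng = np.random.default_rng(3)
core = rng.normal(1450, 25, size=(200, 64, 64))   # quiet matrix ~1406-1497 HU
core[120, 10:20, 10:20] = 1920                    # high-density mineral inclusion
v = slice_variance(core)
print(int(v[:100].mean()), int(v[120]))           # background vs disturbed slice
```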
Empirical single sample quantification of bias and variance in Q-ball imaging.
Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A
2018-02-06
The bias and variance of high angular resolution diffusion imaging metrics have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques may be used to estimate them. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
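The SIMEX recipe transfers to any metric with a known noise level: re-estimate under synthetically inflated noise, fit a trend in the inflation factor, and extrapolate back to the noise-free case. A minimal Python sketch on a deliberately simple statistic (the sample standard deviation, which noise inflates); the quadratic extrapolant and noise grid are conventional choices, not those of the paper:

```python
import numpy as np

def simex_bias(estimate, data, sigma0, lambdas=(0.5, 1.0, 1.5, 2.0),
               reps=200, rng=None):
    """SIMEX: re-estimate after inflating the known noise level sigma0 by
    sqrt(lam), fit a quadratic in lam, extrapolate to lam = -1 ("no noise").
    Returns (noise-free extrapolation, bias = observed - extrapolated)."""
    rng = rng or np.random.default_rng(0)
    lam = np.array([0.0, *lambdas])
    means = [estimate(data)]
    for l in lambdas:
        sims = [estimate(data + rng.normal(0, np.sqrt(l) * sigma0, data.shape))
                for _ in range(reps)]
        means.append(np.mean(sims))
    coeffs = np.polyfit(lam, means, 2)
    extrapolated = np.polyval(coeffs, -1.0)
    return extrapolated, means[0] - extrapolated

# toy: the sample SD of a noisy signal is biased upward by the noise
truth = np.sin(np.linspace(0, 6, 500))
observed = truth + np.random.default_rng(4).normal(0, 0.3, 500)
print(simex_bias(np.std, observed, sigma0=0.3))
```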
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2017-01-01
In photoacoustic (PA) imaging, the Delay-and-Sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely Delay-Multiply-and-Sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, so-called Minimum Variance-Based D...
Measuring kinetics of complex single ion channel data using mean-variance histograms.
Patlak, J B
1993-07-01
The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance
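The construction described here takes only a few lines of array code. A minimal Python sketch of the mean-variance histogram, with an invented two-level toy trace standing in for a real channel recording:

```python
import numpy as np

def mean_variance_histogram(trace, n_window, bins=100):
    """Slide an N-sample window over the digitized current trace and
    collect (window mean, window variance) pairs into a 2-D histogram.
    Defined conductance levels appear as low-variance clusters."""
    win = np.lib.stride_tricks.sliding_window_view(trace, n_window)
    means = win.mean(axis=1)
    variances = win.var(axis=1)
    H, mean_edges, var_edges = np.histogram2d(means, variances, bins=bins)
    return H, mean_edges, var_edges

# toy trace: closed (0 pA) and open (-2 pA) dwells plus recording noise
rng = np.random.default_rng(5)
levels = np.repeat(rng.choice([0.0, -2.0], size=200), 50)
trace = levels + rng.normal(0, 0.3, levels.size)
H, me, ve = mean_variance_histogram(trace, n_window=10)
print(H.shape)
```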
Adaptation to Variance of Stimuli in Drosophila Larva Navigation
Wolk, Jason; Gepner, Ruben; Gershow, Marc
In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
Compounding approach for univariate time series with nonstationary variances
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
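The decomposition into local variances that drives the compounding ansatz is easy to reproduce. A minimal Python sketch on an invented nonstationary signal; the window length and the volatility process are illustrative only, and which parametric family fits the local variances is exactly the empirical question the paper addresses:

```python
import numpy as np

def local_variances(series, window):
    """Split the series into non-overlapping windows and return the
    local (per-window) variance estimates whose distribution is used
    in the compounding ansatz."""
    n = (len(series) // window) * window
    return series[:n].reshape(-1, window).var(axis=1)

# toy nonstationary signal: Gaussian with a slowly wandering volatility
rng = np.random.default_rng(6)
vol = np.exp(0.3 * np.cumsum(rng.normal(0, 0.05, 10_000)))
x = rng.normal(0, 1, 10_000) * vol
lv = local_variances(x, window=100)
print(lv.mean(), lv.std())   # spread of local variances -> heavy-tailed aggregate
```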
Variance inflation in high dimensional Support Vector Machines
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Replica approach to mean-variance portfolio optimization
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the number of assets and T the length of the time series used to estimate the covariance matrix. The optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
Variance reduction methods applied to deep-penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
Directory of Open Access Journals (Sweden)
Abdulqader Jighly
2018-02-01
Full Text Available Whole genome duplication (WGD) is an evolutionary phenomenon, which causes significant changes to genomic structure and trait architecture. In recent years, a number of studies decomposed the additive genetic variance explained by different sets of variants. However, they investigated diploid populations only and none of the studies examined any polyploid organism. In this research, we extended the application of this approach to polyploids, to differentiate the additive variance explained by the three subgenomes and seven sets of homoeologous chromosomes in synthetic allohexaploid wheat (SHW) to gain a better understanding of trait evolution after WGD. Our SHW population was generated by crossing improved durum parents (Triticum turgidum; 2n = 4x = 28, AABB subgenomes) with the progenitor species Aegilops tauschii (syn. Ae. squarrosa, T. tauschii; 2n = 2x = 14, DD subgenome). The population was phenotyped for 10 fungal/nematode resistance traits as well as two abiotic stresses. We showed that the wild D subgenome dominated the additive effect and this dominance affected the A more than the B subgenome. We provide evidence that this dominance was not inflated by population structure, relatedness among individuals or by longer linkage disequilibrium blocks observed in the D subgenome within the population used for this study. The cumulative size of the three homoeologs of the seven chromosomal groups showed a weak but significant positive correlation with their cumulative explained additive variance. Furthermore, an average of 69% for each chromosomal group's cumulative additive variance came from one homoeolog that had the highest explained variance within the group across all 12 traits. We hypothesize that structural and functional changes during diploidization may explain chromosomal group relations as allopolyploids keep balanced dosage for many genes. Our results contribute to a better understanding of trait evolution mechanisms in polyploidy.
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
International Nuclear Information System (INIS)
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of ''real'' particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a ''black box''. There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
Spatial analysis based on variance of moving window averages
Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J
2006-01-01
A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
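Because VMWA needs nothing beyond moving averages, it really can live in a spreadsheet; in array code it is equally short. A minimal Python sketch on an invented clustered incidence map (the cluster size, grid size, and incidence rate are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def vmwa_curve(grid, sizes):
    """Variance of moving window averages (VMWA) for a 2-D incidence grid:
    for each window size, smooth with a moving average and record the
    variance of the smoothed field. How this curve decays with window
    size reflects the spatial cluster size."""
    return [uniform_filter(grid.astype(float), size=s).var() for s in sizes]

# toy disease-incidence map with ~8-cell clusters
rng = np.random.default_rng(7)
coarse = rng.random((8, 8)) < 0.3
grid = np.kron(coarse, np.ones((8, 8)))     # blow up to a 64x64 clustered map
print(np.round(vmwa_curve(grid, sizes=[2, 4, 8, 16, 32]), 4))
```

The variance of the smoothed field stays high while the window is smaller than a cluster and drops sharply once the window exceeds it, which is what lets the method recover the mean cluster size.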
Efficient Scores, Variance Decompositions and Monte Carlo Swindles.
1984-08-28
Then a version of Pythagoras' theorem gives the variance decomposition (6.1): var_{P0}(T) = var_{P0}(S) + var_{P0}(T − S). One way to see this is to note ... complete sufficient statistics for (β, σ), and that the standardized residuals (y − Xβ̂)/σ̂ are ancillary. Basu's sufficiency-ancillarity theorem
The mean and variance of phylogenetic diversity under rarefaction
Nipperess, David A.; Matsen, Frederick A.
2013-01-01
Phylogenetic diversity (PD) depends on sampling intensity, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for t...
On mean reward variance in semi-Markov processes
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2005-01-01
Roč. 62, č. 3 (2005), s. 387-397 ISSN 1432-2994 R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.259, year: 2005
Analytic solution to variance optimization with no short positions
Kondor, Imre; Papp, Gábor; Caccioli, Fabio
2017-12-01
We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ...
Directory of Open Access Journals (Sweden)
Maria S. Prokhorova
2014-01-01
Full Text Available The article deals with the problem of finding the optimal securities portfolio using convolutions of the expected portfolio return and the portfolio variance. The value of the risk coefficient at which the problem of maximizing the variance-limited yield is equivalent to maximizing a linear convolution of the criteria «expected return-variance» is obtained. An automated method for finding the optimal portfolio is proposed, on the basis of which the results of the study are demonstrated.
International Nuclear Information System (INIS)
Melkonyan, S.V.
2012-01-01
The problem of electron mobility variance is discussed. It is established that in equilibrium semiconductors the mobility variance is infinite. It is revealed that the cause of the infinite mobility variance is the threshold of phonon emission. The electron-phonon interaction theory in the presence of an electric field is developed. A new mechanism of electron scattering, called electron-phonon field-induced tunnel (FIT) scattering, is observed. The effect of electron-phonon FIT scattering is explained in terms of penetration of the electron wave function into the semiconductor band gap in the presence of an electric field. New and more general expressions for the electron-non-polar optical phonon scattering probability and relaxation time are obtained. The results show that FIT transitions are of principal importance for the mobility fluctuation theory: the mobility variance becomes finite.
Modality-Driven Classification and Visualization of Ensemble Variance
Energy Technology Data Exchange (ETDEWEB)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
A comparison between temporal and subband minimum variance adaptive beamforming
Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis
2014-03-01
This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions with one transducer element transmitting and all 128 elements receiving each time provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm and those of PSL are -42 dB and -46 dB for temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low resolution image formed by a single emission, 0.44 × 10^9 calculations per second are required for the temporal approach. The same numbers for the subband approach are 0.62 × 10^9 for the point and 1.33 × 10^9 for the cyst phantom. The comparison demonstrates similar
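Both implementations share the same core step: per-pixel Capon weights computed from the covariance of the delayed channel data. A minimal Python sketch of that step with diagonal loading, assuming delay-aligned data so the steering vector is all ones; the array sizes and loading factor are illustrative, not those of the paper:

```python
import numpy as np

def mv_weights(X, loading=1e-2):
    """Minimum-variance (Capon) apodization for already-delayed channel
    data X (channels x snapshots): w = R^-1 a / (a^H R^-1 a) with a = 1
    (steering after delay alignment) and diagonal loading for stability."""
    n = X.shape[0]
    R = (X @ X.conj().T) / X.shape[1]                 # sample covariance
    R += loading * np.trace(R).real / n * np.eye(n)   # diagonal loading
    a = np.ones(n)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

rng = np.random.default_rng(8)
X = rng.normal(size=(32, 16)) + 1.0   # 32 elements, 16 temporal snapshots
w = mv_weights(X)
print(w.sum())                        # unit gain toward the focal point
```

The temporal and subband variants differ mainly in where the covariance R is estimated (time samples versus frequency bins), which is why their image-quality metrics in the record are so close.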
Structural changes and out-of-sample prediction of realized range-based variance in the stock market
Gong, Xu; Lin, Boqiang
2018-03-01
This paper aims to examine the effects of structural changes on forecasting the realized range-based variance in the stock market. Considering structural changes in variance in the stock market, we develop the HAR-RRV-SC model on the basis of the HAR-RRV model. Subsequently, the HAR-RRV and HAR-RRV-SC models are used to forecast the realized range-based variance of the S&P 500 Index. We find that there are many structural changes in variance in the U.S. stock market, and the period after the financial crisis contains more structural change points than the period before the financial crisis. The out-of-sample results show that the HAR-RRV-SC model significantly outperforms the HAR-RRV model when they are employed to forecast the 1-day, 1-week, and 1-month realized range-based variances, which means that structural changes can improve out-of-sample prediction of realized range-based variance. The out-of-sample results remain robust across the alternative rolling fixed-window, the alternative threshold value in the ICSS algorithm, and the alternative benchmark models. More importantly, we believe that considering structural changes can help improve the out-of-sample performances of most other existing HAR-RRV-type models in addition to the models used in this paper.
Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee
2016-06-01
Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.
DEFF Research Database (Denmark)
Chown, Steven L.; Jumbam, Keafon R.; Sørensen, Jesper Givskov
2009-01-01
used during assessments of critical thermal limits to activity. To date, the focus of work has almost exclusively been on the effects of rate variation on mean values of the critical limits. 2. If the rate of temperature change used in an experimental trial affects not only the trait mean but also its...... this is the case for critical thermal limits using a population of the model species Drosophila melanogaster and the invasive ant species Linepithema humile. 4. We found that effects of the different rates of temperature change are variable among traits and species. However, in general, different rates...... of temperature change resulted in different phenotypic variances and different estimates of heritability, presuming that genetic variance remains constant. We also found that different rates resulted in different conclusions regarding the responses of the species to acclimation, especially in the case of L...
Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance
Directory of Open Access Journals (Sweden)
Liyun Zhuang; Yepeng Guan
2017-01-01
Full Text Available This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE, which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE. Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image.
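A minimal sketch of the MVSIHE idea in numpy, with the caveat that the abstract does not give the exact split rule; splitting the histogram at m − s, m and m + s (luminance mean m, standard deviation s) and equalizing each segment within its own range is an assumption:

```python
import numpy as np

def mvsihe(gray, levels=256):
    """Four-segment histogram equalization keyed to the mean and variance.

    gray: float array with intensities in [0, levels - 1]; assumes the
    typical case where mean - std stays above zero.
    """
    m, s = gray.mean(), gray.std()
    edges = [0.0, m - s, m, m + s, float(levels - 1)]   # assumed split rule
    out = np.zeros(gray.shape)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (gray >= lo) & (gray <= hi)
        if mask.any():
            hist, bins = np.histogram(gray[mask], bins=64, range=(lo, hi))
            cdf = hist.cumsum() / hist.sum()
            # equalize within [lo, hi] so the segments concatenate seamlessly
            out[mask] = lo + (hi - lo) * np.interp(gray[mask], bins[1:], cdf)
    return out
```

The brightness-preserving final step described in the abstract (integrating the processed image with the input) would follow, e.g. out = alpha * out + (1 - alpha) * gray.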
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization technique named swarm-based mean-variance mapping optimization (MVMOS), an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, including 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently implemented for solving the economic dispatch problem.
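The MVMOS metaheuristic itself is not reproduced here, but the ED problem it solves has a compact classical baseline: with quadratic costs C_i(P) = a_i + b_i P + c_i P², the optimum equalizes incremental costs across unconstrained units. A hedged lambda-iteration sketch of that baseline (unit data and the bracket are illustrative):

```python
def economic_dispatch(units, demand, tol=1e-6):
    """Lambda iteration for quadratic-cost economic dispatch.

    units:  list of (a, b, c, pmin, pmax) per generation unit
    demand: total load to be met
    Returns outputs P_i with sum(P_i) ~= demand at equal incremental cost.
    """
    lo, hi = 0.0, 1e4                       # bracket for incremental cost lambda
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        # optimal P_i given lambda: dC_i/dP = b + 2cP = lam, clipped to limits
        P = [min(max((lam - b) / (2 * c), pmin), pmax)
             for a, b, c, pmin, pmax in units]
        if sum(P) < demand:
            lo = lam                        # need more output: raise lambda
        else:
            hi = lam
    return P

# economic_dispatch([(500, 5.3, 0.004, 200, 450), (400, 5.5, 0.006, 150, 350)], 700)
```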
Causality in variance and the type of traders in crude oil futures
International Nuclear Information System (INIS)
Bhar, Ramaprasad; Hamori, Shigeyuki
2005-01-01
This article examines the causal relationship and, in particular, informational dependence between crude oil futures return and trading volume, with daily data over a ten-year period and a recent econometric methodology. The two-step procedure developed by Cheung and Ng (1996) [Cheung, Y.W., Ng, L.K., 1996. A causality-in-variance test and its applications to financial market prices, Journal of Econometrics 72, 33-48.] is robust to distributional assumptions and does not depend on simultaneous modeling of the two variables. We find causality running from return to volume only at higher-order lags, in the mean as well as in the conditional variance. Our result is not in complete agreement with several earlier studies in this area. However, it does indicate mild support for the noise traders' hypothesis in the crude oil futures market. (Author)
Determinations of dose mean of specific energy for conventional x-rays by variance-measurements
International Nuclear Information System (INIS)
Forsberg, B.; Jensen, M.; Lindborg, L.; Samuelson, G.
1978-05-01
The dose mean value (zeta) of specific energy of a single event distribution is related to the variance of a multiple event distribution in a simple way. It is thus possible to determine zeta from measurements at high dose rates through observations of the variations in the ionization current from, for instance, an ionization chamber, if other parameters contribute negligibly to the total variance. With this method it has earlier been possible to obtain results down to about 10 nm in a beam of Co60-γ rays, which is one order of magnitude smaller than the sizes obtainable with the traditional technique. This advantage, together with the suggestion that zeta could be an important parameter in radiobiology, motivates further studies of the applications of the technique. So far only data from measurements in beams of a radioactive nuclide have been reported. This paper contains results from measurements in a highly stabilized X-ray beam. The preliminary analysis shows that the variance technique has given reasonable results for object sizes in the region of 0.08 μm to 20 μm (100 kV, 1.6 mm Al, HVL 0.14 mm Cu). The results were obtained with a proportional counter except for the larger object sizes, where an ionization chamber was used. The measurements were performed at dose rates between 1 Gy/h and 40 Gy/h. (author)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2017-01-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the shortcomings of DAS, providing higher image quality. However, the resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra...
Non-Linear Transaction Costs Inclusion in Mean-Variance Optimization
Directory of Open Access Journals (Sweden)
Christian Johannes Zimmer
2005-12-01
Full Text Available In this article we propose a new way to include transaction costs in a mean-variance portfolio optimization. We consider brokerage fees, bid/ask spread and the market impact of the trade. A pragmatic algorithm is proposed, which approximates the optimal portfolio, and we show that it converges in the absence of restrictions. Using Brazilian financial market data we compare our approximation algorithm with the results of a non-linear optimizer.
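A minimal sketch of the kind of objective involved, assuming proportional costs only (the article additionally models bid/ask spread and market impact, and its algorithm differs from the generic solver used here):

```python
import numpy as np
from scipy.optimize import minimize

def rebalance(mu, Sigma, w0, risk_aversion=3.0, fee=0.002):
    """Mean-variance rebalancing with proportional transaction costs.

    Maximizes mu'w - (lambda/2) w'Sigma w - fee * ||w - w0||_1 over the
    long-only simplex, starting from current holdings w0.
    """
    n = len(mu)
    obj = lambda w: -(mu @ w - 0.5 * risk_aversion * w @ Sigma @ w
                      - fee * np.abs(w - w0).sum())
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    return minimize(obj, w0, bounds=[(0, 1)] * n, constraints=cons).x
```

The L1 cost term creates a no-trade region around w0, which is the qualitative effect transaction-cost models are designed to capture.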
Energy Technology Data Exchange (ETDEWEB)
Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.
2016-11-01
The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means, resulting in incorrect conclusions. In this work we focused on the adaptive response of cultivars to the environments modeled by LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations, during three growing seasons (2008/2009-2010/2011) from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivar adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars' reaction to environments, and it can be successfully used in METs data after determining the optimal component number for each dataset. (Author)
van Gool, R D; Harris, W F
1997-06-01
Autorefractor measurements were taken on the right eye of 10 students with an external target at vergences -1.00 and -3.00 D. The refractive errors in the form of sphere, cylinder, and axis were converted to vectors h and variance-covariance matrices calculated for different reference meridians. Scatter plots are drawn in symmetric dioptric power space. The profiles of curvital and scaled torsional variances, the scaled torsional fraction, and the scaled torsional-curvital correlation are shown using a polar representation. This form of representation provides a meridional pattern of variation under accommodative demand. The profile for scaled torsional variance is characteristically in the form of a pair of rabbit ears. At both target vergences curvital variance is larger than scaled torsional variance in all the meridians of the eye: the relative magnitudes are quantified by the scaled torsional fraction. An increase in accommodative demand generally results in an increase in variance. The rabbit ears usually become larger but less well divided. The correlation between curvital and torsional powers is usually positive in the first quadrant and negative in the second quadrant. Typical, atypical, and mean typical responses are discussed.
Age-dependent changes in mean and variance of gene expression across tissues in a twin cohort.
Viñuela, Ana; Brown, Andrew A; Buil, Alfonso; Tsai, Pei-Chien; Davies, Matthew N; Bell, Jordana T; Dermitzakis, Emmanouil T; Spector, Timothy D; Small, Kerrin S
2018-02-15
Changes in the mean and variance of gene expression with age have consequences for healthy aging and disease development. Age-dependent changes in phenotypic variance have been associated with a decline in regulatory functions leading to increased disease risk. Here, we investigate age-related mean and variance changes in gene expression measured by RNA-seq of fat, skin, whole blood and derived lymphoblastoid cell lines (LCLs) from 855 adult female twins. We see evidence of up to 60% of age effects on transcription levels shared across tissues, and 47% of those on splicing. Using gene expression variance and discordance between genetically identical MZ twin pairs, we identify 137 genes with age-related changes in variance and 42 genes with age-related discordance between co-twins, implying the latter are driven by environmental effects. We identify four eQTLs whose effect on expression is age-dependent (FDR 5%). Combined, these results show a complicated mix of environmental and genetically driven changes in expression with age. Using the twin structure in our data, we show that additive genetic effects explain considerably more of the variance in gene expression than aging, but less than other environmental factors, potentially explaining why reliable expression-derived biomarkers for healthy aging have proved elusive compared with those derived from methylation. © The Author(s) 2017. Published by Oxford University Press.
Fritts, Andrea; Knights, Brent C.; Lafrancois, Toben D.; Bartsch, Lynn; Vallazza, Jon; Bartsch, Michelle; Richardson, William B.; Karns, Byron N.; Bailey, Sean; Kreiling, Rebecca
2018-01-01
Fatty acid and stable isotope signatures allow researchers to better understand food webs, food sources, and trophic relationships. Research in marine and lentic systems has indicated that the variance of these biomarkers can exhibit substantial differences across spatial and temporal scales, but this type of analysis has not been completed for large river systems. Our objectives were to evaluate variance structures for fatty acids and stable isotopes (i.e. δ13C and δ15N) of seston, threeridge mussels, hydropsychid caddisflies, gizzard shad, and bluegill across spatial scales (10s-100s km) in large rivers of the Upper Mississippi River Basin, USA that were sampled annually for two years, and to evaluate the implications of this variance on the design and interpretation of trophic studies. The highest variance for both isotopes was present at the largest spatial scale for all taxa (except seston δ15N) indicating that these isotopic signatures are responding to factors at a larger geographic level rather than being influenced by local-scale alterations. Conversely, the highest variance for fatty acids was present at the smallest spatial scale (i.e. among individuals) for all taxa except caddisflies, indicating that the physiological and metabolic processes that influence fatty acid profiles can differ substantially between individuals at a given site. Our results highlight the need to consider the spatial partitioning of variance during sample design and analysis, as some taxa may not be suitable to assess ecological questions at larger spatial scales.
Yang, Yi; Tokita, Midori; Ishiguchi, Akira
2018-01-01
A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed.
Excluded-Mean-Variance Neural Decision Analyzer for Qualitative Group Decision Making
Directory of Open Access Journals (Sweden)
Ki-Young Song
2012-01-01
Full Text Available Many qualitative group decisions in professional fields such as law, engineering, economics, psychology, and medicine that appear to be crisp and certain are in reality shrouded in fuzziness as a result of uncertain environments and the nature of human cognition within which the group decisions are made. In this paper we introduce an innovative approach to group decision making in uncertain situations by using a mean-variance neural approach. The key idea of this proposed approach is to compute the excluded mean of individual evaluations and weight it by applying a variance influence function (VIF; this process of weighting the excluded mean by VIF provides an improved result in the group decision making. In this paper, a case study with the proposed excluded-mean-variance approach is also presented. The results of this case study indicate that this proposed approach can improve the effectiveness of qualitative decision making by providing the decision maker with a new cognitive tool to assist in the reasoning process.
Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi
2016-10-01
One of the most important stages in complementary exploration is optimal design of the additional drilling pattern, that is, defining the optimum number and location of additional boreholes. A great deal of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the objective function for uncertainty assessment, and the problem is solved through optimization methods. Although kriging variance has many advantages for defining the objective function, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in the uncertainty assessment of boundaries, the application of combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function make the algorithm output sensitive to variations in grade, the domain's boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms shows that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.
Reduction of treatment delivery variances with a computer-controlled treatment delivery system
International Nuclear Information System (INIS)
Fraass, B.A.; Lash, K.L.; Matrone, G.M.; Lichter, A.S.
1997-01-01
does not depend on fixed therapist staff on particular machines. Results: The overall reported variance rate (all treatments, machines) was < 0.1 % per port or 0.33 % per treatment session. The rate (per machine) depended on automation and plan complexity (see table). Machine M4 (most complex plans and most automation) had the lowest variance rate. The variance rate decreased with increasing automation in spite of increasing plan complexity, while for the manual machines the variance rate increased with complexity. Note that the real variance rates on the two manual machines must be higher than shown here, while (particularly on M4) virtually all random treatment delivery errors were noted by the CCRS system and its QA checks. Treatment delivery times averaged from 14 to 23 minutes per plan, and depended on ports/plan, although this analysis is complicated by other factors. Conclusion: Use of a sophisticated computer-controlled delivery system for routine patient treatments with complex 3-D conformal plans has led to a significant decrease in treatment delivery variances, while at the same time allowing delivery of increasingly complex and sophisticated conformal plans without a significant increase in treatment time. With renewed vigilance for the possibility of systematic problems, it is clear that use of complete and integrated computer-controlled delivery systems can provide significant improvements in treatment delivery, since better plans can be delivered with significantly fewer errors, and without significantly increasing treatment time
Fringe biasing: A variance reduction technique for optically thick meshes
Energy Technology Data Exchange (ETDEWEB)
Smedley-Stevenson, R. P. [AWE PLC, Aldermaston Reading, Berkshire, RG7 4PR (United Kingdom)
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
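A toy 1-D illustration of the sampling idea (the cell and fringe geometry, particle counts and 80/20 allocation are arbitrary choices, not values from the paper): particles are preferentially placed in the fringe layers, and statistical weights restore the unbiased uniform emission density.

```python
import random

def sample_emission(cell_width, fringe_width, n_particles, fringe_fraction=0.8):
    """Fringe-biased (stratified) sampling of emission sites in a 1-D cell."""
    vol_fringe = 2 * fringe_width / cell_width        # volume fraction of fringes
    particles = []
    for _ in range(n_particles):
        if random.random() < fringe_fraction:         # emit from a fringe layer
            x = random.uniform(0.0, fringe_width)
            if random.random() < 0.5:
                x = cell_width - x                    # right-hand fringe
            w = vol_fringe / fringe_fraction          # weight restores uniformity
        else:                                         # emit from the interior
            x = random.uniform(fringe_width, cell_width - fringe_width)
            w = (1 - vol_fringe) / (1 - fringe_fraction)
        particles.append((x, w))
    return particles
```

Only the sourcing routine changes; particle tracking is untouched, which is the simplicity advantage the abstract emphasizes.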
A Note on the Kinks at the Mean Variance Frontier
Vörös, J.; Kriens, J.; Strijbosch, L.W.G.
1997-01-01
In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return, and the converse of this statement is false. A sufficient condition for the existence of kinks at the efficient frontier is given here, and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean variance frontier.
Variance reduction techniques in the simulation of Markov processes
International Nuclear Information System (INIS)
Lessi, O.
1987-01-01
We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to perform the analytical calculation of r and so it is reasonable to estimate r by a simulation process. A consistent estimator r(n) of r is obtained with respect to a chain with a countable state space. Suitably modifying the estimator r(n) of r one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of finite state space
A guide to SPSS for analysis of variance
Levine, Gustav
2013-01-01
This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce
A Fay-Herriot Model with Different Random Effect Variances
Czech Academy of Sciences Publication Activity Database
Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.
2011-01-01
Vol. 40, No. 5 (2011), p. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf
Harrison, Jay M; Howard, Delia; Malven, Marianne; Halls, Steven C; Culler, Angela H; Harrigan, George G; Wolfinger, Russell D
2013-07-03
Compositional studies on genetically modified (GM) and non-GM crops have consistently demonstrated that their respective levels of key nutrients and antinutrients are remarkably similar and that other factors such as germplasm and environment contribute more to compositional variability than transgenic breeding. We propose that graphical and statistical approaches that can provide meaningful evaluations of the relative impact of different factors to compositional variability may offer advantages over traditional frequentist testing. A case study on the novel application of principal variance component analysis (PVCA) in a compositional assessment of herbicide-tolerant GM cotton is presented. Results of the traditional analysis of variance approach confirmed the compositional equivalence of the GM and non-GM cotton. The multivariate approach of PVCA provided further information on the impact of location and germplasm on compositional variability relative to GM.
Multilevel variance estimators in MLMC and application for random obstacle problems
Chernov, Alexey; Bierig, Claudio
2014-01-06
The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.
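As a sketch of the underlying construction, both E[Q] and Var[Q] = E[Q²] − E[Q]² can be written as telescoping sums of level differences and estimated with per-level sample counts. A simple variant in Python (the solve hook and seeding scheme are hypothetical; the paper's estimators are more refined):

```python
import numpy as np

def mlmc_mean_variance(solve, n_samples):
    """Multilevel Monte Carlo estimators of E[Q] and Var[Q].

    solve(level, seed) -> Q on discretization `level` for the random input
    generated from `seed`; reusing the seed on levels l and l-1 yields the
    coupled pair that the telescoping sum requires.
    n_samples: list of sample counts per level.
    """
    mean_est, second_moment = 0.0, 0.0
    for level, n in enumerate(n_samples):
        qf = np.array([solve(level, (level, i)) for i in range(n)])
        qc = (np.array([solve(level - 1, (level, i)) for i in range(n)])
              if level > 0 else np.zeros(n))
        mean_est += (qf - qc).mean()                 # telescoping sum for E[Q]
        second_moment += (qf ** 2 - qc ** 2).mean()  # and for E[Q^2]
    return mean_est, second_moment - mean_est ** 2
```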
Sex Estimation From Modern American Humeri and Femora, Accounting for Sample Variance Structure
DEFF Research Database (Denmark)
Boldsen, J. L.; Milner, G. R.; Boldsen, S. K.
2015-01-01
Objectives: A new procedure for skeletal sex estimation based on humeral and femoral dimensions is presented, based on skeletons from the United States. The approach specifically addresses the problem that arises from a lack of variance homogeneity between the sexes, taking into account prior information about the sample's sex ratio, if known. Material and methods: Three measurements useful for estimating the sex of adult skeletons, the humeral and femoral head diameters and the humeral epicondylar breadth, were collected from 258 Americans born between 1893 and 1980 who died within the past several decades. Results: For measurements individually and collectively, the probabilities of being one sex or the other were generated for samples with an equal distribution of males and females, taking into account the variance structure of the original measurements. The combination providing the best...
Time-Consistent Strategies for a Multiperiod Mean-Variance Portfolio Selection Problem
Directory of Open Access Journals (Sweden)
Huiling Wu
2013-01-01
Full Text Available In past years it has been common to derive precommitment strategies for Markowitz's mean-variance portfolio optimization problems, but not much is known about their time-consistent counterparts. This paper takes a step toward investigating the time-consistent Nash equilibrium strategies for a multiperiod mean-variance portfolio selection problem. Under the assumption that the risk aversion is, respectively, a constant and a function of the current wealth level, we obtain explicit expressions for the time-consistent Nash equilibrium strategy and the equilibrium value function. Many interesting properties of the time-consistent results are identified through numerical sensitivity analysis and by comparing them with the classical precommitment solutions.
A log-sinh transformation for data normalization and variance stabilization
Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.
2012-05-01
When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
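The transformation itself is compact; a small numpy sketch of the forward and inverse maps as commonly written for this scheme, z = (1/b) log(sinh(a + b·y)) (parameter names follow the usual convention and would be fitted to the data):

```python
import numpy as np

def log_sinh(y, a, b):
    """Log-sinh transform of a positive variable y."""
    return np.log(np.sinh(a + b * y)) / b

def log_sinh_inverse(z, a, b):
    """Back-transform: y = (arcsinh(exp(b z)) - a) / b."""
    return (np.arcsinh(np.exp(b * z)) - a) / b
```

For small a + b·y the map behaves like a log transformation (strong variance stabilization), while for large arguments sinh grows exponentially and the map becomes nearly linear, matching the error pattern described in the abstract.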
Variance-based selection may explain general mating patterns in social insects.
Rueppell, Olav; Johnson, Nels; Rychtár, Jan
2008-06-23
Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
DEFF Research Database (Denmark)
Campbell, Danny; Mørkbak, Morten Raun; Olsen, Søren Bøye
2018-01-01
In this article we utilize the time respondents require to answer a self-administered online stated preference survey. While the effects of response time have been previously explored, this article proposes a different approach that explicitly recognizes the highly equivocal relationship between response time and respondents' choices. In particular, we attempt to disentangle preference, variance and processing heterogeneity and explore whether response time helps to explain these three types of heterogeneity. For this, we divide the data (ordered by response time) into approximately equal-sized subsets, and then derive different class membership probabilities for each subset. We estimate a large number of candidate models and subsequently conduct a frequentist-based model averaging approach using information criteria to derive weights of evidence for each model. Our findings show a clear link between response time and utility coefficients, error variance and processing strategies. Our results thus emphasize the importance of considering response time when modeling stated choice data.
Kawanishi, Y; Moritomo, H; Omori, S; Kataoka, T; Murase, T; Sugamoto, K
2014-06-01
Positive ulnar variance is associated with ulnar impaction syndrome, and ulnar variance is reported to increase with pronation. However, radiographic measurement can be affected markedly by the incident angle of the X-ray beam. We performed three-dimensional (3-D) computed tomography measurements of ulnar variance and ulnolunate distance during forearm rotation and compared these with plain radiographic measurements in 15 healthy wrists. From supination to pronation, ulnar variance increased in all cases on the radiographs; mean ulnar variance increased significantly and mean ulnolunate distance decreased significantly. However, on 3-D imaging, ulnar variance decreased in 12 cases on moving into pronation and increased in three cases; neither the mean ulnar variance nor the mean ulnolunate distance changed significantly. Our results suggest that the forearm position in which ulnar variance increases varies among individuals. This may explain why some patients with ulnar impaction syndrome complain of wrist pain exacerbated by forearm supination. It also suggests that standard radiographic assessments of ulnar variance are unreliable. © The Author(s) 2013.
Analysis of inconsistent source sampling in monte carlo weight-window variance reduction methods
Directory of Open Access Journals (Sweden)
David P. Griesheimer
2017-09-01
Full Text Available The application of Monte Carlo (MC to large-scale fixed-source problems has recently become possible with new hybrid methods that automate generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.
Risk-sensitivity and the mean-variance trade-off: decision making in sensorimotor control.
Nagengast, Arne J; Braun, Daniel A; Wolpert, Daniel M
2011-08-07
Numerous psychophysical studies suggest that the sensorimotor system chooses actions that optimize the average cost associated with a movement. Recently, however, violations of this hypothesis have been reported in line with economic theories of decision-making that not only consider the mean payoff, but are also sensitive to risk, that is the variability of the payoff. Here, we examine the hypothesis that risk-sensitivity in sensorimotor control arises as a mean-variance trade-off in movement costs. We designed a motor task in which participants could choose between a sure motor action that resulted in a fixed amount of effort and a risky motor action that resulted in a variable amount of effort that could be either lower or higher than the fixed effort. By changing the mean effort of the risky action while experimentally fixing its variance, we determined indifference points at which participants chose equiprobably between the sure, fixed amount of effort option and the risky, variable effort option. Depending on whether participants accepted a variable effort with a mean that was higher, lower or equal to the fixed effort, they could be classified as risk-seeking, risk-averse or risk-neutral. Most subjects were risk-sensitive in our task consistent with a mean-variance trade-off in effort, thereby, underlining the importance of risk-sensitivity in computational models of sensorimotor control.
Parameter uncertainty effects on variance-based sensitivity analysis
International Nuclear Information System (INIS)
Yu, W.; Harris, T.J.
2009-01-01
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.
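The paper's sequential regressive-variable/parameter partition is not reproduced here, but the variance attribution it builds on can be illustrated with a generic pick-freeze estimator of first-order Sobol' indices (model and sampler are hypothetical user hooks; a sketch, not the authors' method):

```python
import numpy as np

def first_order_indices(model, sampler, n=10000, d=3, rng=None):
    """First-order variance-based sensitivity indices by pick-freeze.

    model(X):        maps an (n, d) input array to a length-n output
    sampler(n, rng): draws (n, d) independent inputs
    """
    rng = rng or np.random.default_rng(0)
    A, B = sampler(n, rng), sampler(n, rng)
    yA = model(A)
    S = np.empty(d)
    for i in range(d):
        Bi = B.copy()
        Bi[:, i] = A[:, i]          # freeze factor i, re-pick all others
        # the two runs share only X_i, so Cov estimates Var(E[Y|X_i])
        S[i] = np.cov(yA, model(Bi))[0, 1] / yA.var(ddof=1)
    return S
```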
Variance of indoor radon concentration: Major influencing factors
Energy Technology Data Exchange (ETDEWEB)
Yarmoshenko, I., E-mail: ivy@ecko.uran.ru [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Vasilyev, A.; Malinovsky, G. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Bossew, P. [German Federal Office for Radiation Protection (BfS), Berlin (Germany); Žunić, Z.S. [Institute of Nuclear Sciences “Vinca”, University of Belgrade (Serbia); Onischenko, A.; Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation)
2016-01-15
Variance of radon concentration in dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. Analysis includes review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements and detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that main factors influencing the dispersion of indoor radon concentration over the territory are as follows: area of territory, sample size, characteristics of measurements technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and Niška Banja town the dispersion as quantified by GSD is reduced by restricting to certain levels of control factors. Application of the developed approach to characterization of the world population radon exposure is discussed. - Highlights: • Influence of lithosphere and anthroposphere on variance of indoor radon is found. • Level-by-level analysis reduces GSD by a factor of 1.9. • Worldwide GSD is underestimated.
Variance Component Selection With Applications to Microbiome Taxonomic Data
Directory of Open Access Journals (Sweden)
Jing Zhai
2018-03-01
Full Text Available High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method versus existing methods, for example, group-lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method using the high performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.
DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN-VARIANCE MODEL
Directory of Open Access Journals (Sweden)
I GEDE ERY NISCAHYANA
2016-08-01
Full Text Available When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean-variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean-variance model on autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) of FMII stock, 0.0473 (5%) of BNLI stock, 0% of SMDM stock, and 1% of SMGR stock.
Mean-Variance-Validation Technique for Sequential Kriging Metamodels
International Nuclear Information System (INIS)
Lee, Tae Hee; Kim, Ho Sung
2010-01-01
The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. A leave-k-out cross-validation technique not only involves a considerably high computational cost but also cannot properly measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean 0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error measure in the proposed validation technique resembles a root mean squared error, so it can be used to determine a stopping criterion for the sequential sampling of metamodels.
PET image reconstruction: mean, variance, and optimal minimax criterion
International Nuclear Information System (INIS)
Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing
2015-01-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors with possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted to assess clinical potential. (paper)
The Variance-covariance Method using IOWGA Operator for Tourism Forecast Combination
Directory of Open Access Journals (Sweden)
Liangping Wu
2014-08-01
Full Text Available Three combination methods commonly used in tourism forecasting are the simple average method, the variance-covariance method and the discounted MSFE method. These methods assign to each individual forecasting model weights that cannot change over time. In this study, we introduce into tourism forecasting the IOWGA operator combination method, which can overcome this defect of the three previous combination methods. Moreover, we further investigate the performance of the four combination methods through a theoretical evaluation and a forecasting evaluation. The results of the theoretical evaluation show that the IOWGA operator combination method performs extremely well and outperforms the other forecast combination methods. Furthermore, in the forecasting evaluation the IOWGA operator combination method also performs well, almost the same as the variance-covariance combination method. The IOWGA operator combination method mainly reflects the maximization of forecasting accuracy, while the variance-covariance combination method mainly reflects the minimization of forecast error. For future research, it may be worthwhile to introduce and examine other new combination methods that may improve forecasting accuracy, or to employ other techniques to control the timing of weight updates in combined forecasts.
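For reference, the variance-covariance combination against which the IOWGA method is compared has a closed form: with Σ the covariance matrix of past forecast errors, the weights minimizing combined error variance subject to summing to one are w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A short sketch:

```python
import numpy as np

def var_cov_weights(errors):
    """Variance-covariance combination weights from past forecast errors.

    errors: (T, k) matrix of past errors of k individual forecasting models.
    """
    Sigma = np.cov(errors, rowvar=False)   # error covariance matrix
    ones = np.ones(errors.shape[1])
    x = np.linalg.solve(Sigma, ones)
    return x / (ones @ x)                  # w = Sigma^-1 1 / (1' Sigma^-1 1)

# combined_forecast = individual_forecasts @ var_cov_weights(past_errors)
```

The IOWGA approach differs precisely in that its weights are induced by the ordering of recent accuracies and so can change at each time point.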
Poplová, Michaela; Sovka, Pavel; Cifra, Michal
2017-01-01
Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
The scope and control of attention: Sources of variance in working memory capacity.
Chow, Michael; Conway, Andrew R A
2015-04-01
Working memory capacity is a strong positive predictor of many cognitive abilities, across various domains. The pattern of positive correlations across domains has been interpreted as evidence for a unitary source of inter-individual differences in behavior. However, recent work suggests that there are multiple sources of variance contributing to working memory capacity. The current study (N = 71) investigates individual differences in the scope and control of attention, in addition to the number and resolution of items maintained in working memory. Latent variable analyses indicate that the scope and control of attention reflect independent sources of variance and each account for unique variance in general intelligence. Also, estimates of the number of items maintained in working memory are consistent across tasks and related to general intelligence whereas estimates of resolution are task-dependent and not predictive of intelligence. These results provide insight into the structure of working memory, as well as intelligence, and raise new questions about the distinction between number and resolution in visual short-term memory.
Wright, George W; Simon, Richard M
2003-12-12
Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately, expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene-by-gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model in which the within-gene variances are drawn from an inverse gamma distribution, whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
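A sketch of a moderated test in the spirit of this model: the per-gene variance is shrunk toward a prior estimate fitted across all genes, and the t statistic gains the prior degrees of freedom. The weighting below is the generic empirical-Bayes form; the paper's exact inverse-gamma formulas may differ, and d0, s0_sq would be estimated from the full gene set:

```python
import numpy as np
from scipy import stats

def moderated_t(x, y, d0, s0_sq):
    """Two-sample t statistic with an empirical-Bayes shrunken variance."""
    n1, n2 = len(x), len(y)
    d = n1 + n2 - 2                                   # residual df for one gene
    sp_sq = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / d
    s_sq = (d * sp_sq + d0 * s0_sq) / (d + d0)        # shrunken variance
    t = (x.mean() - y.mean()) / np.sqrt(s_sq * (1 / n1 + 1 / n2))
    return t, 2 * stats.t.sf(abs(t), d + d0)          # extra df from the prior
```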
The contribution of the mitochondrial genome to sex-specific fitness variance.
Smith, Shane R T; Connallon, Tim
2017-05-01
Maternal inheritance of mitochondrial DNA (mtDNA) facilitates the evolutionary accumulation of mutations with sex-biased fitness effects. Whereas maternal inheritance closely aligns mtDNA evolution with natural selection in females, it renders mtDNA evolution indifferent to changes that exclusively benefit males. This constrained response to selection in males can lead to asymmetries in the relative contributions of mitochondrial genes to female versus male fitness variation. Here, we examine the impact of genetic drift and of the distribution of fitness effects (DFE) among mutations, including the correlation of mutant fitness effects between the sexes, on mitochondrial genetic variation for fitness. We show how drift, genetic correlations, and skewness of the DFE determine the relative contributions of mitochondrial genes to male versus female fitness variance. When mutant fitness effects are weakly correlated between the sexes and the effective population size is large, mitochondrial genes should contribute much more to male than to female fitness variance. In contrast, high fitness correlations and small population sizes tend to equalize the contributions of mitochondrial genes to female versus male variance. We discuss implications of these results for the evolution of mitochondrial genome diversity and the genetic architecture of female and male fitness. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Estimation of the biserial correlation and its sampling variance for use in meta-analysis.
Jacobs, Perke; Viechtbauer, Wolfgang
2017-06-01
Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
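For reference, the textbook conversion from the point-biserial to the biserial coefficient (the estimator whose sampling variance the paper studies) looks like this; the helper name and the simulation are illustrative only:

```python
import numpy as np
from scipy.stats import norm, pearsonr

def biserial(x_binary, y):
    """Biserial correlation between an artificially dichotomized
    variable and a continuous variable, via the textbook conversion
    from the point-biserial (Pearson) coefficient."""
    x = np.asarray(x_binary, dtype=float)
    r_pb = pearsonr(x, y)[0]           # point-biserial = Pearson r
    p = x.mean()                       # proportion in the upper group
    h = norm.pdf(norm.ppf(p))          # normal ordinate at the cut point
    return r_pb * np.sqrt(p * (1 - p)) / h

# Check: dichotomize one of two normals correlated at 0.5 at its median;
# the biserial estimate should recover the underlying 0.5.
rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], 5000)
print(biserial(z[:, 0] > 0, z[:, 1]))
```

The division by the normal ordinate h is what undoes the attenuation caused by dichotomization, and it is also the source of the delicate sampling-variance behavior the simulations examine.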
Prochaska, John D; Buschmann, Robert N; Jupiter, Daniel; Mutambudzi, Miriam; Peek, M Kristen
2018-06-01
Research suggests a linkage between perceptions of neighborhood quality and the likelihood of engaging in leisure-time physical activity. Often in these studies, intra-neighborhood variance is viewed as something to be controlled for statistically. However, we hypothesized that intra-neighborhood variance in perceptions of neighborhood quality may itself be contextually relevant. We examined the relationship between intra-neighborhood variance of subjective neighborhood quality and neighborhood-level reported physical inactivity across 48 neighborhoods in a medium-sized city (Texas City, Texas), using survey data from 2706 residents collected between 2004 and 2006. Neighborhoods where the aggregated perception of neighborhood quality was poor also had a larger proportion of residents reporting being physically inactive. However, a higher degree of disagreement among residents within a neighborhood about its quality was significantly associated with a lower proportion of residents reporting being physically inactive (p=0.001). Our results suggest that intra-neighborhood variability may be contextually relevant in studies seeking to better understand the relationship between neighborhood quality and behaviors sensitive to neighborhood environments, like physical activity. Copyright © 2017 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Mariúcha Nóbrega Bezerra
2016-09-01
Full Text Available This paper analyzes the efficacy of variance and of downside-risk measures for the formation of investment portfolios in the Brazilian stock market. Using the methodologies of Ang (1975), Markowitz et al. (1993), Ballestero (2005), Estrada (2008) and Cumova and Nawrocki (2011), we sought the best method for solving the problem of the asymmetric and endogenous matrix and, inspired by the work of Markowitz (1952) and Lohre, Neumann and Winterfeldt (2010), aimed to determine which risk metric is most suitable for more efficient allocation of resources in the Brazilian stock market. The sample was composed of the stocks of the IBrX 50, from 2000 to 2013. The results indicate that, when semivariance is used as the measure of asymmetric risk and the investor can apply more refined models to the problem of the asymmetric semivariance-cosemivariance matrix, the model of Cumova and Nawrocki (2011) is the most effective. Furthermore, for the Brazilian data, VaR proved more effective than variance and the other downside-risk measures with respect to minimizing the risk of loss. Thus, under the assumption that investors have asymmetric preferences regarding risk, forming stock portfolios in the Brazilian market is more efficient under downside-risk minimization criteria than under the traditional mean-variance approach.
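As a concrete anchor for one of the cited approaches: Estrada (2008) side-steps the endogeneity of the semicovariance matrix by defining it symmetrically against a fixed benchmark, after which portfolio selection reduces to an ordinary quadratic program. A hedged Python sketch with toy returns and a long-only constraint (both assumptions of this illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
R = rng.normal(0.001, 0.02, (500, 4))        # toy daily returns, 4 stocks
B = 0.0                                      # benchmark return

# Estrada-style semicovariance: co-movements below the benchmark only.
# Because min(r - B, 0) does not depend on the portfolio weights, the
# matrix is symmetric and exogenous.
D = np.minimum(R - B, 0.0)
S = D.T @ D / len(R)

# Minimum-downside-risk portfolio: min w'Sw, fully invested, long-only.
n = R.shape[1]
res = minimize(lambda w: w @ S @ w, np.full(n, 1 / n),
               bounds=[(0, 1)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(res.x.round(3))
```

Cumova and Nawrocki (2011) instead attack the endogenous matrix directly with an iterative algorithm, which is the more refined route the paper finds most effective when semivariance is the risk measure.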
Mean and variance evolutions of the hot and cold temperatures in Europe
Energy Technology Data Exchange (ETDEWEB)
Parey, Sylvie [EDF/R and D, Chatou Cedex (France); Dacunha-Castelle, D. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); Hoang, T.T.H. [Universite Paris 11, Laboratoire de Mathematiques, Orsay (France); EDF/R and D, Chatou Cedex (France)
2010-02-15
In this paper, we examine the trends of temperature series in Europe, for the mean as well as for the variance, in hot and cold seasons. To do so, we use series that are as long and homogeneous as possible, provided by the European Climate Assessment and Dataset project for different locations in Europe, as well as the European ENSEMBLES project gridded dataset and the ERA40 reanalysis. We provide a definition of trends that we keep as intrinsic as possible and apply non-parametric statistical methods to analyse them. The results show a clear link between trends in the mean and in the variance of the whole series of hot or cold temperatures: in general, variance increases when the absolute value of temperature increases, i.e. with increasing summer temperature and decreasing winter temperature. This link is reinforced in locations where the winter and summer climate is more variable. In very cold or very warm climates, the variability is lower and the link between the trends is weaker. We performed the same analysis on the outputs of six climate models proposed by European teams for the 1961-2000 period (1950-2000 for one model), available through the PCMDI portal for the IPCC fourth assessment climate model simulations. The models generally perform poorly and have difficulty capturing the relation between the two trends, especially in summer. (orig.)
Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control
Directory of Open Access Journals (Sweden)
Shanshan Gu
2015-01-01
Full Text Available Because dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification-accuracy requirements for fiber optic gyro (FOG) signals over the whole time domain, a DAVAR analysis method with a time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller whose inputs are the first and second derivatives of the FOG signal is designed to set the window length of the DAVAR. The Allan variances of the signal within each time-variant window are then computed to obtain the DAVAR of the FOG signal and describe its time-varying dynamic characteristics. Additionally, a performance evaluation index for the algorithm, based on a radar chart, is proposed. Experimental results show that, compared with fixed-window-length DAVAR methods, the proposed method identifies changes of the FOG signal over time effectively and improves the performance evaluation index by at least 30%.
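The mechanics of DAVAR are easy to state: compute the Allan variance inside a sliding window whose length may vary with time. A minimal sketch follows; the fuzzy controller that selects the window length in the paper is not reproduced here (a fixed half-length stands in for it), and all signal parameters are made up:

```python
import numpy as np

def allan_var(y, m):
    """Non-overlapped Allan variance at cluster size m (samples)."""
    k = len(y) // m
    yb = y[:k * m].reshape(k, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(yb) ** 2)

def davar(y, centers, half_lengths, m):
    """Dynamic Allan variance: Allan variance inside a sliding window.
    half_lengths may vary with time -- in the paper it is set by a
    fuzzy controller driven by the signal's first and second
    derivatives; here it is simply passed in."""
    out = []
    for c, h in zip(centers, half_lengths):
        lo, hi = max(0, c - h), min(len(y), c + h)
        out.append(allan_var(y[lo:hi], m))
    return np.array(out)

# Toy FOG-like signal: white noise whose level jumps halfway through.
rng = np.random.default_rng(4)
y = np.r_[rng.normal(0, 1e-3, 5000), rng.normal(0, 5e-3, 5000)]
centers = np.arange(500, 9500, 250)
print(davar(y, centers, [400] * len(centers), m=10)[:4])
```

A controller that shortens the window near the jump would localize the change more sharply, at the price of noisier variance estimates; that trade-off is exactly what the fuzzy rule base manages.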
Hao, Wenrui; Lu, Zhenzhou; Li, Luyi
2013-05-01
In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance-measure indices is proposed for a model with correlated inputs; it includes indices for the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of a correlated input to the output variance, and they can be viewed as a complement and correction to the interpretation of correlated-input contributions presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both indices contain the independent contribution of the individual input. Taking a general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, from which the components of both contributions, and their origins, can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial, the total correlated contribution of an input can be further decomposed into the variance contribution related to the correlation of that input with the other inputs and the independent contribution of the input itself; likewise, the total uncorrelated contribution can be decomposed into an independent part due to interaction between the input and the others and an independent part due to the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measures are correct, and that the analytical clarification of the correlated-input contributions is an important step in extending the theory and solution methods for uncorrelated inputs to the correlated case.
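The flavor of such indices is visible on the simplest correlated-input model, Y = a1*X1 + a2*X2 with standard normal inputs correlated at rho: the full (correlated) contribution of X1 is Var(E[Y|X1]) = (a1 + rho*a2)^2, while its independent contribution is a1^2*(1 - rho^2). The paper's total correlated/uncorrelated decomposition generalizes this split; the Monte Carlo check below is illustrative only, with arbitrary coefficients:

```python
import numpy as np

rng = np.random.default_rng(5)
a1, a2, rho = 1.0, 0.5, 0.6
n = 200_000

# Correlated standard-normal inputs and a simple linear model.
x = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n)
y = a1 * x[:, 0] + a2 * x[:, 1]

# Full (correlated) contribution of X1: Var(E[Y | X1]), estimated by
# binning X1 into 50 equiprobable bins.
edges = np.quantile(x[:, 0], np.linspace(0, 1, 51)[1:-1])
bins = np.digitize(x[:, 0], edges)
cond_means = np.array([y[bins == b].mean() for b in range(50)])
weights = np.array([(bins == b).mean() for b in range(50)])
v_full = np.sum(weights * (cond_means - y.mean()) ** 2)

# Independent (uncorrelated) contribution of X1:
# Var(Y) - Var(E[Y | X2]), using the analytic E[Y | X2] for this model.
v_ind = y.var() - (a2 + rho * a1) ** 2

print(v_full, (a1 + rho * a2) ** 2)   # ~ (a1 + rho*a2)^2
print(v_ind, a1 ** 2 * (1 - rho ** 2))  # ~ a1^2 (1 - rho^2)
```

The gap between the two numbers for the same input is precisely the part of its apparent importance that it owes to correlation with the other input.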
Genung, Mark A; Fox, Jeremy; Williams, Neal M; Kremen, Claire; Ascher, John; Gibbs, Jason; Winfree, Rachael
2017-07-01
The relationship between biodiversity and the stability of ecosystem function is a fundamental question in community ecology, and hundreds of experiments have shown a positive relationship between species richness and the stability of ecosystem function. However, these experiments have rarely accounted for common ecological patterns, most notably skewed species abundance distributions and non-random extinction risks, making it difficult to know whether experimental results can be scaled up to larger, less manipulated systems. In contrast with the prolific body of experimental research, few studies have examined how species richness affects the stability of ecosystem services at more realistic, landscape scales. The paucity of these studies is due in part to a lack of analytical methods that are suitable for the correlative structure of ecological data. A recently developed method, based on the Price equation from evolutionary biology, helps resolve this knowledge gap by partitioning the effect of biodiversity into three components: richness, composition, and abundance. Here, we build on previous work and present the first derivation of the Price equation suitable for analyzing temporal variance of ecosystem services. We applied our new derivation to understand the temporal variance of crop pollination services in two study systems (watermelon and blueberry) in the mid-Atlantic United States. In both systems, but especially in the watermelon system, the stronger driver of temporal variance of ecosystem services was fluctuations in the abundance of common bee species, which were present at nearly all sites regardless of species richness. In contrast, temporal variance of ecosystem services was less affected by differences in species richness, because lost and gained species were rare. Thus, the findings from our more realistic landscapes differ qualitatively from the findings of biodiversity-stability experiments. © 2017 by the Ecological Society of America.
Spatially tuned normalization explains attention modulation variance within neurons.
Ni, Amy M; Maunsell, John H R
2017-09-01
Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803-813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375-1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical
Estimation of measurement variance in the context of environment statistics
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment and on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics are required to produce statistical information of higher quality, for which timely, reliable and comparable data are needed. The lack of proper, uniform definitions and of unambiguous classifications poses serious problems for procuring good-quality data and causes measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling design considered is two-stage sampling.
Risk Management - Variance Minimization or Lower Tail Outcome Elimination
DEFF Research Database (Denmark)
Aabo, Tom
2002-01-01
This paper illustrates the profound difference between a risk management strategy of variance minimization and a risk management strategy of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess on future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less, depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations in a corporate value-adding perspective. A cross-case study of blue-chip industrial companies partly supports the empirical use of a risk management strategy of lower tail outcome elimination but does not exclude other factors from (co-)driving the observations.
Draft no-migration variance petition. Volume 1
International Nuclear Information System (INIS)
1995-01-01
The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of this waste has been generated and is stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is Volume 1, which discusses the regulatory framework, site characterization, facility description, waste description, environmental impact analysis, monitoring, quality assurance, long-term compliance analysis, and regulatory compliance assessment.
Cosmic variance in inflation with two light scalars
Energy Technology Data Exchange (ETDEWEB)
Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie; Shandera, Sarah, E-mail: bpb165@psu.edu, E-mail: suddhasattwa.brahma@gmail.com, E-mail: asdeutsch@psu.edu, E-mail: shandera@gravity.psu.edu [Institute for Gravitation and the Cosmos and Physics Department, The Pennsylvania State University, University Park, PA, 16802 (United States)
2016-05-01
We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden-sector scalar is sufficiently light (m ≲ 0.1 H), the coupling between long- and short-wavelength modes from the series of higher-order correlation functions (from arbitrary-order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode coupling from these diagrams always has the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2004-01-01
Although Russian roulette is applied very often in Monte Carlo calculations, little literature exists on its quantitative influence on the variance and efficiency of a Monte Carlo calculation. Elaborating on the work of Lux and Koblinger using moment equations, new relevant equations are derived for the variance of a Monte Carlo simulation using Russian roulette. To demonstrate its practical application, the theory is applied to a simplified transport model, resulting in explicit analytical expressions for the variance of a Monte Carlo calculation and for the expected number of collisions per history. Numerical results from these expressions are shown and compared with actual Monte Carlo calculations, showing excellent agreement. By considering the number of collisions in a Monte Carlo calculation as a measure of the CPU time, the efficiency of Russian roulette can also be studied. This opens the way for further investigations, including optimization of the Russian roulette parameters. (authors)
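The basic weight game is easy to demonstrate. In the toy history below (a stand-in for the paper's simplified transport model, with made-up constants), implicit capture multiplies the weight by the non-absorption probability at each collision, and Russian roulette removes low-weight histories without biasing the mean score:

```python
import numpy as np

rng = np.random.default_rng(6)

def history(use_roulette, c=0.7, p_escape=0.2, w_cut=0.1, w_new=0.5):
    """One Monte Carlo history with implicit capture: the weight is
    multiplied by the non-absorption probability c at every collision,
    and the particle escapes (and scores its weight) with probability
    p_escape. Russian roulette kills low-weight particles: survive
    with probability w / w_new and continue at weight w_new, so the
    expected carried weight is unchanged."""
    w, collisions = 1.0, 0
    while True:
        if rng.random() < p_escape:
            return w, collisions          # score the escaping weight
        w *= c
        collisions += 1
        if use_roulette and w < w_cut:
            if rng.random() < w / w_new:
                w = w_new                 # survivor carries more weight
            else:
                return 0.0, collisions    # killed: nothing to score

for flag in (False, True):
    scores, ncol = zip(*(history(flag) for _ in range(200_000)))
    print(flag, np.mean(scores), np.mean(ncol))
```

Both runs give the same mean score (analytically p/(1 − (1 − p)c) ≈ 0.45 here), but with roulette the deep low-weight tails of histories are cut off at the cost of some extra variance; quantifying that variance-versus-CPU-time trade-off is what the moment-equation analysis does rigorously.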
Directory of Open Access Journals (Sweden)
G. R. Pasha
2006-07-01
Full Text Available In this paper, we present how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and is therefore an attractive choice.
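For the Maxwell distribution with scale parameter a (density proportional to x² exp(−x²/2a²)), the MLE is â = sqrt(Σx²/(3n)) and the Fisher information per observation is 6/a², giving the minimum variance bound a²/(6n). A quick simulation confirming that the MLE essentially attains the bound; the parameter values are arbitrary:

```python
import numpy as np
from scipy.stats import maxwell

rng = np.random.default_rng(7)
a, n, reps = 2.0, 100, 20_000

# E[X^2] = 3 a^2 for the Maxwell distribution, so the MLE solves
# a_hat^2 = mean(x^2) / 3.
x = maxwell.rvs(scale=a, size=(reps, n), random_state=rng)
a_mle = np.sqrt((x ** 2).mean(axis=1) / 3.0)

print(a_mle.var())          # simulated variance of the MLE
print(a ** 2 / (6 * n))     # Cramer-Rao / minimum variance bound
```

The two printed numbers agree to within simulation noise, which is the sense in which the MLE "attains" the bound for this model.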
International Nuclear Information System (INIS)
Dumonteil, E.; Diop, C. M.
2009-01-01
This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme for solving the coupled Boltzmann/Bateman equations with Monte Carlo transport codes. The last section presents numerical results on a simple example. (authors)
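The scalar analogue conveys the idea (the paper's matrix case is more involved, so treat this strictly as an illustration): if X̄ is the mean of n draws from N(μ, σ²) with σ² known, then E[exp(X̄)] = exp(μ + σ²/(2n)), so exp(X̄ − σ²/(2n)) is unbiased for exp(μ):

```python
import numpy as np

rng = np.random.default_rng(8)
mu, sigma, n, reps = 0.3, 1.0, 25, 100_000

# Sampling distribution of the sample mean: N(mu, sigma^2 / n).
xbar = rng.normal(mu, sigma / np.sqrt(n), reps)

naive = np.exp(xbar)                            # biased upward
corrected = np.exp(xbar - sigma ** 2 / (2 * n)) # unbiased for exp(mu)

print(np.exp(mu), naive.mean(), corrected.mean())
```

The naive plug-in estimator overshoots by the lognormal factor exp(σ²/(2n)); the deterministic correction removes exactly that factor.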
Competitiveness of firms, performance and customer orientation measures – empirical survey results
Directory of Open Access Journals (Sweden)
Alena Klapalová
2011-01-01
Full Text Available The purpose of this paper is to present results from two empirical surveys concerning selected factors that can be connected to the customer orientation, performance and competitiveness of firms. The surveys also aimed to reveal potential differences between sectors arising from the different influences of the internal as well as the external environment. A survey instrument was developed to analyse the relationship between several variables measuring the customer orientation of the surveyed firms, and between these factors and the level of financial performance. Several statistical methods were applied to analyse the data: descriptive statistics (means and standard deviations), one-way analysis of variance (ANOVA) with Bonferroni post-hoc tests, using financial performance to cluster firms and to assess potential differences in the evaluation of customer-orientation criteria, and Spearman rank correlation coefficients to assess the linear bivariate relationships between customer-orientation variables. The ANOVA results show that only innovativeness is a distinguishing criterion in conformity with the indicators of financial prosperity, and that there are some differences between companies from the two groups of sectors in managers' perception of customer-orientation criteria performance.
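For readers who want to reproduce this style of analysis, the core computations are one-liners in SciPy; the data below are fabricated stand-ins for the survey variables:

```python
import numpy as np
from scipy.stats import f_oneway, spearmanr, ttest_ind

rng = np.random.default_rng(9)

# Hypothetical customer-orientation scores for firms clustered by
# financial performance (low / medium / high).
low, med, high = (rng.normal(m, 0.8, 40) for m in (3.0, 3.2, 3.9))

print(f_oneway(low, med, high))           # one-way ANOVA across clusters

# Bonferroni post-hoc: pairwise t-tests judged at alpha / (3 pairs).
print(ttest_ind(low, high).pvalue < 0.05 / 3)

# Spearman rank correlation between two customer-orientation variables.
innov = rng.normal(size=120)
perf = 0.4 * innov + rng.normal(size=120)
print(spearmanr(innov, perf))
```

The Bonferroni step simply divides the significance level by the number of pairwise comparisons, which is the post-hoc correction named in the abstract.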
Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L
2012-12-01
The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose but have limitations due to their computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and the residual-variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software, and an iterative reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. IRWLS was applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0.52 for IRWLS and -0.62 in Sorensen & Waagepetersen (2003).
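Stripped of random effects and leverage corrections, the interconnected IRWLS idea reduces to two alternating regressions: weighted least squares for the mean model, and a gamma GLM with log link fitted to the squared residuals for the residual-variance model. The sketch below is a simplification under exactly those assumptions, not the ASReml or h-likelihood implementation:

```python
import numpy as np

rng = np.random.default_rng(13)

# Toy data: the mean depends on X, the residual variance on Z.
n = 4000
X = np.c_[np.ones(n), rng.normal(size=n)]
Z = np.c_[np.ones(n), rng.integers(0, 2, n)]        # variance covariate
beta_true, gamma_true = np.array([1.0, 2.0]), np.array([-1.0, 1.2])
y = X @ beta_true + rng.normal(0, np.exp(0.5 * Z @ gamma_true))

gamma = np.zeros(Z.shape[1])
for _ in range(20):
    phi = np.exp(Z @ gamma)                          # residual variances
    # (1) weighted least squares for the mean model
    W = 1.0 / phi
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
    # (2) gamma GLM with log link on the squared residuals: for this
    # link the IRLS weights are constant, so one OLS step on the
    # working response updates gamma.
    d = (y - X @ beta) ** 2
    z = Z @ gamma + (d - phi) / phi
    gamma = np.linalg.lstsq(Z, z, rcond=None)[0]

print(beta, gamma)   # should approach beta_true, gamma_true
```

The full DHGLM machinery adds random effects in both parts, leverage adjustments to the squared residuals, and the mean-variance correlation that the paper highlights.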
Sztepanacz, Jacqueline L; Rundle, Howard D
2012-10-01
Directional selection is prevalent in nature, yet phenotypes tend to remain relatively constant, suggesting a limit to trait evolution. However, the genetic basis of this limit is unresolved. Given widespread pleiotropy, opposing selection on a trait may arise from the effects of the underlying alleles on other traits under selection, generating net stabilizing selection on trait genetic variance. These pleiotropic costs of trait exaggeration may arise through any number of other traits, making them hard to detect in phenotypic analyses. Stabilizing selection can be inferred, however, if genetic variance is greater among low- compared to high-fitness individuals. We extend a recently suggested approach to provide a direct test of a difference in genetic variance for a suite of cuticular hydrocarbons (CHCs) in Drosophila serrata. Despite strong directional sexual selection on these traits, genetic variance differed between high- and low-fitness individuals and was greater among the low-fitness males for seven of eight CHCs, significantly more than expected by chance. Univariate tests of a difference in genetic variance were nonsignificant but likely have low power. Our results suggest that further CHC exaggeration in D. serrata in response to sexual selection is limited by pleiotropic costs mediated through other traits. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
Hackerott, João A.; Bakhoday Paskyabi, Mostafa; Reuder, Joachim; de Oliveira, Amauri P.; Kral, Stephan T.; Marques Filho, Edson P.; Mesquita, Michel dos Santos; de Camargo, Ricardo
2017-11-01
We discuss scalar similarities and dissimilarities based on analysis of the dissipation terms in the variance budget equations, considering the turbulent kinetic energy and the variances of temperature, specific humidity and specific CO_2 content. For this purpose, 124 high-frequency sampled segments are selected from the Boundary Layer Late Afternoon and Sunset Turbulence experiment. The consequences of dissipation similarity in the variance transport are also discussed and quantified. The results show that, for the convective atmospheric surface layer, the non-dimensional dissipation terms can be expressed in the framework of Monin-Obukhov similarity theory and are independent of whether the variable is temperature or moisture. The scalar similarity in the dissipation term implies that the characteristic scales of the atmospheric surface layer can be estimated from the respective rate of variance dissipation, the characteristic scale of temperature, and the dissipation rate of temperature variance.
Jiang, Yu; Yang, Jiacheng; Gagné, Stéphanie; Chan, Tak W.; Thomson, Kevin; Fofie, Emmanuel; Cary, Robert A.; Rutherford, Dan; Comer, Bryan; Swanson, Jacob; Lin, Yue; Van Rooy, Paul; Asa-Awuku, Akua; Jung, Heejung; Barsanti, Kelley; Karavalakis, Georgios; Cocker, David; Durbin, Thomas D.; Miller, J. Wayne; Johnson, Kent C.
2018-06-01
Knowledge of black carbon (BC) emission factors from ships is important from human health and environmental perspectives. A study of instruments measuring BC and of fuels typically used in marine operation was carried out on a small marine engine. Six analytical methods measured the BC emissions in the exhaust of the marine engine operated at two load points (25% and 75%) while burning one of three fuels: a distillate marine (DMA), a low-sulfur residual marine (RMB-30) and a high-sulfur residual marine (RMG-380). The average emission factors across all instruments increased from 0.08 to 1.88 g BC/kg fuel in going from 25% to 75% load. An analysis of variance (ANOVA) tested BC emissions against instrument, load, and combined fuel properties and showed that both engine load and fuel had a statistically significant impact on BC emission factors. While BC emissions were affected by the fuels used, none of the fuel properties investigated (sulfur content, viscosity, carbon residue and CCAI) was a primary driver of BC emissions. Of the two residual fuels, RMB-30, with the lower sulfur content, lower viscosity and lower residual carbon, had the highest BC emission factors. BC emission factors determined with the different instruments showed a good correlation with the PAS values, with correlation coefficients R² > 0.95. A key finding of this research is that the relative BC values measured were mostly independent of load and fuel, except for some instruments in certain fuel and load combinations.
Eastman-Mueller, Heather P; Oswalt, Sara B
2017-10-01
To conduct a trend analysis of Pap test practices, Pap test results, and related women's services and guidelines of college health centers. College health centers that participated in the annual ACHA Pap Test and STI (sexually transmitted infection) Survey in the years 2004-2014 (n ranged from 127 to 181, depending on year). Descriptive analyses are presented, with ANOVAs (analyses of variance) and chi-square tests calculated to examine trends over time. The number of Pap tests significantly decreased over time; however, the percentages of normal and HSIL (high-grade squamous intraepithelial lesion) results did not vary. Availability of conventional cytology slides and of cryotherapy was significantly associated with year. Over time, college health centers' guidelines for the initiation of Pap testing evolved to conform consistently to national recommendations for cervical screening. The results indicate most college health centers are following the current national guidelines regarding Pap testing for young adult women.
Continuous-Time Mean-Variance Portfolio Selection under the CEV Process
Ma, Hui-qiang
2014-01-01
We consider a continuous-time mean-variance portfolio selection model in which the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and the mean-variance efficient frontier.
Directory of Open Access Journals (Sweden)
Goutsias John
2010-05-01
Full Text Available Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems consisting of a large number of reactions and molecular species. Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. Using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the
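Of the four techniques, Gauss-Hermite integration is the easiest to show compactly: for Gaussian uncertainty, a first-order variance-based sensitivity index is just nested quadrature. A self-contained toy follows; the model f is arbitrary, not the MAPK cascade of the paper:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def f(x1, x2):
    # Toy model with independent standard-normal inputs.
    # Exact values: Var(E[Y|X1]) = 1, Var(Y) = 1.59, so S1 ~ 0.629.
    return x1 + 0.5 * x2 ** 2 + 0.3 * x1 * x2

t, w = hermgauss(40)                 # Gauss-Hermite nodes and weights
x = np.sqrt(2.0) * t                 # rescale nodes for the N(0,1) density
w = w / np.sqrt(np.pi)               # weights now sum to 1

# E[Y | X1 = x] by quadrature over X2, evaluated at every node of X1.
cond = np.array([np.sum(w * f(xi, x)) for xi in x])

ey = np.sum(w * cond)                            # E[Y]
v1 = np.sum(w * (cond - ey) ** 2)                # Var(E[Y | X1])
ey2 = np.sum(w[:, None] * w[None, :] * f(x[:, None], x[None, :]) ** 2)
vy = ey2 - ey ** 2                               # Var(Y)
print(v1 / vy)                                   # first-order index S1
```

With 40 nodes per dimension the quadrature reproduces the exact index to many digits, whereas plain Monte Carlo would need on the order of 10^5 samples for comparable accuracy; the catch, as the abstract notes for all four techniques, is scaling this nesting to many uncertain parameters.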
DEFF Research Database (Denmark)
Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes
2017-01-01
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions, leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls.
Cohen, Joel E; Xu, Meng; Schuster, William S F
2012-09-25
Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor's law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density-mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship "variance-mass allometry" (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals.
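The algebra behind the prediction is a one-line composition of the two power laws; with TL exponent b and DMA exponent d, VMA inherits the exponent bd:

```latex
\mathrm{TL:}\quad \operatorname{Var}(N) = a\,[\mathrm{E}(N)]^{b}, \qquad
\mathrm{DMA:}\quad \mathrm{E}(N) = c\,M^{d}
\;\Longrightarrow\;
\operatorname{Var}(N) = a\,(c\,M^{d})^{b} = a\,c^{b}\,M^{bd}.
```

Here N is population density and M is mean individual body mass; the empirical test in the paper amounts to checking that the fitted VMA prefactor and exponent match a c^b and bd from the separately fitted TL and DMA relationships.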
On the expected value and variance for an estimator of the spatio-temporal product density function
DEFF Research Database (Denmark)
Rodríguez-Corté, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge
Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed-form expressions are derived for the Poisson case. A detailed simulation study is presented to compare our closed-form expression for the variance with estimated variances in Poisson cases. The simulation experiments show that the theoretical form of the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...
International Nuclear Information System (INIS)
Yamauchi, Hideto; Kitamura, Yasunori; Yamane, Yoshihiro; Misawa, Tsuyoshi; Unesaki, Hironobu
2003-01-01
Two types of variance-to-mean methods for subcritical systems driven by periodic, pulsed neutron sources were developed, and their experimental examination was performed with the Kyoto University Critical Assembly and a pulsed neutron generator. As a result, it was demonstrated that the prompt-neutron decay constant could be measured by these methods. From this, it was concluded that the present variance-to-mean methods have potential for use in a subcriticality monitor for future accelerator-driven systems operated in pulsed mode. (author)
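The classic variance-to-mean (Feynman-alpha) reduction behind such methods fits the ratio Y(T) = Var/Mean − 1 over a range of counting-gate widths T to Y∞[1 − (1 − e^(−αT))/(αT)], whose shape parameter α is the prompt-neutron decay constant. A hedged sketch for the continuous-source case (the paper's pulsed-source variants modify this formula, and the synthetic data below are a crude cluster-process stand-in for fission chains):

```python
import numpy as np
from scipy.optimize import curve_fit

def feynman_y(T, y_inf, alpha):
    """Feynman-alpha curve: variance-to-mean ratio minus one versus
    counting-gate width T."""
    return y_inf * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))

def y_from_times(times, T, t_max):
    """Empirical variance-to-mean ratio minus one at gate width T."""
    counts, _ = np.histogram(times, bins=np.arange(0.0, t_max, T))
    return counts.var() / counts.mean() - 1.0

# Crude stand-in for correlated detection times: Poisson "chains",
# each contributing Poisson-many detections with Exp(alpha) delays.
rng = np.random.default_rng(10)
alpha_true, t_max = 200.0, 200.0
parents = rng.uniform(0.0, t_max, 40_000)
kids = rng.poisson(2.0, parents.size)
times = np.repeat(parents, kids) + rng.exponential(1 / alpha_true, kids.sum())

gates = np.array([5e-4, 1e-3, 2e-3, 5e-3, 1e-2, 2e-2, 5e-2])
y = np.array([y_from_times(times, T, t_max) for T in gates])
(y_inf, alpha), _ = curve_fit(feynman_y, gates, y, p0=(1.0, 100.0))
print(alpha)   # recovers alpha_true (~200 1/s) from the gate statistics
```

For uncorrelated (purely Poisson) detections Y(T) would be flat at zero; it is the chain correlations that bend the curve and make α identifiable.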
Sharma, P.; Kumawat, J.; Kumar, S.; Sahu, K.; Verma, Y.; Gupta, P. K.; Rao, K. D.
2018-02-01
We report on a study to assess the feasibility of a swept source-based speckle variance optical coherence tomography setup for monitoring cutaneous microvasculature. Punch wounds created in the ear pinnae of diabetic mice were monitored at different times post wounding to assess the structural and vascular changes. It was observed that the epithelium thickness increases post wounding and continues to be thick even after healing. Also, the wound size assessed by vascular images is larger than the physical wound size. The results show that the developed speckle variance optical coherence tomography system can be used to monitor vascular regeneration during wound healing in diabetic mice.
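Speckle variance OCT itself is computationally simple: the vascular contrast is the per-pixel variance of the OCT intensity across N repeated B-scans of the same cross-section, which is high where moving blood decorrelates the speckle and low in static tissue. A toy numpy illustration; the array shapes and the Rayleigh speckle model are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def speckle_variance(bscans):
    """Inter-frame speckle variance: per-pixel variance of the OCT
    intensity across N repeated B-scans (axis 0)."""
    return np.var(bscans, axis=0)

# Static background (identical speckle in all 8 frames) plus a
# "vessel" region whose speckle changes from frame to frame.
rng = np.random.default_rng(11)
frames = np.tile(rng.rayleigh(1.0, (256, 256)), (8, 1, 1))
frames[:, 100:120, 100:140] = rng.rayleigh(1.0, (8, 20, 40))

sv = speckle_variance(frames)
print(sv[110, 120] > sv[50, 50])   # vessel pixels stand out (True)
```

In practice bulk-motion correction and thresholding are applied before the variance step, but the contrast mechanism is exactly this frame-to-frame variance.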
Adaptive increase in force variance during fatigue in tasks with low redundancy.
Singh, Tarkeshwar; S K M, Varadhan; Zatsiorsky, Vladimir M; Latash, Mark L
2010-11-26
We tested the hypothesis that fatigue of an element (a finger) leads to an adaptive neural strategy that involves an increase in force variability in the other finger(s) and an increase in the co-variation of commands to fingers, keeping total force variability relatively unchanged. We tested this hypothesis using a system with small redundancy (two fingers) and a marginally redundant system (with an additional constraint related to the total moment of force produced by the fingers; an unstable condition). The subjects performed accurate isometric rhythmic force-production tasks with the index (I) finger and with two fingers (index and middle, IM) pressing together, before and after a fatiguing exercise by the I finger. Fatigue led to a large increase in force variance in the I-finger task and a smaller increase in the IM task. We quantified two components of variance in the space of hypothetical commands to fingers (finger modes). Under both stable and unstable conditions, there was a large increase in the variance component that did not affect total force and a much smaller increase in the component that did. This resulted in an increase in an index of the force-stabilizing synergy. These results indicate that marginal redundancy is sufficient to allow the central nervous system to use an adaptive increase in variability to shield important variables from the effects of fatigue. We offer an interpretation of these results based on a recent development of the equilibrium-point hypothesis known as the referent configuration hypothesis. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
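The two variance components mentioned here come from projecting trial-to-trial deviations onto the direction that leaves total force unchanged (the "uncontrolled manifold", UCM) and onto its orthogonal complement. A hedged two-finger sketch with synthetic trial data; the published analysis works in finger-mode space rather than raw forces, so this is an illustration of the geometry only:

```python
import numpy as np

rng = np.random.default_rng(12)

# Trial-by-trial forces of two fingers at one time sample, with
# commands co-varying so that total force is stabilized.
common = rng.normal(0, 1.0, 500)
f1 = 5 + common + rng.normal(0, 0.3, 500)
f2 = 5 - common + rng.normal(0, 0.3, 500)
forces = np.c_[f1, f2]

# Project deviations onto the UCM (total force unchanged, direction
# (1,-1)/sqrt(2)) and onto its orthogonal complement ((1,1)/sqrt(2)).
dev = forces - forces.mean(axis=0)
v_ucm = np.var(dev @ np.array([1, -1]) / np.sqrt(2))
v_ort = np.var(dev @ np.array([1, 1]) / np.sqrt(2))

# Per-dimension synergy index: positive values indicate a
# force-stabilizing synergy (here both subspaces are 1-D).
dv = (v_ucm - v_ort) / ((v_ucm + v_ort) / 2)
print(v_ucm, v_ort, dv)
```

Fatigue increasing v_ucm much more than v_ort, as the abstract reports, raises dv: variability grows where it does not matter for the task.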
The pricing of long and short run variance and correlation risk in stock returns
Cosemans, M.
2011-01-01
This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk