Importance Sampling Variance Reduction in GRESS ATMOSIM
Wakeford, Daniel Tyler [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
2017-04-26
This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K{sub d} values (e.g., ASTM D 4646 and EPA 402-R-99-004A) explain neither how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
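The error-propagation argument above can be illustrated with a short Monte Carlo sketch (illustrative numbers and noise model, not the paper's simulations): the relative uncertainty of a Kd estimate blows up when almost none, or almost all, of the sorbate partitions to the sorbent, and is smallest near 50% partitioning.

```python
import numpy as np

rng = np.random.default_rng(0)

def kd_relative_sd(f_sorbed, n=200_000, sd=0.01):
    """Relative standard deviation of a batch Kd estimate when a fraction
    f_sorbed of the sorbate partitions to the sorbent. Concentrations are
    normalized (initial C0 = 1); sd is an assumed absolute analytical error
    on each concentration measurement."""
    cw_true = 1.0 - f_sorbed
    c0 = 1.0 + sd * rng.standard_normal(n)       # measured initial concentration
    cw = cw_true + sd * rng.standard_normal(n)   # measured final aqueous concentration
    kd = (c0 - cw) / cw                          # Kd up to the fixed V/m constant
    return kd.std() / abs(kd.mean())

for f in (0.05, 0.50, 0.95):
    print(f"fraction sorbed {f:.2f}: relative SD of Kd ~ {kd_relative_sd(f):.2f}")
```

The difference (c0 - cw) dominates the error at low partitioning, while the small cw denominator dominates at high partitioning; adjusting the solution:sorbent ratio toward roughly 50% partitioning keeps both in check.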
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Ashton M Verdery
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments, that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
Improved variance estimation along sample eigenvectors
Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk
2009-01-01
Second-order statistics estimates in the form of sample eigenvalues and sample eigenvectors give a suboptimal description of the population density. So far, attempts have been made only to reduce the bias in the sample eigenvalues. However, because the sample eigenvectors differ from the population
Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse
Hao Qin; Xin Sun; Jun Yan; Qi-Ming Hou; Zhong Ren; Kun Zhou
2016-01-01
In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.
Research on variance of subnets in network sampling
Qi Gao; Xiaoting Li; Feng Pan
2014-01-01
In recent research on network sampling, some sampling concepts are misunderstood, and the variance of subnets is not taken into account. We propose the correct definition of the sample and sampling rate in network sampling, as well as the formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, a random network and a small-world network to explore the variance in network sampling. As proved by the results, snowball sampling obtains the most variance of subnets, but does well in capturing the network structure. The variances of networks sampled by the hub and random strategies are much smaller. The hub strategy performs well in reflecting the properties of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.
An Analysis of Variance Framework for Matrix Sampling.
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
Applications of non-parametric statistics and analysis of variance on sample variances
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in the usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
Estimating Income Variances by Probability Sampling: A Case Study
Akbar Ali Shah
2010-08-01
The main focus of the study is to estimate variability in the income distribution of households by conducting a survey. The variances in income distribution have been calculated by probability sampling techniques. The variances are compared and relative gains are also obtained. It is concluded that the income distribution has improved compared to the first Household Income and Expenditure Survey (HIES) conducted in Pakistan in 1993-94.
Meta-analysis of ratios of sample variances.
Prendergast, Luke A; Staudte, Robert G
2016-05-20
When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, that require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equal-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement, and the validity of the approaches is reinforced by simulation studies and an application to a real data set.
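As a rough sketch of the idea (a generic fixed-effect pooling of log variance ratios under normality, not the authors' exact estimators), per-study log ratios ln(s1²/s2²) can be combined with inverse-variance weights, using the standard large-sample approximation Var[ln s²] ≈ 2/(n − 1) per arm:

```python
import math

def log_variance_ratio(s1_sq, n1, s2_sq, n2):
    """Log ratio of two sample variances and its approximate large-sample
    sampling variance under normality: 2/(n1 - 1) + 2/(n2 - 1)."""
    y = math.log(s1_sq / s2_sq)
    v = 2.0 / (n1 - 1) + 2.0 / (n2 - 1)
    return y, v

def pooled_log_ratio(studies):
    """Fixed-effect inverse-variance pooling of per-study log ratios.
    studies: iterable of (s1_sq, n1, s2_sq, n2) tuples."""
    ys, ws = [], []
    for s1_sq, n1, s2_sq, n2 in studies:
        y, v = log_variance_ratio(s1_sq, n1, s2_sq, n2)
        ys.append(y)
        ws.append(1.0 / v)
    y_bar = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    return y_bar, se

# illustrative made-up study data: (s1^2, n1, s2^2, n2)
studies = [(4.1, 30, 3.9, 28), (5.0, 50, 2.4, 45), (3.2, 25, 3.0, 22)]
y_bar, se = pooled_log_ratio(studies)
lo, hi = math.exp(y_bar - 1.96 * se), math.exp(y_bar + 1.96 * se)
print(f"pooled variance ratio {math.exp(y_bar):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A confidence interval on the back-transformed ratio that excludes 1 is evidence against the equal-variances assumption.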
Bounds for Tail Probabilities of the Sample Variance
V. Bentkus
2009-01-01
We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in mind in auditing, as well as in processing data related to the environment.
Variance optimal sampling based estimation of subset sums
Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel
2008-01-01
From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\log k)$ time, which is optimal even on the word RAM.
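The VarOpt scheme itself is intricate, but its close predecessor, priority sampling (a slightly less variance-optimal relative; this sketch is a generic illustration, not the paper's algorithm), conveys the idea in a few lines: each item draws u uniform on (0, 1) and gets priority w/u, the k highest-priority items are kept, and each kept item's weight is estimated as max(w, τ), where τ is the (k+1)-th largest priority. This keeps every subset-sum estimate unbiased.

```python
import random

def priority_sample(items, k, seed=1):
    """Priority sampling: keep the k items with the largest priorities w/u,
    u ~ U(0, 1), and record tau, the (k+1)-th largest priority. Returns a
    dict key -> weight estimate max(w, tau) for the kept items."""
    rng = random.Random(seed)
    prio = sorted(((w / rng.random(), key, w) for key, w in items), reverse=True)
    tau = prio[k][0] if len(prio) > k else 0.0
    return {key: max(w, tau) for _, key, w in prio[:k]}

def estimate_subset_sum(sample, subset_keys):
    """Sum the weight estimates of sampled items that fall in the subset."""
    return sum(est for key, est in sample.items() if key in subset_keys)

weights = [100, 50, 20, 5, 5, 2, 2, 1, 1, 1]
items = [(f"item{i}", w) for i, w in enumerate(weights)]
sample = priority_sample(items, k=5)
# estimate of the total weight (true value: 187)
print(estimate_subset_sum(sample, {key for key, _ in items}))
```

Averaged over many random seeds, the subset-sum estimates converge to the true subset weights; VarOpt improves on this by also minimizing the average variance per subset size.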
Sample variance and Lyman-alpha forest transmission statistics
Rollinde, Emmanuel; Schaye, Joop; Pâris, Isabelle; Petitjean, Patrick
2012-01-01
We compare the observed probability distribution function of the transmission in the H I Lyman-alpha forest, measured from the UVES 'Large Programme' sample at redshifts z=[2,2.5,3], to results from the GIMIC cosmological simulations. Our measured values for the mean transmission and its PDF are in good agreement with published results. Errors on statistics measured from high-resolution data are typically estimated using bootstrap or jack-knife resampling techniques after splitting the spectra into chunks. We demonstrate that these methods tend to underestimate the sample variance unless the chunk size is much larger than is commonly the case. We therefore estimate the sample variance from the simulations. We conclude that observed and simulated transmission statistics are in good agreement; in particular, we do not require the temperature-density relation to be 'inverted'.
[Variance estimation considering multistage sampling design in multistage complex sample analysis].
Li, Yichong; Zhao, Yinjun; Wang, Limin; Zhang, Mei; Zhou, Maigeng
2016-03-01
Multistage sampling is a frequently used method in random sampling surveys in public health. Clustering, i.e. dependence between observations, often exists in the samples generated by multistage sampling, which are therefore called complex samples. Sampling error may be underestimated and the probability of type I error may be increased if the multistage sample design is not taken into consideration in the analysis. As the variance (error) estimator for a complex sample is often complicated, statistical software usually adopts the ultimate cluster variance estimate (UCVE) to approximate it, which simply assumes that the sample comes from one-stage sampling. However, with an increased sampling fraction of primary sampling units, the contribution from subsequent sampling stages is no longer trivial, and the ultimate cluster variance estimate may therefore lead to invalid variance estimation. This paper summarizes a method of variance estimation that accounts for the multistage sampling design. Its performance is compared with that of UCVE by simulating random sampling under different sampling schemes using real-world data. Simulation showed that as the primary sampling unit (PSU) sampling fraction increased, UCVE tended to generate increasingly biased estimates, whereas accurate estimates were obtained using the method that accounts for the multistage sampling design.
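A textbook two-stage SRS variance estimator with equal-sized PSUs (a generic Cochran-style illustration, not the paper's exact method) makes the PSU-fraction effect explicit: UCVE keeps only the between-PSU term, and the within-PSU term it drops is weighted by f1, so the omission matters more as f1 grows.

```python
import statistics

def ucve(psu_means):
    """Ultimate cluster variance estimate of the overall mean: between-PSU
    variability only, as if the PSU means came from one-stage sampling."""
    return statistics.variance(psu_means) / len(psu_means)

def two_stage_variance(psu_means, psu_within_vars, m, f1, f2):
    """Two-stage SRS variance of the sample mean with equal-sized PSUs:
    n PSUs sampled at fraction f1, m elements per PSU at fraction f2.
        v = (1 - f1) * s_b^2 / n  +  f1 * (1 - f2) * s_w^2 / (n * m)"""
    n = len(psu_means)
    s_b2 = statistics.variance(psu_means)    # between-PSU sample variance
    s_w2 = sum(psu_within_vars) / n          # mean within-PSU sample variance
    return (1 - f1) * s_b2 / n + f1 * (1 - f2) * s_w2 / (n * m)

# illustrative made-up PSU summaries
psu_means = [10.2, 12.1, 11.4, 13.0]
psu_within_vars = [4.0, 5.5, 6.1, 5.2]
for f1 in (0.05, 0.5, 0.9):
    v = two_stage_variance(psu_means, psu_within_vars, m=10, f1=f1, f2=0.2)
    print(f"f1 = {f1:.2f}: two-stage v = {v:.4f}, UCVE = {ucve(psu_means):.4f}")
```

At f1 → 0 the two estimators coincide, which is why UCVE works well for small PSU sampling fractions and degrades as f1 increases.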
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance-covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values, and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples that is computationally feasible is limited. The objective of this study was to compare the convergence rates of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates, and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive; these made use of information either on the variance of the estimated breeding value together with the variance of the true breeding value minus the estimated breeding value, or on the covariance between the true and estimated breeding values.
Properties of realized variance under alternative sampling schemes
Oomen, R.C.A.
2006-01-01
This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative
Adewunmi, Adrian; Byrne, Mike
2008-01-01
This paper investigates the reduction of variance associated with a simulation output performance measure, using the Sequential Sampling method while applying minimum simulation replications, for a class of JIT (Just in Time) warehousing system called crossdocking. We initially used the Sequential Sampling method to attain a desired 95% confidence interval half-width of ±0.5 for our chosen performance measure (total usage cost, given the mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). From our results, we achieved a 95% confidence interval half-width of ±2.8 for our chosen performance measure (total usage cost, with an average mean value of 115,000 pounds). However, the Sequential Sampling method requires a very large number of simulation replications to reduce the variance of our simulation output value to the target level. Arena (version 11) simulation software was used to conduct this study.
LI Yan; SHI Zhou; WU Ci-fang; LI Feng; LI Hong-yi
2007-01-01
The acquisition of precise soil data representative of the entire survey area is a critical issue for many treatments, such as irrigation or fertilization, in precision agriculture. The aim of this study was to investigate the spatial variability of soil bulk electrical conductivity (ECb) in a coastal saline field and to design an optimized spatial sampling scheme for ECb based on a sampling design algorithm, the variance quad-tree (VQT) method. Soil ECb data were collected from the field at 20 m intervals in a regular grid scheme. The smooth contour map of the whole field was obtained by ordinary kriging interpolation; the VQT algorithm was then used to split the smooth contour map into the desired number of strata, and sampling locations can be selected within each stratum in subsequent sampling. The results indicated that the probability of choosing representative sampling sites was increased significantly by using the VQT method, with the sampling number being greatly reduced compared to the grid sampling design while retaining the same prediction accuracy. The advantage of the VQT method is that this scheme samples sparsely in fields where the spatial variability is relatively uniform and more intensively where the variability is large. Thus the sampling efficiency can be improved, facilitating an assessment methodology that can be applied in a rapid, practical and cost-effective manner.
Importance Sampling Variance Reduction for the Fokker-Planck Rarefied Gas Particle Method
Collyer, Benjamin; Lockerby, Duncan
2015-01-01
Models and methods that are able to accurately and efficiently predict the flows of low-speed rarefied gases are in high demand, due to the increasing ability to manufacture devices at micro and nano scales. One such model and method is a Fokker-Planck approximation to the Boltzmann equation, which can be solved numerically by a stochastic particle method. The stochastic nature of this method leads to noisy estimates of the thermodynamic quantities one wishes to sample when the signal is small in comparison to the thermal velocity of the gas. Recently, Gorji et al. have proposed a method which is able to greatly reduce the variance of the estimators, by creating a correlated stochastic process which acts as a control variate for the noisy estimates. However, there are potential difficulties involved when the geometry of the problem is complex, as the method requires the density to be solved for independently. Importance sampling is a variance reduction technique that has already been shown to successfully redu...
Meta-analysis with missing study-level sample variance data.
Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P
2016-07-30
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
Hickey, J.M.; Veerkamp, R.F.; Calus, M.P.L.; Mulder, H.A.; Thompson, R.
2009-01-01
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo
Hickey, J.M.; Veerkamp, R.F.; Calus, M.P.L.; Mulder, H.A.; Thompson, R.
2009-01-01
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sam
New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces
Sidick, Erkin
2010-01-01
Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).
Kalman filtering techniques for reducing variance of digital speckle displacement measurement noise
Donghui Li; Li Guo
2006-01-01
Target dynamics are assumed to be known in measuring digital speckle displacement. Use is made of a simple measurement equation, where measurement noise represents the effect of disturbances introduced in the measurement process. From these assumptions, a Kalman filter can be designed to reduce the variance of the measurement noise. An optical measurement and analysis system was set up, with which object motion with constant displacement and with constant velocity was measured to verify the validity of Kalman filtering techniques for reducing measurement noise variance.
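A minimal one-dimensional sketch of the idea (a generic constant-velocity Kalman filter with assumed noise levels, not the authors' optical setup): when the target dynamics match the model, filtering markedly reduces the error variance relative to the raw measurements.

```python
import numpy as np

def kalman_constant_velocity(z, dt=1.0, q=1e-4, r=0.25):
    """One-dimensional constant-velocity Kalman filter.
    z: noisy displacement measurements, q: process-noise intensity,
    r: measurement-noise variance. Returns filtered displacements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # displacement is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # white-noise-acceleration
                      [dt**2 / 2, dt]])          # process covariance
    x = np.array([z[0], 0.0])                    # state: [displacement, velocity]
    P = np.eye(2)
    filtered = []
    for zk in z:
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                      # innovation variance
        K = P @ H.T / S                          # Kalman gain
        x = x + (K * (zk - H @ x)).ravel()       # update with measurement zk
        P = (np.eye(2) - K @ H) @ P
        filtered.append(x[0])
    return np.array(filtered)

rng = np.random.default_rng(0)
truth = 0.1 * np.arange(200)                     # constant-velocity motion
z = truth + rng.normal(0.0, 0.5, size=200)       # speckle-like measurement noise
xf = kalman_constant_velocity(z)
print("raw error variance:     ", np.var(z - truth))
print("filtered error variance:", np.var(xf - truth))
```

The filtered error variance is well below the raw measurement-noise variance because the constant-velocity model lets the filter average information across the whole measurement history.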
Sample correlations of infinite variance time series models: an empirical and theoretical study
Jason Cohen
1998-01-01
When the elements of a stationary ergodic time series have finite variance, the sample correlation function converges (with probability 1) to the theoretical correlation function. What happens in the case where the variance is infinite? In certain cases, the sample correlation function converges in probability to a constant, but not always. If, within a class of heavy-tailed time series, the sample correlation functions do not converge to a constant, then more care must be taken in making inferences and in model selection on the basis of sample autocorrelations. We experimented with simulating various heavy-tailed stationary sequences in an attempt to understand what causes the sample correlation function to converge or not to converge to a constant. In two new cases, namely the sum of two independent moving averages and a random permutation scheme, we are able to provide theoretical explanations for a random limit of the sample autocorrelation function as the sample grows.
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, variation in the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).
Using the Superpopulation Model for Imputations and Variance Computation in Survey Sampling
Petr Novák
2012-03-01
This study is aimed at variance computation techniques for estimates of population characteristics based on survey sampling and imputation. We use the superpopulation regression model, which means that the target variable values for each statistical unit are treated as random realizations of a linear regression model with weighted variance. We focus on regression models with one auxiliary variable and no intercept, which have many applications and a straightforward interpretation in business statistics. Furthermore, we deal with cases where the estimates are not independent and thus the covariance must be computed. We also consider chained regression models with auxiliary variables as random variables instead of constants.
Estimation variance bounds of importance sampling simulations in digital communication systems
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
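A standard illustration of the variance question these bounds address (a generic Gaussian tail-probability example with exponential tilting, not the authors' communication-system setup; the shift mu_shift plays the role of the IS parameter to be chosen): importance sampling concentrates samples in the rare-event region and reweights them by the likelihood ratio, shrinking the estimator variance by orders of magnitude relative to direct Monte Carlo.

```python
import numpy as np

def tail_prob_is(t, mu_shift, n=100_000, seed=0):
    """Estimate p = P(Z > t) for Z ~ N(0, 1) by importance sampling:
    draw from the tilted density N(mu_shift, 1) and reweight by the
    likelihood ratio phi(x)/phi(x - mu_shift) = exp(-mu_shift*x + mu_shift^2/2).
    Returns (estimate, estimator variance)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu_shift, 1.0, n)
    w = np.exp(-mu_shift * x + 0.5 * mu_shift**2)
    h = (x > t) * w
    return h.mean(), h.var() / n

def tail_prob_mc(t, n=1_000_000, seed=0):
    """Direct Monte Carlo estimate of the same tail probability."""
    rng = np.random.default_rng(seed)
    h = (rng.normal(0.0, 1.0, n) > t).astype(float)
    return h.mean(), h.var() / n

t = 4.0                                   # true P(Z > 4) is about 3.17e-5
est_is, var_is = tail_prob_is(t, mu_shift=t)
est_mc, var_mc = tail_prob_mc(t)
print(f"IS : {est_is:.3e} (estimator variance {var_is:.1e})")
print(f"MC : {est_mc:.3e} (estimator variance {var_mc:.1e})")
```

Choosing mu_shift = t is a common heuristic; the bounds discussed in the abstract give a principled way to pick such IS parameters by minimizing an upper bound on the estimation variance.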
Using variance components to estimate power in a hierarchically nested sampling design.
Dzul, Maria C; Dixon, Philip M; Quist, Michael C; Dinsmore, Stephen J; Bower, Michael R; Wilson, Kevin P; Gaines, D Bailey
2013-01-01
We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). Sampling design for larval DHP included surveys (5 days each spring 2007-2009), events, and plots. Each survey was comprised of three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling
Salganik, Matthew J
2006-01-01
A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence...
Estimation of the biserial correlation and its sampling variance for use in meta-analysis.
Jacobs, Perke; Viechtbauer, Wolfgang
2017-06-01
Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
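For reference, the conversion the abstract relies on can be written in a few lines (a generic sketch of the standard formula under an underlying-normality assumption; the competing sampling-variance estimators evaluated in the paper are not reproduced here):

```python
from statistics import NormalDist

def biserial_from_point_biserial(r_pb, p):
    """Biserial correlation from the point-biserial r_pb, where p is the
    proportion of observations in one group. Assumes the dichotomy cuts
    an underlying normal variable at its p-quantile:
        r_b = r_pb * sqrt(p * (1 - p)) / phi(z_p)
    with z_p the standard-normal p-quantile and phi its density."""
    nd = NormalDist()
    z_p = nd.inv_cdf(p)
    return r_pb * (p * (1.0 - p)) ** 0.5 / nd.pdf(z_p)

# with a median split (p = 0.5) the biserial exceeds the point-biserial
# by the factor sqrt(0.25) / phi(0) ~ 1.2533
print(biserial_from_point_biserial(0.30, 0.5))  # ~0.376
```

Because sqrt(p(1 − p))/phi(z_p) ≥ 1.2533 for all p, the biserial coefficient always exceeds the point-biserial in magnitude, which is why the two must not be mixed without conversion in a meta-analysis.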
Neeraj Tiwari
2014-06-01
Under inclusion probability proportional to size (IPPS) sampling, the exact second-order inclusion probabilities are often very difficult to obtain, and hence the variance of the Horvitz-Thompson estimator and the Sen-Yates-Grundy estimate of that variance are difficult to compute. Researchers have therefore developed alternative variance estimators based on approximations of the second-order inclusion probabilities in terms of the first-order inclusion probabilities. We numerically compare the performance of the various alternative approximate variance estimators using the split method of sample selection.
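The quantities being approximated in the abstract above are straightforward to compute once the inclusion probabilities are in hand. A hedged sketch (function names ours) of the Horvitz-Thompson total and the Sen-Yates-Grundy variance estimate for a fixed-size design:

```python
import itertools

def ht_total(y, pi):
    """Horvitz-Thompson estimator of a population total from a sample."""
    return sum(yi / p for yi, p in zip(y, pi))

def syg_variance(y, pi, pi_pair):
    """Sen-Yates-Grundy variance estimate of the HT total.

    pi_pair[(i, j)] is the second-order inclusion probability of sample
    units i and j; these are exactly the hard-to-obtain quantities that
    the approximate estimators replace with functions of the first-order
    probabilities.
    """
    v = 0.0
    for i, j in itertools.combinations(range(len(y)), 2):
        pij = pi_pair[(i, j)]
        v += ((pi[i] * pi[j] - pij) / pij) * (y[i] / pi[i] - y[j] / pi[j]) ** 2
    return v
```

For example, with y = [10, 20], first-order probabilities 0.5 each, and joint probability 0.2, the HT total is 60 and the SYG variance estimate is 100.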
COMPARISON OF VARIANCE ESTIMATORS FOR THE RATIO ESTIMATOR BASED ON SMALL SAMPLE
秦怀振; 李莉莉
2001-01-01
This paper sheds light on an open problem posed by Cochran [1]. The two commonly used variance estimators v1(R̂) and v2(R̂) of the ratio estimator for the population ratio R, from a small sample selected by simple random sampling, are compared following the estimated-loss approach (see [2]). Under the superpopulation model in which the ratio estimator ȳ_R for the population mean Ȳ is the best linear unbiased estimator, necessary and sufficient conditions for each of v1(R̂) and v2(R̂) to dominate the other are obtained when the sampling fraction f is ignored. For a substantial f, several rigorous sufficient conditions for v2(R̂) to dominate v1(R̂) are derived.
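One conventional pairing of the two variance estimators scales the residual variance by the sample mean of x (v1) or by the known population mean (v2); exact definitions vary across texts, so treat the following as a hedged sketch with our own names rather than the paper's formulas:

```python
def ratio_variance_estimators(y, x, X_bar, f=0.0):
    """Ratio estimator R_hat = y_bar / x_bar with two variance estimates.

    v1 divides the residual variance by n * x_bar**2, v2 by n * X_bar**2,
    with (1 - f) the finite-population correction.
    """
    n = len(y)
    y_bar = sum(y) / n
    x_bar = sum(x) / n
    R_hat = y_bar / x_bar
    # residual variance about the fitted ratio line
    s_d2 = sum((yi - R_hat * xi) ** 2 for yi, xi in zip(y, x)) / (n - 1)
    v1 = (1.0 - f) * s_d2 / (n * x_bar ** 2)
    v2 = (1.0 - f) * s_d2 / (n * X_bar ** 2)
    return R_hat, v1, v2
```

When y is exactly proportional to x the residuals vanish and both estimates are zero; otherwise the two differ whenever the sample mean of x departs from the population mean.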
Ismet DOGAN
2015-10-01
Objective: Choosing the most efficient statistical test is one of the essential problems of statistics. Asymptotic relative efficiency is a notion that enables, in large samples, a quantitative comparison of two different tests of the same statistical hypothesis. The notion of the asymptotic efficiency of tests is more complicated than that of the asymptotic efficiency of estimates. This paper discusses the effect of sample size on the expected values and variances of non-parametric tests for two independent samples and determines the most efficient test for different sample sizes using the Fraser efficiency value. Material and Methods: Since calculating the power value when comparing tests is often impractical, using the asymptotic relative efficiency value is favorable. Asymptotic relative efficiency is an indispensable technique for comparing and ordering statistical tests in large samples. It is especially useful in nonparametric statistics, where there exist numerous heuristic tests such as the linear rank tests. In this study, the sample size ranges over 2 ≤ n ≤ 50. Results: In both balanced and unbalanced cases, it is found that, as the sample size increases, the expected values and variances of all the tests discussed in this paper increase as well. Additionally, considering the Fraser efficiency, the Mann-Whitney U test is found to be the most efficient of the non-parametric tests used for comparing two independent samples, regardless of their sizes. Conclusion: According to Fraser efficiency, the Mann-Whitney U test is the most efficient test.
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black-box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
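The paper's pairing of negatively correlated trajectories operates on whole Markov-chain paths by controlling the pseudorandom number generator inputs. A one-dimensional analogue of that idea, pushing antithetic uniforms u and 1 - u through the Poisson inverse CDF so the pair of draws is negatively correlated, can be sketched as follows (function names ours, not the authors' code):

```python
import math
import random

def poisson_inv_cdf(u, lam):
    """Invert the Poisson CDF by direct summation (fine for small lam)."""
    k = 0
    p = math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

def antithetic_pair_estimate(lam, n_pairs, seed=0):
    """Estimate E[N], N ~ Poisson(lam), by averaging antithetic pairs.

    Each pair (u, 1 - u) yields negatively correlated draws, so the
    pair average has lower variance than two independent draws.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        a = poisson_inv_cdf(u, lam)
        b = poisson_inv_cdf(1.0 - u, lam)
        total += 0.5 * (a + b)  # reduced-variance sample of the mean
    return total / n_pairs
```

The full method extends this coupling along entire simulated trajectories, which is why typical codes can be retrofitted by intercepting only the PRNG stream.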
Oranje, Andreas
2006-01-01
A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…
Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2012-02-15
heterogeneous doses. On an AMD 1090T processor, computing times of 38 and 21 sec were required to achieve an average statistical uncertainty of 2% within the prostate (1 x 1 x 1 mm³) and breast (0.67 x 0.67 x 0.8 mm³) CTVs, respectively. Conclusions: CMC supports an additional 38- to 60-fold improvement in average efficiency relative to conventional uncorrelated MC techniques, although some voxels experience no gain or even efficiency losses. However, for the two investigated case studies, the maximum variance within clinically significant structures was always reduced (on average by a factor of 6), generally in the therapeutic dose range. CMC takes only seconds to produce an accurate, high-resolution, low-uncertainty dose distribution for the low-energy PSB implants investigated in this study.
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N², where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
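As a toy analogue of the Girsanov-based schemes above (not the paper's control-force construction), importance sampling for a static normal tail probability already exhibits the mechanism: the sampling density is shifted toward the failure region, and each failure sample is reweighted by the likelihood ratio so the estimator stays unbiased.

```python
import math
import random

def failure_prob_is(a, n, shift, seed=0):
    """Importance-sampling estimate of P(Z > a) for Z ~ N(0, 1).

    Samples are drawn from the tilted density N(shift, 1); each hit is
    reweighted by the likelihood ratio phi(z) / phi(z - shift), which
    simplifies to exp(-shift*z + shift**2 / 2).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(shift, 1.0)
        if z > a:
            total += math.exp(-shift * z + 0.5 * shift * shift)
    return total / n
```

Choosing the shift at the failure threshold (the FORM-like design point in this scalar setting) makes failures common under the sampling density while keeping the weights well behaved, which is what drives the variance reduction.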
Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data
Greve, Douglas N; Svarer, Claus; Fisher, Patrick M
2014-01-01
intersubject variance than when volume smoothing was used. This translates into more than 4 times fewer subjects needed in a group analysis to achieve similarly powered statistical tests. Surface-based smoothing has less bias and variance because it respects cortical geometry by smoothing the PET data only...
Schmitt, J Eric; Lenroot, Rhoshel K; Ordaz, Sarah E; Wallace, Gregory L; Lerch, Jason P; Evans, Alan C; Prom, Elizabeth C; Kendler, Kenneth S; Neale, Michael C; Giedd, Jay N
2009-08-01
The role of genetics in driving intracortical relationships is an important question that has rarely been studied in humans. In particular, there are no extant high-resolution imaging studies on genetic covariance. In this article, we describe a novel method that combines classical quantitative genetic methodologies for variance decomposition with recently developed semi-multivariate algorithms for high-resolution measurement of phenotypic covariance. Using these tools, we produced correlational maps of genetic and environmental (i.e. nongenetic) relationships between several regions of interest and the cortical surface in a large pediatric sample of 600 twins, siblings, and singletons. These analyses demonstrated high, fairly uniform, statistically significant genetic correlations between the entire cortex and global mean cortical thickness. In agreement with prior reports on phenotypic covariance using similar methods, we found that mean cortical thickness was most strongly correlated with association cortices. However, the present study suggests that genetics plays a large role in global brain patterning of cortical thickness in this manner. Further, using specific gyri with known high heritabilities as seed regions, we found a consistent pattern of high bilateral genetic correlations between structural homologues, with environmental correlations more restricted to the same hemisphere as the seed region, suggesting that interhemispheric covariance is largely genetically mediated. These findings are consistent with the limited existing knowledge on the genetics of cortical variability as well as our prior multivariate studies on cortical gyri.
Rogan, Joanne C.; Keselman, H. J.
1977-01-01
The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
Sex estimation from modern American humeri and femora, accounting for sample variance structure
Boldsen, Jesper L; Milner, George R; Boldsen, Søren K
2015-01-01
Objectives: A new procedure for skeletal sex estimation based on humeral and femoral dimensions is presented, based on skeletons from the United States. The approach specifically addresses the problem that arises from a lack of variance homogeneity between the sexes, taking into account prior inf...
Increased taxon sampling greatly reduces phylogenetic error.
Zwickl, Derrick J; Hillis, David M
2002-08-01
Several authors have argued recently that extensive taxon sampling has a positive and important effect on the accuracy of phylogenetic estimates. However, other authors have argued that there is little benefit of extensive taxon sampling, and so phylogenetic problems can or should be reduced to a few exemplar taxa as a means of reducing the computational complexity of the phylogenetic analysis. In this paper we examined five aspects of study design that may have led to these different perspectives. First, we considered the measurement of phylogenetic error across a wide range of taxon sample sizes, and conclude that the expected error based on randomly selecting trees (which varies by taxon sample size) must be considered in evaluating error in studies of the effects of taxon sampling. Second, we addressed the scope of the phylogenetic problems defined by different samples of taxa, and argue that phylogenetic scope needs to be considered in evaluating the importance of taxon-sampling strategies. Third, we examined the claim that fast and simple tree searches are as effective as more thorough searches at finding near-optimal trees that minimize error. We show that a more complete search of tree space reduces phylogenetic error, especially as the taxon sample size increases. Fourth, we examined the effects of simple versus complex simulation models on taxonomic sampling studies. Although benefits of taxon sampling are apparent for all models, data generated under more complex models of evolution produce higher overall levels of error and show greater positive effects of increased taxon sampling. Fifth, we asked if different phylogenetic optimality criteria show different effects of taxon sampling. Although we found strong differences in effectiveness of different optimality criteria as a function of taxon sample size, increased taxon sampling improved the results from all the common optimality criteria. Nonetheless, the method that showed the lowest overall
Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier
2016-01-01
The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).
Female Scarcity Reduces Women's Marital Ages and Increases Variance in Men's Marital Ages
Daniel J. Kruger
2010-07-01
When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio (OSR), and means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S. Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex-specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.
Evidence of reduced mid-Holocene ENSO variance on the Great Barrier Reef, Australia
Leonard, N. D.; Welsh, K. J.; Lough, J. M.; Feng, Y.-x.; Pandolfi, J. M.; Clark, T. R.; Zhao, J.-x.
2016-09-01
Globally, coral reefs are under increasing pressure both through direct anthropogenic influence and increases in climate extremes. Understanding past climate dynamics that negatively affected coral reef growth is imperative for both improving management strategies and for modeling coral reef responses to a changing climate. The El Niño-Southern Oscillation (ENSO) is the primary source of climate variability at interannual timescales on the Great Barrier Reef (GBR), northeastern Australia. Applying continuous wavelet transforms to visually assessed coral luminescence intensity in massive Porites corals from the central GBR we demonstrate that these records reliably reproduce ENSO variance patterns for the period 1880-1985. We then applied this method to three subfossil corals from the same reef to reconstruct ENSO variance from ~5200 to 4300 years before present (yBP). We show that ENSO events were less extreme and less frequent after ~5200 yBP on the GBR compared to modern records. Growth characteristics of the corals are consistent with cooler sea surface temperatures (SSTs) between 5200 and 4300 yBP compared to both the millennia prior (~6000 yBP) and modern records. Understanding ENSO dynamics in response to SST variability at geological timescales will be important for improving predictions of future ENSO response to a rapidly warming climate.
Comments on “Estimating Income Variances by Probability Sampling: A Case Study by Shah and Aleem”
Jamal Abdul Nasir
2012-06-01
In this article, we offer comments on the recently published article "Shah, A.A. and Aleem, M. (2010). Estimating income variances by probability sampling: a case study. Pakistan Journal of Commerce and Social Sciences, 4(2), 194-201", which suggest improvements as well as criticism of the paper, and thereby contribute to the journal's repute and ranking.
Böhmer, L; Hildebrandt, G
1998-01-01
In contrast to the prevailing automated chemical analytical methods, classical microbiological techniques are subject to considerable material- and human-dependent sources of error. These effects must be objectively considered when assessing the reliability and representativeness of a test result. As an example of error analysis, the deviation of bacterial counts and the influence of the time of testing, the bacterial species involved (total bacterial count, coliform count), and the detection method used (pour-/spread-plate) were determined in repeated testing of parallel samples of pasteurized (stored for 8 days at 10 degrees C) and raw (stored for 3 days at 6 degrees C) milk. Separate characterization of the deviation components, namely the unavoidable random sampling error as well as the methodological error and variation between parallel samples, was made possible by means of a test design in which analysis of variance was applied. Based on the results of the study, the following conclusions can be drawn: 1. Immediately after filling, the total count deviation in milk mainly followed the Poisson distribution model and allowed a reliable hygiene evaluation of lots even with few samples. Accordingly, regardless of the examination procedure used, setting up parallel dilution series can be disregarded. 2. With increasing storage period, bacterial multiplication, especially of psychrotrophs, leads to unpredictable changes in the bacterial profile and density. With the increase in errors between samples, it is common to find packages which have acceptable microbiological quality but are already spoiled by the labeled expiry date. As a consequence, a uniform acceptance or rejection of the batch is seldom possible. 3. Because the contamination level of coliforms in certified raw milk mostly lies near the detection limit, coliform counts with high relative deviation are expected to be found in milk directly after filling. Since no bacterial multiplication takes place
A variance-reduced electrothermal Monte Carlo method for semiconductor device simulation
Muscato, Orazio; Di Stefano, Vincenza [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) Leibniz-Institut im Forschungsverbund Berlin e.V., Berlin (Germany)
2012-11-01
This paper is concerned with electron transport and heat generation in semiconductor devices. An improved version of the electrothermal Monte Carlo method is presented. This modification has better approximation properties due to reduced statistical fluctuations. The corresponding transport equations are provided and results of numerical experiments are presented.
Sampling returns for realized variance calculations: tick time or transaction time?
Griffin, J.E.; Oomen, R.C.A.
2008-01-01
This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time where prices are sampled with every transaction and tick time where prices are
Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C
2010-04-09
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference.
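Under the additional assumption of normally distributed measurements, the standard error of a sample variance has the familiar first-order form SE(s²) = s² · sqrt(2/(n-1)). The paper's derivation is more general, so the sketch below (function name ours) is only a simplified illustration of how such error bars shrink with sample size:

```python
import math

def variance_error_bar(sample):
    """Sample variance with an approximate standard error.

    Uses the normal-theory result SE(s^2) = s^2 * sqrt(2/(n-1));
    corrections for non-normal kurtosis are omitted here.
    """
    n = len(sample)
    mean = sum(sample) / n
    s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)
    se = s2 * math.sqrt(2.0 / (n - 1))
    return s2, se
```

Even this simplified form makes the paper's point visible: with n = 5 the standard error is roughly 70% of the variance itself, so comparisons of variances from small samples are very weakly powered.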
Reduced K-best sphere decoding algorithm based on minimum route distance and noise variance
Xinyu Mao; Jianjun Wu; Haige Xiang
2014-01-01
This paper focuses on reducing the complexity of the K-best sphere decoding (SD) algorithm for the detection of uncoded multiple-input multiple-output (MIMO) systems. The proposed algorithm uses threshold pruning to cut nodes whose partial Euclidean distances (PEDs) exceed the threshold. Both the known noise value and the unknown noise value are considered in generating the threshold, which is the sum of the two. The known noise value is the smallest PED of signals in the detected layers. The unknown noise value is generated from the noise power, the quality of service (QoS), and the signal-to-noise ratio (SNR) bound. Simulation results show that by considering both noise values, the proposed algorithm achieves an efficient complexity reduction with little performance loss.
Athanasios Chasiotis
2014-04-01
We investigated the effect of the childhood context variables number of siblings (studies 1 and 2) and parental SES (study 2) on implicit parenting motivation across six cultural samples, including Africa (2x Cameroon), Asia (PR China), Europe (2x Germany), and Latin America (Costa Rica). Implicit parenting motivation was assessed using an instrument measuring implicit motives (OMT, Operant Multimotive Test; Kuhl and Scheffer, 2001). Replicating and extending results from previous studies, regression analyses and structural equation models show that the number of siblings and parental SES explain a large amount of cultural variance, ranging from 64% to 82% of the cultural variance observed in implicit parenting motivation. Results are discussed within the framework of evolutionary developmental psychology.
Eack, Shaun M; Pogue-Geile, Michael F; Greeno, Catherine G; Keshavan, Matcheri S
2009-10-01
The Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) is a key measure of social cognition recommended by the MATRICS committee. While the psychometric properties of the MSCEIT appear strong, previous evidence suggested its factor structure may have shifted when applied to schizophrenia patients, posing important implications for cross-group comparisons. Using multi-group confirmatory factor analysis, we explicitly tested the factorial invariance of the MSCEIT across schizophrenia (n=64) and two normative samples (n=2099 and 451). Results indicated that the factor structure of the MSCEIT was significantly different between the schizophrenia and normative samples. Implications for future research are discussed.
Tipton, Elizabeth; Pustejovsky, James E.
2015-01-01
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
A software sampling frequency adaptive algorithm for reducing spectral leakage
PAN Li-dong; WANG Fei
2006-01-01
Spectral leakage caused by synchronous error in a nonsynchronous sampling system is an important cause that reduces the accuracy of spectral analysis and harmonic measurement.This paper presents a software sampling frequency adaptive algorithm that can obtain the actual signal frequency more accurately,and then adjusts sampling interval base on the frequency calculated by software algorithm and modifies sampling frequency adaptively.It can reduce synchronous error and impact of spectral leakage;thereby improving the accuracy of spectral analysis and harmonic measurement for power system signal where frequency changes slowly.This algorithm has high precision just like the simulations show,and it can be a practical method in power system harmonic analysis since it can be implemented easily.
Grigorenko, Elena L; Geiser, Christian; Slobodskaya, Helena R; Francis, David J
2010-12-01
A large community-based sample of Russian youths (n = 841, age M = 13.17 years, SD = 2.51) was assessed with the Child Behavior Checklist (mothers and fathers separately), Teacher's Report Form, and Youth Self-Report. The multiple indicator-version of the correlated trait-correlated method minus one, or CT-C(M - 1), model was applied to analyze (a) the convergent and divergent validity of these instruments in Russia, (b) the degree of trait-specificity of rater biases, and (c) potential predictors of rater-specific effects. As expected, based on the published results from different countries and in different languages, the convergent validity of the instruments was rather high between mother and father reports, but rather low for parent, teacher, and self-reports. For self- and teacher reports, rater-specific effects were related to age and gender of the children for some traits. These results, once again, attest to the importance of incorporating information from multiple observers when psychopathological traits are evaluated in children and adolescents.
Henn, Julian; Meindl, Kathrin
2015-03-01
Statistical tests are applied for the detection of systematic errors in data sets from least-squares refinements or other residual-based reconstruction processes. Samples of the residuals of the data are tested against the hypothesis that they belong to the same distribution. For this it is necessary that they show the same mean values and variances within the limits given by statistical fluctuations. When the samples differ significantly from each other, they are not from the same distribution within the limits set by the significance level. Therefore they cannot originate from a single Gaussian function in this case. It is shown that a significance cutoff results in exactly this case. Significance cutoffs are still frequently used in charge-density studies. The tests are applied to artificial data with and without systematic errors and to experimental data from the literature.
Fixed effects analysis of variance
Fisher, Lloyd; Birnbaum, Z W; Lukacs, E
1978-01-01
Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthologonal designs; and multiple regression analysi
Khewal Bhupendra Kesur
2013-01-01
Full Text Available This paper examines the application of Latin Hypercube Sampling (LHS and Antithetic Variables (AVs to reduce the variance of estimated performance measures from microscopic traffic simulators. LHS and AV allow for a more representative coverage of input probability distributions through stratification, reducing the standard error of simulation outputs. Two methods of implementation are examined, one where stratification is applied to headways and routing decisions of individual vehicles and another where vehicle counts and entry times are more evenly sampled. The proposed methods have wider applicability in general queuing systems. LHS is found to outperform AV, and reductions of up to 71% in the standard error of estimates of traffic network performance relative to independent sampling are obtained. LHS allows for a reduction in the execution time of computationally expensive microscopic traffic simulators as fewer simulations are required to achieve a fixed level of precision with reductions of up to 84% in computing time noted on the test cases considered. The benefits of LHS are amplified for more congested networks and as the required level of precision increases.
Reducing Contingency through Sampling at the Luckey FUSRAP Site - 13186
Frothingham, David; Barker, Michelle; Buechi, Steve [U.S. Army Corps of Engineers Buffalo District, 1776 Niagara St., Buffalo, NY 14207 (United States); Durham, Lisa [Argonne National Laboratory, Environmental Science Division, 9700 S. Cass Ave., Argonne, IL 60439 (United States)
2013-07-01
Typically, the greatest risk in developing accurate cost estimates for the remediation of hazardous, toxic, and radioactive waste sites is the uncertainty in the estimated volume of contaminated media requiring remediation. Efforts to address this risk in the remediation cost estimate can result in large cost contingencies that are often considered unacceptable when budgeting for site cleanups. Such was the case for the Luckey Formerly Utilized Sites Remedial Action Program (FUSRAP) site near Luckey, Ohio, which had significant uncertainty surrounding the estimated volume of site soils contaminated with radium, uranium, thorium, beryllium, and lead. Funding provided by the American Recovery and Reinvestment Act (ARRA) allowed the U.S. Army Corps of Engineers (USACE) to conduct additional environmental sampling and analysis at the Luckey Site between November 2009 and April 2010, with the objective to further delineate the horizontal and vertical extent of contaminated soils in order to reduce the uncertainty in the soil volume estimate. Investigative work included radiological, geophysical, and topographic field surveys, subsurface borings, and soil sampling. Results from the investigative sampling were used in conjunction with Argonne National Laboratory's Bayesian Approaches for Adaptive Spatial Sampling (BAASS) software to update the contaminated soil volume estimate for the site. This updated volume estimate was then used to update the project cost-to-complete estimate using the USACE Cost and Schedule Risk Analysis process, which develops cost contingencies based on project risks. An investment of $1.1 M of ARRA funds for additional investigative work resulted in a reduction of 135,000 in-situ cubic meters (177,000 in-situ cubic yards) in the estimated base volume estimate. This refinement of the estimated soil volume resulted in a $64.3 M reduction in the estimated project cost-to-complete, through a reduction in the uncertainty in the contaminated soil
Shieh, Gwowen; Jan, Show-Li
2015-01-01
The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…
Bunning, Harriet; Bassett, Lee; Clowser, Christina; Rapkin, James; Jensen, Kim; House, Clarissa M; Archer, Catharine R; Hunt, John
2016-07-01
Sexual selection may cause dietary requirements for reproduction to diverge across the sexes and promote the evolution of different foraging strategies in males and females. However, our understanding of how the sexes regulate their nutrition and the effects that this has on sex-specific fitness is limited. We quantified how protein (P) and carbohydrate (C) intakes affect reproductive traits in male (pheromone expression) and female (clutch size and gestation time) cockroaches (Nauphoeta cinerea). We then determined how the sexes regulate their intake of nutrients when restricted to a single diet and when given dietary choice and how this affected expression of these important reproductive traits. Pheromone levels that improve male attractiveness, female clutch size and gestation time all peaked at a high daily intake of P:C in a 1:8 ratio. This is surprising because female insects typically require more P than males to maximize reproduction. The relatively low P requirement of females may reflect the action of cockroach endosymbionts that help recycle stored nitrogen for protein synthesis. When constrained to a single diet, both sexes prioritized regulating their daily intake of P over C, although this prioritization was stronger in females than males. When given the choice between diets, both sexes actively regulated their intake of nutrients at a 1:4.8 P:C ratio. The P:C ratio did not overlap exactly with the intake of nutrients that optimized reproductive trait expression. Despite this, cockroaches of both sexes that were given dietary choice generally improved the mean and reduced the variance in all reproductive traits we measured relative to animals fed a single diet from the diet choice pair. This pattern was not as strong when compared to the single best diet in our geometric array, suggesting that the relationship between nutrient balancing and reproduction is complex in this species.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty-derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated-rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
Downside Variance Risk Premium
Feunou, Bruno; Jahan-Parvar, Mohammad R.; Okou, Cédric
2015-01-01
We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
Modelling volatility by variance decomposition
Amado, Cristina; Teräsvirta, Timo
on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns...... illustrate the functioning and properties of our modelling strategy in practice. The results show that the long memory type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance....
Reducing sample complexity of polyclonal human autoantibodies by chromatofocusing.
Hagemann, Sascha; Faude, Alexander; Rabenstein, Monika; Balzer-Geldsetzer, Monika; Nölker, Carmen; Bacher, Michael; Dodel, Richard
2010-08-15
Chromatofocusing was performed in order to separate a polyclonal antigen-specific mixture of human immunoglobulins (IgGs) that would then allow for further analyses of as few different IgGs as possible. Because polyclonal IgGs only differ by amino acid sequence and possible post-translational modifications but not by molecular weight, we chose chromatofocusing for protein separation by different isoelectric points. We isolated antigen-specific IgGs from commercially available intravenous immunoglobulins (IVIG) using a combination of affinity- and size exclusion-chromatography and in order to reduce the complexity of the starting material IVIG was then replaced by single-donor plasmapheresis material. Using two-dimensional gel electrophoresis (2-DE), we observed a clear decrease in the number of different light and heavy chains in the chromatofocusing peak as compared to the starting material. In parallel, we monitored slight problems with the selected peak in isoelectric focusing as the first dimension of 2-DE, displayed in by the less proper focusing of the spots. When we tested whether IgGs were binding to their specific antigen after chromatofocusing, we were able to show that they were still in native conformation. In conclusion, we showed that chromatofocusing can be used as a first step in the analysis of mixtures of very similar proteins, e.g. polyclonal IgG preparations, in order to minimize the amount of different proteins in separated fractions in a reproducible way. Copyright 2010 Elsevier B.V. All rights reserved.
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. © 2011, The International Biometric Society.
Reducing Spatial Heterogeneity of MALDI Samples with Marangoni Flows During Sample Preparation
Lai, Yin-Hung; Cai, Yi-Hong; Lee, Hsun; Ou, Yu-Meng; Hsiao, Chih-Hao; Tsao, Chien-Wei; Chang, Huan-Tsung; Wang, Yi-Sheng
2016-08-01
This work demonstrates a method to prepare homogeneous distributions of analytes to improve data reproducibility in matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS). Natural-air drying processes normally result in unwanted heterogeneous spatial distributions of analytes in MALDI crystals and make quantitative analysis difficult. This study demonstrates that inducing Marangoni flows within drying droplets can significantly reduce the heterogeneity problem. The Marangoni flows are accelerated by changing substrate temperatures to create temperature gradients across droplets. Such hydrodynamic flows are analyzed semi-empirically. Using imaging mass spectrometry, changes of heterogeneity of molecules with the change of substrate temperature during drying processes are demonstrated. The observed heterogeneities of the biomolecules reduce as predicted Marangoni velocities increase. In comparison to conventional methods, drying droplets on a 5 °C substrate while keeping the surroundings at ambient conditions typically reduces the heterogeneity of biomolecular ions by 65%-80%. The observation suggests that decreasing substrate temperature during droplet drying processes is a simple and effective means to reduce analyte heterogeneity for quantitative applications.
National Center for Education Statistics (DHEW), Washington, DC.
A complex two-stage sample selection process was used in designing the National Longitudinal Study of the High School Class of 1972. The first-stage sampling frame used in the selection of schools was stratified by the following seven variables: public vs. private control, geographic region, grade 12 enrollment, proximity to institutions of higher…
Volkova, V. (Valeriya); Tronina, A.; Pogorelova, T.
2010-01-01
The distribution of the Hsu test statistic has been investigated in case when distributions of observed random variables di er from the normal law by methods of statistical simulation. The limiting statistic distributions have been approxi- mated for a number observation distribution laws. The investigation of Bartels and Wald-Wolfowitz test statistic distributions has been carried out in the case of the limited sample sizes.
Genetic heterogeneity of residual variance in broiler chickens
Hill William G
2006-11-01
Full Text Available Abstract Aims were to estimate the extent of genetic heterogeneity in environmental variance. Data comprised 99 535 records of 35-day body weights from broiler chickens reared in a controlled environment. Residual variance within dam families was estimated using ASREML, after fitting fixed effects such as genetic groups and hatches, for each of 377 genetically contemporary sires with a large number of progeny (> 100 males or females each. Residual variance was computed separately for male and female offspring, and after correction for sampling, strong evidence for heterogeneity was found, the standard deviation between sires in within variance amounting to 15–18% of its mean. Reanalysis using log-transformed data gave similar results, and elimination of 2–3% of outlier data reduced the heterogeneity but it was still over 10%. The correlation between estimates for males and females was low, however. The correlation between sire effects on progeny mean and residual variance for body weight was small and negative (-0.1. Using a data set bigger than any yet presented and on a trait measurable in both sexes, this study has shown evidence for heterogeneity in the residual variance, which could not be explained by segregation of major genes unless very few determined the trait.
Alternaria and Fusarium in Norwegian grains of reduced quality - a matched pair sample study
Kosiak, B.; Torp, M.; Skjerve, E.;
2004-01-01
The occurrence and geographic distribution of species belonging to the genera Alternaria and Fusarium in grains of reduced and of acceptable quality were studied post-harvest in 1997 and 1998. A total of 260 grain samples of wheat, barley and oats was analysed. The distribution of Alternaria...... and Fusarium spp. varied significantly in samples of reduced quality compared with acceptable samples. Alternaria spp. dominated in the acceptable samples with A. infectoria group as the most frequently isolated and most abundant species group of this genus while Fusarium spp. dominated in samples of reduced...... quality. The most frequently isolated Fusarium spp. from all samples were F avenaceum, E poae, F culmorum and E tricinctum. Other important toxigenic Fusarium spp. isolated were F graminearum and E equiseti. The infection levels of F graminearum and F culmorunt were significantly higher in the samples...
Influence of Family Structure on Variance Decomposition
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...
Influence of Family Structure on Variance Decomposition
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...
Kisielowski, C; Frei, H; Specht, P; Sharp, I D; Haber, J A; Helveg, S
2017-01-01
This article summarizes core aspects of beam-sample interactions in research that aims at exploiting the ability to detect single atoms at atomic resolution by mid-voltage transmission electron microscopy. Investigating the atomic structure of catalytic Co3O4 nanocrystals underscores how indispensable it is to rigorously control electron dose rates and total doses to understand native material properties on this scale. We apply in-line holography with variable dose rates to achieve this goal. Genuine object structures can be maintained if dose rates below ~100 e/Å(2)s are used and the contrast required for detection of single atoms is generated by capturing large image series. Threshold doses for the detection of single atoms are estimated. An increase of electron dose rates and total doses to common values for high resolution imaging of solids stimulates object excitations that restructure surfaces, interfaces, and defects and cause grain reorientation or growth. We observe a variety of previously unknown atom configurations in surface proximity of the Co3O4 spinel structure. These are hidden behind broadened diffraction patterns in reciprocal space but become visible in real space by solving the phase problem. An exposure of the Co3O4 spinel structure to water vapor or other gases induces drastic structure alterations that can be captured in this manner.
Conversations across Meaning Variance
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Discrimination of frequency variance for tonal sequencesa)
Byrne, Andrew J.; Viemeister, Neal F.; Stellmack, Mark A.
2014-01-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σSTA...
Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling
Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas
2013-01-01
This paper proposes a method that reduces the computational complexity of signal reconstruction in single-channel nonuniform sampling while acquiring frequency sparse multi-band signals. Generally, this compressed sensing based signal acquisition allows a decrease in the sampling rate of frequency...
Expected Stock Returns and Variance Risk Premia
Bollerslev, Tim; Zhou, Hao
We find that the difference between implied and realized variation, or the variance risk premium, is able to explain more than fifteen percent of the ex-post time series variation in quarterly excess returns on the market portfolio over the 1990 to 2005 sample period, with high (low) premia...... predicting high (low) future returns. The magnitude of the return predictability of the variance risk premium easily dominates that afforded by standard predictor variables like the P/E ratio, the dividend yield, the default spread, and the consumption-wealth ratio (CAY). Moreover, combining the variance...... risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publically available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.
Tone, Kazuya; Fujisaki, Ryuichi; Yamazaki, Takashi; Makimura, Koichi
2017-01-01
Loop-mediated isothermal amplification (LAMP) is widely used for differentiating causative agents in infectious diseases. Melting curve analysis (MCA) in conjunction with the LAMP method reduces both the labor required to conduct an assay and contamination of the products. However, two factors influence the melting temperature (Tm) of LAMP products: an inconsistent concentration of Mg(2+) ion due to the precipitation of Mg2P2O7, and the guanine-cytosine (GC) content of the starting dumbbell-like structure. In this study, we investigated the influence of inorganic pyrophosphatase (PPase), an enzyme that inhibits the production of Mg2P2O7, on the Tm of LAMP products, and examined the correlation between the above factors and the Tm value using MCA. A set of LAMP primers that amplify the ribosomal DNA of the large subunit of Aspergillus fumigatus, Penicillium expansum, Penicillium marneffei, and Histoplasma capsulatum was designed, and the LAMP reaction was performed using serial concentrations of these fungal genomic DNAs as templates in the presence and absence of PPase. We compared the Tm values obtained from the PPase-free group and the PPase-containing group, and the relationship between the GC content of the theoretical starting dumbbell-like structure and the Tm values of the LAMP product from each fungus was analyzed. The range of Tm values obtained for several fungi overlapped in the PPase-free group. In contrast, in the PPase-containing group, the variance in Tm values was smaller and there was no overlap in the Tm values obtained for all fungi tested: the LAMP product of each fungus had a specific Tm value, and the average Tm value increased as the GC% of the starting dumbbell-like structure increased. The use of PPase therefore reduced the variance in the Tm value and allowed the differentiation of these pathogenic fungi using the MCA method.
Career satisfaction and retention of a sample of women physicians who work reduced hours.
Barnett, Rosalind C; Gareis, Karen C; Carr, Phyllis L
2005-03-01
To better understand the career satisfaction and factors related to retention of women physicians who work reduced hours and are in dual-earner couples in comparison to their full-time counterparts. Survey of a random sample of female physicians between 25 and 50 years of age working within 25 miles of Boston, whose names were obtained from the Board of Registration in Medicine in Massachusetts. Interviewers conducted a 60-minute face-to-face closed-ended interview after interviewees completed a 20-minute mailed questionnaire. Fifty-one full-time physicians and 47 reduced hours physicians completed the study; the completion rate was 49.5%. The two groups were similar in age, years as a physician, mean household income, number of children, and presence of an infant in the home. Reduced hours physicians in this sample had a different relationship to experiences in the family than full-time physicians. (1) When reduced hours physicians had low marital role quality, there was an associated lower career satisfaction; full-time physicians report high career satisfaction regardless of their marital role quality. (2) When reduced hours physicians had low marital role or parental role quality, there was an associated higher intention to leave their jobs than for full-time physicians; when marital role or parental role quality was high, there was an associated lower intention to leave their jobs than for full-time physicians. (3) When reduced hours physicians perceived that work interfering with family was high, there was an associated greater intention to leave their jobs that was not apparent for full-time physicians. Women physicians in this sample who worked reduced hours had stronger relationships between family experiences (marital and parental role quality and work interference with family) and professional outcomes than had their full-time counterparts. Both career satisfaction and intention to leave their employment are correlated with the quality of home life for
Elzerbi, Catherine; Donoghue, Kim; Boniface, Sadie; Drummond, Colin
2017-06-29
We adopt a comparative framework to measure the extent to which variance in the efficacy of alcohol brief interventions to reduce hazardous and harmful drinking at ≤5-, 6-, and 12-month follow-up in emergency department settings can be determined by differences between study populations (targeted injury and noninjury specific). A systematic review and meta-analysis of randomized controlled trials published before September 2016 was undertaken. Twenty-three high-quality and methodologically similar randomized controlled trials were eligible, with a total of 15,173 participants included. The primary outcome measure was the efficacy of brief intervention compared with a control group in reducing the quantity of alcohol consumed. An inverse variance model was applied to measure the treatment effect as standardized mean differences between brief intervention and control groups. At 6-month follow-up, an effect in favor of brief intervention over control was identified for targeted injury studies (standardized mean difference=-0.10; 95% confidence interval [CI] -0.17 to -0.02; I²=0%). For pooled noninjury-specific studies, small benefits of brief intervention were evident at ≤5-month follow-up (standardized mean difference=-0.15; 95% CI -0.24 to -0.07; I²=0%), at 6-month follow-up (standardized mean difference=-0.08; 95% CI -0.14 to -0.01; I²=1%), and at 12-month follow-up (standardized mean difference=-0.08; 95% CI -0.15 to -0.01; I²=0%). Meta-analysis identified noninjury-specific studies as associated with a better response to brief intervention than targeted injury studies. However, the inclusion of injured patients alongside noninjured ones in the experimental and control groups of noninjury-specific studies limited the interpretation of this finding. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
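The fixed-effect inverse variance model used above weights each study by the reciprocal of its variance, so more precise trials contribute more to the pooled standardized mean difference. A minimal sketch of that pooling step; the study-level effects and variances below are hypothetical, not the trial data from this review:

```python
import math

def pooled_smd(effects, variances):
    """Fixed-effect inverse-variance pooling of standardized mean
    differences: weight each study by 1/variance, then report the
    pooled estimate with a 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    w_sum = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / w_sum
    se = math.sqrt(1.0 / w_sum)  # standard error of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical study-level SMDs and variances (illustration only):
est, (lo, hi) = pooled_smd([-0.12, -0.08, -0.15], [0.004, 0.006, 0.009])
# est ≈ -0.114, with a 95% CI excluding zero
```

The same machinery, applied per follow-up window, yields pooled effects of the kind reported in the abstract.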
Reducing sample size in experiments with animals: historical controls and related strategies.
Kramer, Matthew; Font, Enrique
2017-02-01
Reducing the number of animal subjects used in biomedical experiments is desirable for ethical and practical reasons. Previous reviews of the benefits of reducing sample sizes have focused on improving experimental designs and methods of statistical analysis, but reducing the size of control groups has been considered rarely. We discuss how the number of current control animals can be reduced, without loss of statistical power, by incorporating information from historical controls, i.e. subjects used as controls in similar previous experiments. Using example data from published reports, we describe how to incorporate information from historical controls under a range of assumptions that might be made in biomedical experiments. Assuming more similarities between historical and current controls yields higher savings and allows the use of smaller current control groups. We conducted simulations, based on typical designs and sample sizes, to quantify how different assumptions about historical controls affect the power of statistical tests. We show that, under our simulation conditions, the number of current control subjects can be reduced by more than half by including historical controls in the analyses. In other experimental scenarios, control groups may be unnecessary. Paying attention to both the function and to the statistical requirements of control groups would result in reducing the total number of animals used in experiments, saving time, effort and money, and bringing research with animals within ethically acceptable bounds. © 2015 Cambridge Philosophical Society.
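The core claim above, that pooling historical controls with a smaller current control group can preserve power, can be sketched with a small Monte Carlo under the strongest assumption discussed (historical and current controls drawn from the same distribution). All sample sizes and the effect size below are hypothetical:

```python
import random
import statistics

def sim_power(n_treat, n_ctrl, n_hist, effect, trials=2000, seed=1):
    """Monte Carlo sketch: power of a two-sample z test (unit variance
    assumed known) when n_hist historical controls are pooled with
    n_ctrl current controls. Assumes historical and current controls
    share the same distribution -- the strongest assumption in the text."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        treat = [rng.gauss(effect, 1.0) for _ in range(n_treat)]
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n_ctrl + n_hist)]
        diff = statistics.mean(treat) - statistics.mean(ctrl)
        se = (1.0 / n_treat + 1.0 / len(ctrl)) ** 0.5
        if abs(diff / se) > 1.96:  # two-sided test at alpha = 0.05
            hits += 1
    return hits / trials

# Halving the current control group while adding historical controls can
# maintain or improve power (numbers are illustrative, not from the paper):
p_full = sim_power(n_treat=20, n_ctrl=20, n_hist=0, effect=1.0)
p_hist = sim_power(n_treat=20, n_ctrl=10, n_hist=30, effect=1.0, seed=2)
```

Relaxing the shared-distribution assumption (e.g., down-weighting historical subjects) reduces the savings, which is the trade-off the review quantifies.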
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Neutrino mass without cosmic variance
LoVerde, Marilena
2016-01-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological datasets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias $b(k)$ and the linear growth parameter $f(k)$ inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on $b(k)$ and $f(k)$ continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via $b(k)$ and $f(k)$. The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high density limit, using multiple tracers allows cosmic variance to be beaten and the forecasted errors on neutrino mass shrink dramatically. In...
Shot noise reduced terahertz detection via spectrally post-filtered electro-optic sampling
Porer, Michael; Huber, Rupert
2016-01-01
In ultrabroadband terahertz electro-optic sampling, spectral filtering of the gate pulse can strongly reduce the quantum noise while the signal level is only weakly affected. The concept is tested for phase-matched electro-optic detection of field transients centered at 45 THz with 12-fs near-infrared gate pulses in AgGaS2. Our new approach increases the experimental signal-to-noise ratio by a factor of 3 compared to standard electro-optic sampling. Under certain conditions an improvement factor larger than 5 is predicted by our theoretical analysis.
Improving mid stream urine sampling: reducing labelling error and laboratory rejection.
Jakes, Adam; McCue, Eleanor; Cracknell, Alison
2014-01-01
A urine sample is vital in older patients with pyrexia or acute confusion, and commonly directs clinicians towards a source of infection. Not only can the organism be identified, but sensitivities to antibiotics can also guide prescribing. A high number of urine samples were not being processed on the medicine for older people wards at St. James's Hospital due to incomplete hand-written request forms not complying with trust policy. Previous attempts to re-educate staff had failed to improve acceptance rates. Rejected samples delay diagnosis, identification of organisms and subsequent sensitivities, as well as increasing staff workload. A total of 72 urine samples were audited from our wards in March 2013; 12 (17%) were rejected. Clinicians were notified of rejected samples within one to four days. An electronic-requesting system was implemented in April 2013. Once implemented, a further two data collection cycles of 72 urine samples were completed from the same wards. In December 2013, 55 (76%) were electronically requested and 17 (24%) hand-written. Four (5%) samples were rejected and were all hand-written. In August 2014, 61 (85%) were electronically requested and 11 (15%) hand-written. No samples were rejected. The electronic-requesting system has effectively reduced the number of rejected urine samples. No electronically requested samples were rejected, therefore 100% sample acceptance is achievable. It is more effective than re-educating staff alone and ensures requests meet trust policy. Clinicians were notified of a sample's rejection after one to four days. By this time patients may have started antibiotic therapy, decreasing the likelihood of isolating the causative organism in subsequent samples. All urine samples requested must meet a high standard and comply with trust policy in order to be processed. An electronic-requesting system removes errors of omission and ensures policy compliance, ultimately leading to improved patient care. Now our processes are
Wolicka, Dorota; Zdanowski, Marek K; Żmuda-Baranowska, Magdalena J; Poszytek, Anna; Grzesiak, Jakub
2014-01-01
We determined sulphate-reducing activities in media inoculated with soils and with kettle lake sediments in order to investigate their potential role in geomicrobiological processes in low-temperature, terrestrial maritime Antarctic habitats. Soil and sediment samples were collected in a glacier valley abandoned by Ecology Glacier during the last 30 years: from newly formed kettle lake sediment and from forefield soil derived from ground moraine. Liquid Postgate C and minimal media supplemented with various carbon sources as electron donors were inoculated with these samples and incubated for 8 weeks at 4°C. High rates of sulphate reduction were observed only in media inoculated with soil. No sulphate reduction was detected in media inoculated with kettle lake sediments. In culture media inoculated with soil samples, calcite and elemental sulphur deposits were observed, demonstrating that sulphate-reducing activity is associated with a potential for mineral formation in cold environments. Cells observed on scanning electron microscopy (SEM) micrographs of post-culture soil deposits could be responsible for the sulphate-reducing activity.
Discrimination of frequency variance for tonal sequences.
Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A
2014-12-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, as with the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG − σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the data.
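The ideal observer used as a benchmark above can be sketched directly: on each two-interval trial, compute the sample variance of each five-pulse sequence and choose the larger. The distribution parameters and trial counts below are hypothetical, chosen only to illustrate the Weber's-Law behavior (equal variance ratios give similar percent correct):

```python
import random
import statistics

def ideal_observer_pc(var_stan, var_sig, n_pulses=5, trials=5000, seed=7):
    """Sketch of the ideal observer for 2IFC variance discrimination:
    draw a standard and a signal sequence from zero-mean Gaussians with
    the given variances, compare their sample variances, and count the
    trial as correct when the signal interval's variance is larger."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        stan = [rng.gauss(0.0, var_stan ** 0.5) for _ in range(n_pulses)]
        sig = [rng.gauss(0.0, var_sig ** 0.5) for _ in range(n_pulses)]
        if statistics.variance(sig) > statistics.variance(stan):
            correct += 1
    return correct / trials

# Same ratio (var_sig - var_stan) / var_stan = 2 at two different
# standard variances yields similar percent correct:
pc_small = ideal_observer_pc(var_stan=1.0, var_sig=3.0, seed=7)
pc_large = ideal_observer_pc(var_stan=4.0, var_sig=12.0, seed=11)
```

Human listeners fall short of this observer, which is what motivates the degraded-IO model with added frequency-resolution and computational noise.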
¹⁰Be measurements at MALT using reduced-size samples of bulk sediments
Horiuchi, Kazuho, E-mail: kh@cc.hirosaki-u.ac.jp [Graduate School of Science and Technology, Hirosaki University, 3, Bunkyo-chou, Hirosaki, Aomori 036-8561 (Japan); Oniyanagi, Itsumi [Graduate School of Science and Technology, Hirosaki University, 3, Bunkyo-chou, Hirosaki, Aomori 036-8561 (Japan); Wasada, Hiroshi [Institute of Geology and Paleontology, Graduate school of Science, Tohoku University, 6-3, Aramaki Aza-Aoba, Aoba-ku, Sendai 980-8578 (Japan); Matsuzaki, Hiroyuki [MALT, School of Engineering, University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-0032 (Japan)
2013-01-15
In order to establish ¹⁰Be measurements on reduced-size (1-10 mg) samples of bulk sediments, we investigated four different pretreatment designs using lacustrine and marginal-sea sediments and the AMS system of the Micro Analysis Laboratory, Tandem accelerator (MALT) at the University of Tokyo. The ¹⁰Be concentrations obtained from the samples of 1-10 mg agreed within a precision of 3-5% with the values previously determined using corresponding ordinary-size (~200 mg) samples and the same AMS system. This demonstrates that reliable determinations of ¹⁰Be with milligram levels of recent bulk sediments are possible at MALT. On the other hand, a clear decline of the BeO⁻ beam with tens of micrograms of ⁹Be carrier suggests that the combination of ten milligrams of sediments and a few hundred micrograms of the ⁹Be carrier is more convenient at this stage.
The Variance Composition of Firm Growth Rates
Luiz Artur Ledur Brito
2009-04-01
Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms differ from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry- or country-related phenomenon. This finding justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage, and links this research with the resource-based view of strategy. Country was the second source of variation, with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries and covering the years 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
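A variance-components decomposition of this kind can be sketched, for the simplest one-way case (firm vs. residual), with method-of-moments estimators. The panel below is simulated with known components, not the Compustat data, so the firm effect is built to dominate by construction:

```python
import random
import statistics

def variance_components(panel):
    """Method-of-moments sketch: split the total variance of growth
    rates into a between-firm component and a residual (within-firm,
    across-years) component, for a balanced panel {firm: [growth/yr]}."""
    groups = list(panel.values())
    n = len(groups[0])  # years observed per firm (balanced panel)
    ms_within = statistics.mean(statistics.variance(g) for g in groups)
    between = statistics.variance([statistics.mean(g) for g in groups])
    # Var of firm means = sigma_firm^2 + sigma_resid^2 / n, so correct:
    var_firm = max(between - ms_within / n, 0.0)
    return var_firm, ms_within

# Simulated panel: firm effects with variance 4, residual variance 1
# (illustrative numbers only):
rng = random.Random(3)
panel = {}
for i in range(400):
    fe = rng.gauss(0.0, 2.0)  # persistent firm effect
    panel[i] = [fe + rng.gauss(0.0, 1.0) for _ in range(9)]
var_firm, var_resid = variance_components(panel)
```

The study's actual decomposition adds industry, country, and year components, but the correction logic (subtracting the within-group mean square scaled by group size) is the same.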
Selbig, William R.
2017-01-01
Collection of water-quality samples that accurately characterize average particle concentrations and distributions in channels can be complicated by large sources of variability. The U.S. Geological Survey (USGS) developed a fully automated Depth-Integrated Sample Arm (DISA) as a way to reduce bias and improve accuracy in water-quality concentration data. The DISA was designed to integrate with existing autosampler configurations commonly used for the collection of water-quality samples in vertical profile, thereby providing a better representation of average suspended sediment and sediment-associated pollutant concentrations and distributions than traditional fixed-point samplers. In controlled laboratory experiments, known concentrations of suspended sediment ranging from 596 to 1,189 mg/L were injected into a 3-foot-diameter closed channel (circular pipe) with regulated flows ranging from 1.4 to 27.8 ft³/s. Median suspended sediment concentrations in water-quality samples collected using the DISA were within 7 percent of the known, injected value compared to 96 percent for traditional fixed-point samplers. Field evaluation of this technology in open channel fluvial systems showed median differences between paired DISA and fixed-point samples to be within 3 percent. The range of particle size measured in the open channel was generally that of clay and silt. Differences between the concentration and distribution measured between the two sampler configurations could potentially be much larger in open channels that transport larger particles, such as sand.
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS), to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
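The ANOVA decomposition step shared by ASCA and the proposed ANOVA-TP computes, for each designed factor, an effect matrix whose rows repeat the deviation of that row's level mean from the grand mean; subtracting all effect matrices from the data gives the residuals to which PLS is then applied. A minimal sketch for a single factor, with a toy design (the level names and values are hypothetical):

```python
import statistics

def effect_matrix(X, labels):
    """ANOVA decomposition step for one designed factor: each row of
    the returned effect matrix is that row's level mean minus the
    grand mean, computed column-wise over the data matrix X."""
    n, d = len(X), len(X[0])
    grand = [statistics.mean(row[j] for row in X) for j in range(d)]
    level_mean = {}
    for lv in set(labels):
        rows = [X[i] for i in range(n) if labels[i] == lv]
        level_mean[lv] = [statistics.mean(r[j] for r in rows) for j in range(d)]
    return [[level_mean[labels[i]][j] - grand[j] for j in range(d)]
            for i in range(n)]

# Toy design: four samples, two chromatographic variables, one factor
# (e.g., pasteurization) with levels "raw" and "past":
X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
E = effect_matrix(X, ["raw", "raw", "past", "past"])
# E == [[-2, -2], [-2, -2], [2, 2], [2, 2]]
```

ASCA would run PCA on each such effect matrix; ANOVA-TP instead runs PLS (followed by target projection) on the appropriately deflated data.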
Low vacuum and discard tubes reduce hemolysis in samples drawn from intravenous catheters.
Heiligers-Duckers, Connie; Peters, Nathalie A L R; van Dijck, Jose J P; Hoeijmakers, Jan M J; Janssen, Marcel J W
2013-08-01
In-vitro hemolysis is a great challenge to emergency departments where blood is drawn from intravenous catheters (IVCs). Although high quality samples can be obtained by straight needle venipuncture, IVCs are preferred for various reasons. The aim of this study was to identify blood collection practices that reduce hemolysis while using IVCs. The study was conducted at an emergency department where blood is drawn in ≥ 90% of patients from IVCs. Hemolysis, measured spectrophotometrically, was compared between syringe and vacuum tubes. The following practices were tested in combination with vacuum collection: a Luer-slip adapter, a Luer-lock adapter, discard tubes and low vacuum tubes. Each intervention lasted 1 week and retrieved 154 to 297 samples. As reference, hemolysis was also measured in vacuum tubes retrieved from departments where only straight needle venipuncture is performed. Vacuum collection led to more hemolytic samples compared with syringe tubes (24% versus 16%, respectively, p=0.008). No difference in hemolysis was observed between the Luer-slip and the Luer-lock adapter. The use of discard tubes (17% hemolytic, p=0.045) and low vacuum tubes (12% hemolytic) reduced hemolysis while drawing blood from IVCs. Of these practices, the use of a low vacuum tube is preferred considering the smaller volume of blood and number of tubes drawn. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Lefsky, M. A.; Ramond, T.; Weimer, C. S.
2010-12-01
Current and proposed spaceborne lidar sensors sample the land surface using observations along transects in which consecutive observations in the along-track dimension are either contiguous (e.g. VCL, DESDynI, Livex) or spaced (ICESat). These sampling patterns are inefficient because multiple observations are made of a spatially autocorrelated phenomenon (i.e. vegetation patches) while large areas of the landscape are left unsampled. This results in higher uncertainty in estimates of average ecosystem structure than would be obtained using either random sampling or sampling in regular grids. We compared three sampling scenarios for spaceborne lidar: five transects spaced every 850 m across-track with contiguous 25 m footprints along-track, the same number of footprints distributed randomly, and a hybrid approach that retains the central transect of contiguous 25 m footprints and distributes the remainder of the footprints into a grid with 178 m spacing. We used simulated ground tracks at four latitudes for a realistic spaceborne lidar mission and calculated the amount of time required to achieve 150 m spacing between transects and the number of near-coincident observations for each scenario. We used four lidar height datasets collected using the Laser Vegetation Imaging Sensor (La Selva, Costa Rica; Sierra Nevada, California; Duke Forest, North Carolina; and Harvard Forest, Massachusetts) to calculate the standard error of estimates of landscape height for each scenario. We found that a hybrid sampling approach reduced the amount of time required to reach a transect spacing of 150 m by a factor of three at all four latitudes, and that the number of near-coincident observations was greater by a factor of five at the equator and at least equal throughout the range of latitudes sampled. The standard error of landscape height was between 2 and 2.5 times smaller using either hybrid or random sampling than using transect sampling. As the pulses generated by a spaceborne
Wang, Wei; Zhuge, Qunbi; Morsy-Osman, Mohamed; Gao, Yuliang; Xu, Xian; Chagnon, Mathieu; Qiu, Meng; Hoang, Minh Thang; Zhang, Fangyuan; Li, Rui; Plant, David V
2014-11-03
We propose a decision-aided algorithm to compensate the sampling frequency offset (SFO) between the transmitter and receiver for reduced-guard-interval (RGI) coherent optical (CO) OFDM systems. In this paper, we first derive the cyclic prefix (CP) requirement for preventing OFDM symbols from SFO induced inter-symbol interference (ISI). Then we propose a new decision-aided SFO compensation (DA-SFOC) algorithm, which shows a high SFO tolerance and reduces the CP requirement. The performance of DA-SFOC is numerically investigated for various situations. Finally, the proposed algorithm is verified in a single channel 28 Gbaud polarization division multiplexing (PDM) RGI CO-OFDM experiment with QPSK, 8 QAM and 16 QAM modulation formats, respectively. Both numerical and experimental results show that the proposed DA-SFOC method is highly robust against the standard SFO in optical fiber transmission.
Multiclass classification of microarray data samples with a reduced number of genes
Ornella Leonardo
2011-02-01
Background: Multiclass classification of microarray data samples with a reduced number of genes is a rich and challenging problem in Bioinformatics research. The problem gets harder as the number of classes is increased. In addition, the performance of most classifiers is tightly linked to the effectiveness of mandatory gene selection methods. Critical to gene selection is the availability of estimates about the maximum number of genes that can be handled by any classification algorithm. Lack of such estimates may lead to either computationally demanding explorations of a search space with thousands of dimensions or classification models based on gene sets of unrestricted size. In the former case, unbiased but possibly overfitted classification models may arise. In the latter case, biased classification models unable to support statistically significant findings may be obtained. Results: A novel bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification algorithms of microarray data samples is presented. The bound suggests that high-dimensional binary output domains might favor the existence of accurate and sparse binary mediated multiclass classifiers for microarray data samples. Conclusions: A comprehensive experimental work shows that the bound is indeed useful to induce accurate and sparse multiclass classifiers for microarray data samples.
Employing components-of-variance to evaluate forensic breath test instruments.
Gullberg, Rod G
2008-03-01
The evaluation of breath alcohol instruments for forensic suitability generally includes the assessment of accuracy, precision, linearity, blood/breath comparisons, etc. Although relevant and important, these methods fail to evaluate other important analytical and biological components related to measurement variability. An experimental design comparing different instruments measuring replicate breath samples from several subjects is presented here. Three volunteers provided n = 10 breath samples into each of six different instruments within an 18-minute period. Two-way analysis of variance was employed, which quantified the between-instrument effect and the subject/instrument interaction. Variance contributions were also determined for the analytical and biological components. Significant between-instrument effects and subject/instrument interactions were observed. The biological component of total variance ranged from 56% to 98% among all subject/instrument combinations. Such a design can help quantify the influence of and optimize breath sampling parameters that will reduce total measurement variability and enhance overall forensic confidence.
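The two-way analysis described (subjects × instruments with replicate breath samples) can be sketched from first principles by partitioning sums of squares. The three-subject, two-instrument example below is synthetic, constructed so that only the subject effect is nonzero:

```python
import statistics

def two_way_ms(cells):
    """Two-way ANOVA sketch for a balanced subjects-by-instruments
    design: cells[s][i] is a list of replicate readings for subject s
    on instrument i. Returns mean squares for subjects, instruments,
    the subject/instrument interaction, and within-cell error."""
    S, I = len(cells), len(cells[0])
    r = len(cells[0][0])  # replicates per cell
    cell_means = [[statistics.mean(c) for c in row] for row in cells]
    grand = statistics.mean(m for row in cell_means for m in row)
    subj_means = [statistics.mean(row) for row in cell_means]
    inst_means = [statistics.mean(cell_means[s][i] for s in range(S))
                  for i in range(I)]
    ss_subj = I * r * sum((m - grand) ** 2 for m in subj_means)
    ss_inst = S * r * sum((m - grand) ** 2 for m in inst_means)
    ss_inter = r * sum((cell_means[s][i] - subj_means[s]
                        - inst_means[i] + grand) ** 2
                       for s in range(S) for i in range(I))
    ss_err = sum((x - cell_means[s][i]) ** 2
                 for s in range(S) for i in range(I) for x in cells[s][i])
    return (ss_subj / (S - 1), ss_inst / (I - 1),
            ss_inter / ((S - 1) * (I - 1)), ss_err / (S * I * (r - 1)))

# Synthetic BrAC-like data: three subjects, two instruments, two
# replicates per cell; subjects differ, instruments agree perfectly.
cells = [[[0.08, 0.08], [0.08, 0.08]],
         [[0.10, 0.10], [0.10, 0.10]],
         [[0.12, 0.12], [0.12, 0.12]]]
ms_subj, ms_inst, ms_inter, ms_err = two_way_ms(cells)
```

With real breath data the interaction and error mean squares are nonzero, and their relative sizes give the analytical vs. biological variance contributions the abstract reports.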
Lorenz, Matthew A; Burant, Charles F; Kennedy, Robert T
2011-05-01
A simple, fast, and reproducible sample preparation procedure was developed for relative quantification of metabolites in adherent mammalian cells using the clonal β-cell line INS-1 as a model sample. The method was developed by evaluating the effect of different sample preparation procedures on high performance liquid chromatography-mass spectrometry quantification of 27 metabolites involved in glycolysis and the tricarboxylic acid cycle on a directed basis as well as for all detectable chromatographic features on an undirected basis. We demonstrate that a rapid water rinse step prior to quenching of metabolism reduces components that suppress electrospray ionization thereby increasing signal for 26 of 27 targeted metabolites and increasing total number of detected features from 237 to 452 with no detectable change of metabolite content. A novel quenching technique is employed which involves addition of liquid nitrogen directly to the culture dish and allows for samples to be stored at -80 °C for at least 7 d before extraction. Separation of quenching and extraction steps provides the benefit of increased experimental convenience and sample stability while maintaining metabolite content similar to techniques that employ simultaneous quenching and extraction with cold organic solvent. The extraction solvent 9:1 methanol:chloroform was found to provide superior performance over acetonitrile, ethanol, and methanol with respect to metabolite recovery and extract stability. Maximal recovery was achieved using a single rapid (∼1 min) extraction step. The utility of this rapid preparation method (∼5 min) was demonstrated through precise metabolite measurements (11% average relative standard deviation without internal standards) associated with step changes in glucose concentration that evoke insulin secretion in the clonal β-cell line INS-1.
Two-dimensional finite-element temperature variance analysis
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specifications of these temperatures reduce errors in thermal calculations.
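The sensitivity analysis described propagates input uncertainties to predicted temperatures. A first-order sketch for independent inputs: if J[i][j] is the sensitivity of temperature i to parameter j, then Var(T_i) = Σ_j J[i][j]² · Var(p_j). The Jacobian and input variances below are hypothetical, not taken from the report:

```python
def propagate_variance(jacobian, input_vars):
    """First-order (linear) variance propagation: given one Jacobian
    row per predicted temperature and independent input variances,
    Var(T_i) = sum_j J[i][j]**2 * var_j."""
    return [sum(J_ij ** 2 * v for J_ij, v in zip(row, input_vars))
            for row in jacobian]

# Hypothetical: two predicted temperatures, three uncertain inputs
# (e.g., a boundary temperature and two material parameters).
J = [[0.8, 0.1, 0.0],
     [0.2, 0.5, 0.3]]
var_in = [4.0, 1.0, 9.0]  # input variances
var_T = propagate_variance(J, var_in)  # -> approx [2.57, 1.22]
```

A large sensitivity to a fixed-boundary temperature (the 0.8 entry here) dominates the output variance, which mirrors the report's finding that boundary temperatures matter most.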
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
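The "maximize the total variance of the hash codes" idea can be sketched, for a single bit, by projecting data onto the direction of maximum variance (the leading eigenvector of the covariance, found here by power iteration) and thresholding at zero. Note this sketch omits the local-structure-preserving term and the column-generation learning of binary-valued hash functions that the paper actually proposes:

```python
def max_variance_hash_bit(X, iters=200):
    """One-bit sketch of variance-maximizing hashing: center the data,
    find the top covariance eigenvector by power iteration, and assign
    each point the sign of its projection onto that direction."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mu[j] for j in range(d)] for row in X]
    w = [1.0] * d
    for _ in range(iters):
        proj = [sum(r[j] * w[j] for j in range(d)) for r in Xc]  # Xc @ w
        # one power-iteration step: w <- C @ w, with C = Xc.T @ Xc / n
        w = [sum(proj[i] * Xc[i][j] for i in range(n)) / n for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return [1 if sum(r[j] * w[j] for j in range(d)) >= 0 else 0 for r in Xc]

# Toy data elongated along the first axis: the learned bit splits the
# points by the sign of that dominant coordinate.
X = [[5.0, 0.1], [-5.0, -0.1], [4.0, 0.0],
     [-4.0, 0.2], [3.0, -0.1], [-3.0, 0.0]]
bits = max_variance_hash_bit(X)
```

Multi-bit codes repeat this with orthogonality (or, in the paper, column generation) so that later bits still carry variance while preserving neighborhood structure.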
Mariyaselvam, M Z; Heij, R E; Laba, D; Richardson, J A; Hodges, E J; Maduakor, C A; Carter, J J; Young, P J
2015-01-01
Arterial cannulation is associated with complications including bacterial contamination, accidental intra-arterial injection and blood spillage. We performed a series of audits and experiments to gauge the potential for these, as well as assess the possible contribution of a new device, the Needle-Free Arterial Non-Injectable Connector (NIC), in reducing these risks. The NIC comprises a needle-free connector that prevents blood spillage and a one-way valve allowing aspiration only; once screwed onto the side port of a three-way tap, the device can only be removed with difficulty. We performed a clinical audit of arterial monitoring systems in our intensive care unit, which showed an incidence of bacterial colonisation of five in 86 (6%) three-way tap ports. We constructed a manikin simulation experiment of the management of acute bradycardia, in which trainee doctors were required to inject atropine intravenously. Ten of 15 (66%) doctors injected the drug into the three-way tap of the arterial monitoring system rather than into the intravenous cannula or the central venous catheter. In a laboratory study, we replicated the arterial blood sampling and flushing sequence from a three-way tap, with the syringes attached either directly to the three-way tap port or to a NIC attached to the port. The first (discard) syringe attached to the three-way tap was contaminated with bacteria. Bacterial growth was found in 17 of 20 (85%) downstream flushed samples (corresponding to the patient's circulation) when the three-way tap was accessed directly, compared with none of 20 accessed via the NIC, demonstrating that the device prevents bacteria from contaminating sampling lines. As its design also prevents accidental intra-arterial injection, we suggest that it can reduce complications of arterial monitoring. © 2014 The Association of Anaesthetists of Great Britain and Ireland.
Statistical inference on variance components
Verdooren, L.R.
1988-01-01
In several sciences, but especially in animal and plant breeding, the general mixed model with fixed and random effects plays a great role. Statistical inference on variance components means testing hypotheses about variance components, constructing confidence intervals for them, and estimating them.
Selbig, William R; Cox, Amanda; Bannerman, Roger T
2012-04-01
A new water sample collection system was developed to improve representation of solids entrained in urban stormwater by integrating water-quality samples from the entire water column, rather than a single, fixed point. The depth-integrated sample arm (DISA) was better able to characterize suspended-sediment concentration and particle size distribution compared to fixed-point methods when tested in a controlled laboratory environment. Median suspended-sediment concentrations overestimated the actual concentration by 49 and 7% when sampling at 3 and 4 points spaced vertically throughout the water column, respectively. Comparatively, sampling only at the bottom of the pipe, the fixed-point sampler overestimated the actual concentration by 96%. The fixed-point sampler also showed a coarser particle size distribution compared to the DISA, which was better able to reproduce the average distribution of particles in the water column over a range of hydraulic conditions. These results emphasize the need for a water sample collection system that integrates the entire water column, rather than a single, fixed point, to properly characterize the concentration and distribution of particles entrained in stormwater pipe flow.
Field and Lab Methods to Reduce Sampling Variation in Soil Carbon
Mattson, K. G.; Zhang, J.
2015-12-01
Natural variability in soil and detrital carbon sampling is typically large enough that it hinders accurate assessment of standing stock and changes that may occur following disturbances and experimental treatments. We are developing carbon budgets in forests of Northern California and wish to see how experimental canopy thinning may affect carbon cycling in these forests. In the pre-treatment phase, we have sought methods to quantify detrital carbon pools in an accurate and efficient manner. We have found that small soil excavations, 15 cm in diameter to a depth of 10 cm, work very well to reduce variation and avoid introducing sampling biases. We carefully excavate a pit of uniform dimensions using cutting chisels and scoops. We fill the void created using small pebbles contained in a small net and then weigh the pebbles to obtain a volume estimate of the soil collected. The samples are sorted moist through a series of sieves of 6, 4, and 2 mm into rocks, live roots, dead roots, woody debris, and remaining soil and its organic matter. From a single sample, we estimate proportional rock volume, fine soil bulk density (soil bulk density of the 2 mm fraction), live roots, dead roots, woody debris, and the proportion of organic matter in the 2 mm fraction. The standard deviations of soil measures (soil carbon, loss on ignition, bulk density, rock volume, live and dead root mass) were universally reduced relative to similar measures taken with soil corers, in some instances by up to 5-fold. Coefficients of variation using excavation pits are typically 5 to 10%, whereas those from cores were 20 to 30%. We have observed that variation in soil organic matter is more a function of variation in soil bulk density than of variation in percent soil organic matter content. As a result, we often see increased soil organic matter stores at depths below 10 cm. Soils beneath highly decayed logs show increases in soil carbon in the mineral soil, suggesting woody debris is a source of soil carbon...
Markov bridges, bisection and variance reduction
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the data points... In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Second, we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented where the methods of stratification, importance sampling and quasi-Monte Carlo are investigated...
Li, Roger W; Brown, Brian; Edwards, Marion H; Ngo, Charlie V; Chat, Sandy W; Levi, Dennis M
2012-01-01
Vernier acuity, a form of visual hyperacuity, is amongst the most precise forms of spatial vision. Under optimal conditions Vernier thresholds are much finer than the inter-photoreceptor distance. Achievement of such high precision is based substantially on cortical computations, most likely in the primary visual cortex. Using stimuli with added positional noise, we show that Vernier processing is reduced with advancing age across a wide range of noise levels. Using an ideal observer model, we are able to characterize the mechanisms underlying age-related loss, and show that the reduction in Vernier acuity can be mainly attributed to the reduction in efficiency of sampling, with no significant change in the level of internal position noise, or spatial distortion, in the visual system.
Genomic variance estimates: With or without disequilibrium covariances?
Lehermeier, C; de Los Campos, G; Wimmer, V; Schön, C-C
2017-06-01
Whole-genome regression methods are often used for estimating genomic heritability: the proportion of phenotypic variance that can be explained by regression on marker genotypes. Recently, there has been an intensive debate on whether and how to account for the contribution of linkage disequilibrium (LD) to genomic variance. Here, we investigate two different methods for genomic variance estimation that differ in their ability to account for LD. By analysing flowering time in a data set of 1,057 fully sequenced Arabidopsis lines with strong evidence for diversifying selection, we observed a large contribution of covariances between quantitative trait loci (QTL) to the genomic variance. The classical estimate of genomic variance, which ignores covariances, underestimated the genomic variance in the data. The second method accounts for LD explicitly and leads to genomic variance estimates that, when added to error variance estimates, match the sample variance of phenotypes. This method also allows estimating the covariance between sets of markers when partitioning the genome into subunits. Large covariance estimates between the five Arabidopsis chromosomes indicated that the population structure in the data led to strong LD, also between physically unlinked QTL. By consecutively removing population structure from the phenotypic variance using principal component analysis, we show how population structure affects the magnitude of the LD contribution and the genomic variance estimates obtained with the two methods. © 2017 Blackwell Verlag GmbH.
About 100 countries have established regulatory limits for aflatoxin in food and feeds. Because these limits vary widely among regulating countries, the Codex Committee on Food Additives and Contaminants (CCFAC) began work in 2004 to harmonize aflatoxin limits and sampling plans for aflatoxin in alm...
Variance of indoor radon concentration: Major influencing factors.
Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M
2016-01-15
Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). Analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are: area of the territory, sample size, characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of the controlling factors. Application of the developed approach to characterization of the radon exposure of the world population is discussed.
MicroRNA buffering and altered variance of gene expression in response to Salmonella infection.
Bao, Hua; Kommadath, Arun; Plastow, Graham S; Tuggle, Christopher K; Guan, Le Luo; Stothard, Paul
2014-01-01
One potential role of miRNAs is to buffer variation in gene expression, although conflicting results have been reported. To investigate the buffering role of miRNAs in response to Salmonella infection in pigs, we sequenced miRNA and mRNA in whole blood from 15 pig samples before and after Salmonella challenge. By analyzing inter-individual variation in gene expression patterns, we found that for moderately and lowly expressed genes, putative miRNA targets showed significantly lower expression variance compared with non-miRNA-targets. Expression variance between highly expressed miRNA targets and non-miRNA-targets was not significantly different. Further, miRNA targets demonstrated significantly reduced variance after challenge whereas non-miRNA-targets did not. RNA binding proteins (RBPs) are significantly enriched among the miRNA targets with dramatically reduced variance of expression after Salmonella challenge. Moreover, we found evidence that targets of young (less-conserved) miRNAs showed lower expression variance compared with targets of old (evolutionarily conserved) miRNAs. These findings point to the importance of a buffering effect of miRNAs for relatively lowly expressed genes, and suggest that the reduced expression variation of RBPs may play an important role in response to Salmonella infection.
Generalized analysis of molecular variance.
Caroline M Nievergelt
2007-04-01
Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by...
A Broadband Beamformer Using Controllable Constraints and Minimum Variance
Karimian-Azari, Sam; Benesty, Jacob; Jensen, Jesper Rindom
2014-01-01
The minimum variance distortionless response (MVDR) and the linearly constrained minimum variance (LCMV) beamformers are two optimal approaches in the sense of noise reduction. The LCMV beamformer can also reject interferers using linear constraints at the expense of reducing the degree of freedom...
Measuring proteins with greater speed and resolution while reducing sample size.
Hsieh, Vincent H; Wyatt, Philip J
2017-08-30
A multi-angle light scattering (MALS) system, combined with chromatographic separation, directly measures the absolute molar mass, size and concentration of the eluate species. The measurement of these crucial properties in solution is essential in basic macromolecular characterization and in all research and production stages of bio-therapeutic products. We developed a new MALS methodology that overcomes the long-standing barrier to microliter-scale peak volumes and achieves the highest resolution and signal-to-noise performance of any MALS measurement. The novel design simultaneously facilitates online dynamic light scattering (DLS) measurements. As the National Institute of Standards and Technology (NIST) new protein standard reference material (SRM 8671) becomes the benchmark molecule against which many biomolecular analytical techniques are assessed and evaluated, we present its measurement results as a demonstration of the unique capability of our system to swiftly resolve and measure sharp (20-25 µL full width at half maximum) chromatography peaks. Precise measurements of protein mass and size can be accomplished 10 times faster than before with improved resolution, while the sample amount required for such measurements is reduced commensurately. These abilities will have far-reaching impacts at every stage of the development and production of biologics and bio-therapeutic formulations.
Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.
Ashby, Neil; Patla, Bijunath
2016-04-01
Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
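As an illustrative sketch of the quantity being simulated, the overlapping Hadamard variance can be computed from phase data with the standard third-difference formula (as tabulated in NIST frequency-stability handbooks); the function name is hypothetical, and this is not the paper's matrix-trace formulation:

```python
def hadamard_variance(x, m, tau0):
    """Overlapping Hadamard variance at averaging time tau = m * tau0,
    computed from a list of phase values x (in seconds).

    The third difference x[i+3m] - 3*x[i+2m] + 3*x[i+m] - x[i] annihilates
    quadratic phase, which is why the Hadamard variance is insensitive to
    linear frequency drift."""
    N = len(x)
    tau = m * tau0
    acc = 0.0
    for i in range(N - 3 * m):
        d = x[i + 3 * m] - 3.0 * x[i + 2 * m] + 3.0 * x[i + m] - x[i]
        acc += d * d
    return acc / (6.0 * tau * tau * (N - 3 * m))
```

Evaluating this on pure frequency drift (quadratic phase) returns an essentially zero result, illustrating the drift insensitivity the abstract mentions.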
Repetitive Cyclic Potentiodynamic Polarization Scan Results for Reduced Sample Volume Testing
LaMothe, Margaret E. [Washington River Protection Solutions, Richland, WA (United States)
2016-03-15
This report is the compilation of data gathered after repetitively testing simulated tank waste and a radioactive tank waste sample using a cyclic potentiodynamic polarization (CPP) test method to determine corrosion resistance of metal samples. Electrochemistry testing of radioactive tank samples is often used to assess the corrosion susceptibility and material integrity of waste tank steel. Repetitive testing of radiological tank waste is occasionally requested at 222-S Laboratory due to the limited volume of radiological tank sample received for testing.
Revision: Variance Inflation in Regression
D. R. Jensen
2013-01-01
... the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
Bias-variance decomposition in Genetic Programming
Kowaliw Taras
2016-01-01
We study properties of Linear Genetic Programming (LGP) through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a) the variance between runs is primarily due to initialization rather than the selection of training samples, (b) parameters can be reasonably optimized to obtain gains in efficacy, and (c) functions detrimental to evolvability are easily eliminated, while functions well-suited to the problem can greatly improve performance; therefore, larger and more diverse function sets are always preferable.
TESTS FOR VARIANCE COMPONENTS IN VARYING COEFFICIENT MIXED MODELS
Zaixing Li; Yuedong Wang; Ping Wu; Wangli Xu; Lixing Zhu
2012-01-01
.... To address the question of whether a varying coefficient mixed model can be reduced to a simpler varying coefficient model, we develop one-sided tests for the null hypothesis that all the variance components are zero...
Reducing sampling error in faecal egg counts from black rhinoceros (Diceros bicornis).
Stringer, Andrew P; Smith, Diane; Kerley, Graham I H; Linklater, Wayne L
2014-04-01
Faecal egg counts (FECs) are commonly used for the non-invasive assessment of parasite load within hosts. Sources of error, however, have been identified in laboratory techniques and sample storage. Here we focus on sampling error. We test whether a delay in sample collection can affect FECs, and estimate the number of samples needed to reliably assess mean parasite abundance within a host population. Two commonly found parasite eggs in black rhinoceros (Diceros bicornis) dung, strongyle-type nematodes and Anoplocephala gigantea, were used. We find that collection of dung from the centre of faecal boluses up to six hours after defecation does not affect FECs. More than nine samples were needed to greatly improve confidence intervals of the estimated mean parasite abundance within a host population. These results should improve the cost-effectiveness and efficiency of sampling regimes, and support the usefulness of FECs when used for the non-invasive assessment of parasite abundance in black rhinoceros populations.
Analysis of variance: Comfortless questions
L.V. Nedorezov
2017-01-01
In this paper the simplest variant of the analysis of variance is considered. Three examples from the textbooks of Lakin (1990) and Rokitsky (1973) were re-examined. We find that traditional one-way ANOVA and the Kruskal-Wallis criterion can lead to unrealistic conclusions about a factor's influence on the value of a characteristic. An alternative approach to the same problem is also considered.
Analysis of Variance: Variably Complex
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
Efficiently estimating salmon escapement uncertainty using systematically sampled data
Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.
2007-01-01
Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers must contend with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared with the other estimators, the least biased estimator reduced bias by 12% to 98% on average. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
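The abstract does not name the five estimators compared. As a hypothetical illustration of the problem, one widely used variance estimator for a single nonreplicated systematic sample is the successive-difference estimator; the sketch below uses illustrative names and is not necessarily the estimator the paper recommends:

```python
def successive_difference_estimate(sample, N):
    """Estimate the population total and its variance from one systematic
    sample of size n drawn from a population of size N.

    The variance term uses squared differences of consecutive observations,
    which partially accounts for trends in an ordered population."""
    n = len(sample)
    mean = sum(sample) / n
    total = N * mean
    f = n / N  # sampling fraction (finite-population correction)
    var_mean = (1.0 - f) * sum(
        (sample[i + 1] - sample[i]) ** 2 for i in range(n - 1)
    ) / (2.0 * n * (n - 1))
    return total, N * N * var_mean
```

For a constant count sequence the estimated variance is zero, and smooth diurnal or seasonal patterns tend to yield smaller estimates than the naive simple-random-sampling formula; the direction and size of the resulting bias is exactly what the paper examines.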
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment...
It is generally accepted that monitoring wells must be purged to access formation water to obtain “representative” ground water quality samples. Historically anywhere from 3 to 5 well casing volumes have been removed prior to sample collection to evacuate the standing well water...
Variance estimation in neutron coincidence counting using the bootstrap method
Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)
2015-09-11
In this study, we demonstrate the implementation of the “bootstrap” method for reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples is generated and the so-called bootstrap distribution is formed. The aim of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both very good agreement between the measured variance and the variance obtained through the bootstrap method, and robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
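The re-sampling process described above can be sketched generically as follows; this is a plain bootstrap estimate of the variance of a sample mean, with illustrative names, not the authors' NMC-specific implementation:

```python
import random

def bootstrap_variance_of_mean(data, n_resamples=2000, seed=0):
    """Estimate the variance of the sample mean by bootstrap re-sampling.

    Each pseudo-sample is drawn with replacement from the original data;
    the empirical variance of the pseudo-sample means is the bootstrap
    estimate of the variance of the estimator."""
    rng = random.Random(seed)
    n = len(data)
    means = []
    for _ in range(n_resamples):
        pseudo = [data[rng.randrange(n)] for _ in range(n)]
        means.append(sum(pseudo) / n)
    grand = sum(means) / n_resamples
    return sum((m - grand) ** 2 for m in means) / (n_resamples - 1)
```

For multiplicity counting, the same scheme would apply with the mean replaced by the multiplicity-rate estimator evaluated on each pseudo-sample.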
Tiwari, P; Xie, Y; Chen, Y [Washington University in Saint Louis, Saint Louis, Missouri (United States); Deasy, J [Memorial Sloan Kettering Cancer Center, NY, NY (United States)
2014-06-01
Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality.
Minimum Variance Portfolios in the Brazilian Equity Market
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple approach of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, making it easily replicable by individual and institutional investors alike.
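The simplest of the estimation methods mentioned, plugging the sample covariance into the closed-form unconstrained minimum variance solution, can be sketched as follows; this assumes an invertible covariance matrix and allows negative (short) weights, and the names are illustrative, not the authors' GARCH-based variants:

```python
import numpy as np

def min_variance_weights(returns):
    """Unconstrained global minimum variance weights from the sample covariance.

    returns: array of shape (T, N) with T return observations on N assets.
    Solves cov @ w proportional to the ones vector, then normalizes so
    the weights sum to one."""
    cov = np.cov(returns, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()
```

By construction, no other fully invested portfolio has a smaller in-sample variance under this covariance estimate; long-only or 130/30 versions as studied in the paper require a constrained quadratic program instead.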
Variance based OFDM frame synchronization
Z. Fedra
2012-04-01
The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of a detection window. The variance is computed at two delayed times, so a modified Early-Late loop is used for frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without a strong influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
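The idea of deciding the frame position from window variances evaluated at two delayed instants can be sketched as follows; this is a minimal Early-Late style illustration with hypothetical names, not the authors' full synchronization algorithm:

```python
def window_variance(signal, start, length):
    """Biased sample variance of signal[start:start+length]."""
    w = signal[start:start + length]
    m = sum(w) / length
    return sum((s - m) ** 2 for s in w) / length

def early_late_metric(signal, pos, length, delta):
    """Difference of window variances at an early and a late offset.

    In an Early-Late loop the position estimate is nudged toward the
    point where the two variances balance, i.e. where the metric is zero."""
    early = window_variance(signal, pos - delta, length)
    late = window_variance(signal, pos + delta, length)
    return early - late
```

In a real receiver the window variance would be computed over the magnitudes of complex baseband samples and the loop iterated once per OFDM symbol.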
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
Lövenhag, Sara; Larm, Peter; Åslund, Cecilia; Nilsson, Kent W
2015-10-01
The aim of this study was to investigate possible effects of antisocial behavior on reducing the association between subdimensions of ADHD symptoms (inattention, hyperactivity and impulsivity) and alcohol use. Boys and girls were analyzed separately using a population-based Swedish adolescent sample. A randomly selected cross-sectional survey was performed in secondary and upper secondary schools in Västmanland County during 2010. Participants were a population of 2,439 15-16 year-olds and 1,425 17-18 year-olds (1,947 girls and 1,917 boys). Psychosocial adversity, antisocial behaviors, symptoms of ADHD and alcohol use were assessed by questionnaires. Except for girls' inattention, subdimensions of ADHD symptoms were not associated with alcohol use when variance due to antisocial behavior was accounted for. Among boys, instead of an indirect effect of antisocial behavior on the association between impulsivity and alcohol use, a moderating effect was found. Among girls, the inattention component of ADHD was independently associated with alcohol use even when adjusted for antisocial behavior. The reduced associations between symptoms of hyperactivity, impulsivity, and alcohol use for boys and girls after adjusting for antisocial behavior suggest a considerable overlap between hyperactivity, impulsivity, and antisocial behavior. The direct pathway between inattention and alcohol use among girls suggests that girls with inattention symptoms are at risk of alcohol use regardless of antisocial behavior. Special attention should be given to these girls. Accounting for antisocial behavior reduced the relation between subdimensions of ADHD symptoms and alcohol use, and antisocial behaviors should therefore be screened for when symptoms of ADHD are present.
Reduced Orbitofrontal and Temporal Grey Matter in a Community Sample of Maltreated Children
De Brito, Stephane A.; Viding, Essi; Sebastian, Catherine L.; Kelly, Philip A.; Mechelli, Andrea; Maris, Helen; McCrory, Eamon J.
2013-01-01
Background: Childhood maltreatment is strongly associated with increased risk of psychiatric disorder. Previous neuroimaging studies have reported atypical neural structure in the orbitofrontal cortex, temporal lobe, amygdala, hippocampus and cerebellum in maltreated samples. It has been hypothesised that these structural differences may relate to…
Reducing sample complexity in proteomics by chromatofocusing with simple buffer mixtures.
Shen, Hong; Li, Xiang; Bieberich, Charles J; Frey, Douglas D
2008-01-01
Chromatofocusing has many potential applications in the field of proteomics, such as for the isolation and removal of major sample components to facilitate the analysis of low-abundance components, and for sample prefractionation prior to a subsequent separation using SDS-PAGE, narrow-pI-range 2D-PAGE, or additional chromatography steps. However, the chromatofocusing techniques that are most commonly used employ proprietary polyampholyte elution buffers and highly specialized column packings, both of which limit the use of chromatofocusing in practice. To expand the range of application for this technique, this chapter considers chromatofocusing methods which employ common ion-exchange column packings and elution buffers which are simple mixtures of readily available buffering species. Of particular interest is the use of chromatofocusing with a multistep pH gradient for the fractionation of protein mixtures into narrow-pI-range fractions. The cross-contamination characteristics of these fractions using SDS-PAGE are also assessed.
Delay compensation - Its effect in reducing sampling errors in Fourier spectroscopy
Zachor, A. S.; Aaronson, S. M.
1979-01-01
An approximate formula is derived for the spectrum ghosts caused by periodic drive speed variations in a Michelson interferometer. The solution represents the case of fringe-controlled sampling and is applicable when the reference fringes are delayed to compensate for the delay introduced by the electrical filter in the signal channel. Numerical results are worked out for several common low-pass filters. It is shown that the maximum relative ghost amplitude over the range of frequencies corresponding to the lower half of the filter band is typically 20 times smaller than the relative zero-to-peak velocity error, when delayed sampling is used. In the lowest quarter of the filter band it is more than 100 times smaller than the relative velocity error. These values are ten and forty times smaller, respectively, than they would be without delay compensation if the filter is a 6-pole Butterworth.
2012-01-01
Vernier acuity, a form of visual hyperacuity, is amongst the most precise forms of spatial vision. Under optimal conditions Vernier thresholds are much finer than the inter-photoreceptor distance. Achievement of such high precision is based substantially on cortical computations, most likely in the primary visual cortex. Using stimuli with added positional noise, we show that Vernier processing is reduced with advancing age across a wide range of noise levels. Using an ideal observer model, w...
Using Exclusion-Based Sample Preparation (ESP) to Reduce Viral Load Assay Cost.
Berry, Scott M; Pezzi, Hannah M; Williams, Eram D; Loeb, Jennifer M; Guckenberger, David J; Lavanway, Alex J; Puchalski, Alice A; Kityo, Cissy M; Mugyenyi, Peter N; Graziano, Franklin M; Beebe, David J
2015-01-01
Viral load (VL) measurements are critical to the proper management of HIV in developing countries. However, access to VL assays is limited by the high cost and complexity of existing assays. While there is a need for low cost VL assays, performance must not be compromised. Thus, new assays must be validated on metrics of limit of detection (LOD), accuracy, and dynamic range. Patient plasma samples from the Joint Clinical Research Centre in Uganda were de-identified and measured using both an existing VL assay (Abbott RealTime HIV-1) and our assay, which combines low cost reagents with a simplified method of RNA isolation termed Exclusion-Based Sample Preparation (ESP). 71 patient samples with VLs ranging up to 3,000,000 copies/mL were used to compare the two methods. We demonstrated equivalent LOD (~50 copies/mL) and high accuracy (average difference between methods of 0.08 log, R2 = 0.97). Using expenditures from this trial, we estimate the cost of the reagents and consumables for this assay to be approximately $5 USD. As cost is a significant barrier to implementation of VL testing, we anticipate that our assay will enhance access to this critical monitoring test in developing countries.
Variance-based uncertainty relations
Huang, Yichen
2010-01-01
It is hard to overestimate the fundamental importance of uncertainty relations in quantum mechanics. In this work, I propose state-independent variance-based uncertainty relations for arbitrary observables in both finite and infinite dimensional spaces, recovering the Heisenberg uncertainty principle as a special case. By studying examples, I find that the lower bounds provided by the new uncertainty relations are optimal or near-optimal. I illustrate the uses of the new uncertainty relations by showing that they eliminate one common obstacle in a sequence of well-known works in entanglement detection, and thus make these works much easier to access in applications.
Estimation of the variance effective population size in age structured populations.
Olsson, Fredrik; Hössjer, Ola
2015-05-01
The variance effective population size for age structured populations is generally hard to estimate and the temporal method often gives biased estimates. Here, we give an explicit expression for a correction factor which, combined with estimates from the temporal method, yield approximately unbiased estimates. The calculation of the correction factor requires knowledge of the age specific offspring distribution and survival probabilities as well as possible correlation between survival and reproductive success. In order to relax these requirements, we show that only first order moments of these distributions need to be known if the time between samples is large, or individuals from all age classes which reproduce are sampled. A very explicit approximate expression for the asymptotic coefficient of standard deviation of the estimator is derived, and it can be used to construct confidence intervals and optimal ways of weighting information from different markers. The asymptotic coefficient of standard deviation can also be used to design studies and we show that in order to maximize the precision for a given sample size, individuals from older age classes should be sampled since their expected variance of allele frequency change is higher and easier to estimate. However, for populations with fluctuating age class sizes, the accuracy of the method is reduced when samples are taken from older age classes with high demographic variation. We also present a method for simultaneous estimation of the variance effective and census population size. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.
Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H
2014-06-01
Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.
Systematic sampling with errors in sample locations
Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton
2010-01-01
Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung…
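The variance inflation described here is easy to reproduce numerically. A toy illustration (the integrand, grid size, and Gaussian jitter model are assumptions for this sketch, not the paper's error-process models):

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda x: np.sin(6 * np.pi * x) + x   # integrand; true integral over [0,1] is 0.5

def systematic_estimate(n, jitter_sd=0.0):
    """Estimate the integral of f from n systematically placed points on [0,1],
    optionally perturbing each point to mimic placement errors."""
    pts = rng.uniform(0, 1 / n) + np.arange(n) / n   # random-start periodic grid
    if jitter_sd > 0:
        pts = pts + rng.normal(0, jitter_sd, n)      # errors in sample locations
    return f(pts % 1.0).mean()

reps = 4000
exact = np.array([systematic_estimate(50) for _ in range(reps)])
jittered = np.array([systematic_estimate(50, jitter_sd=0.01) for _ in range(reps)])
print(jittered.var() > 5 * exact.var())   # placement errors inflate the variance
```

For this oscillatory integrand the exactly periodic grid cancels the periodic component of the error, so even a small jitter in point locations inflates the estimator variance by orders of magnitude.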
Peterman, William; Brocato, Emily R; Semlitsch, Raymond D; Eggert, Lori S
2016-01-01
In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (F_ST and D_C) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using D_C, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.
Sun, Chenglu; Li, Wei; Chen, Wei
2017-08-10
For unobtrusive and comfortable extraction of the pressure distribution image and respiratory waveform, we proposed a smart mat that uses a flexible pressure sensor array, printed electrodes and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signal from all of the pressure sensors embedded in the smart mat. To reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. With the CS-based method, 40% of the sampling time can be saved by acquiring nearly one-third of the original sampling points. Several experiments were then carried out to validate the performance of the CS-based method. While less than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrated that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array.
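A minimal sketch of the compressed-sensing recovery idea such a system relies on, using iterative soft-thresholding (ISTA) on a synthetic sparse signal. The signal, measurement matrix, and regularization weight are illustrative; the paper's actual reconstruction algorithm may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 60, 5                       # signal length, measurements, nonzeros
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                  # m << n compressed measurements

# Iterative soft-thresholding for the l1-regularized least-squares recovery
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
lam = 0.05
z = np.zeros(n)
for _ in range(3000):
    z = z - A.T @ (A @ z - y) / L          # gradient step on 0.5*||Az - y||^2
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

rel_err = np.linalg.norm(z - x) / np.linalg.norm(x)
print(rel_err < 0.2)
```

The key point echoed by the abstract: with only m = 60 of 200 "samples," the sparse signal is recovered almost exactly, which is what lets the mat trade sampling time for computation.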
Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize.
Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto
2014-01-01
Variance and performance of two sampling plans for aflatoxins quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and sampling, preparation, and analysis variance were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare aflatoxins quantification distributions in eight maize lots. The acceptance and rejection probabilities for a lot under certain aflatoxin concentration were determined using variance and the information on the selected distribution model to build the operational characteristic curves (OC). Sampling and total variance were lower at the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize.
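The logic behind an operational characteristic (OC) curve can be illustrated with a deliberately simplified model. Here the measurement error is taken as normal, unlike the four theoretical distribution models compared in the study, and the limit, concentrations, and standard deviations are invented for illustration:

```python
from math import erf, sqrt

def p_accept(true_conc, limit=20.0, measurement_sd=5.0):
    """Probability that a lot with true aflatoxin concentration true_conc
    is accepted, assuming the measured value is Normal(true_conc, sd) and
    the lot is accepted when the measurement falls below the limit."""
    z = (limit - true_conc) / measurement_sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Halving the total (sampling + preparation + analysis) standard deviation
# reduces the consumer's risk of accepting an over-limit lot:
print(round(p_accept(25.0, measurement_sd=5.0), 3),
      round(p_accept(25.0, measurement_sd=2.5), 3))   # → 0.159 0.023
```

Evaluating `p_accept` over a grid of true concentrations traces out the OC curve; lowering the sampling variance steepens it, which is why the automatic plan reduces both consumer and producer risks.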
Technique modifications for reducing the risks from amniocentesis or chorionic villus sampling.
Mujezinovic, Faris; Alfirevic, Zarko
2012-08-15
Currently, the techniques for amniocentesis and chorionic villus sampling (CVS) tend to be described in local and national guidelines, but certain aspects, including the choice of instruments, are predominantly based upon the operator's personal preference. A survey of practice in the specialist UK centres revealed a wide variation of practice; therefore, standardising any element of technique could potentially influence the safety of the procedure. The objective of this review was to compare the safety and effectiveness of all techniques of performing both amniocentesis and CVS for prenatal diagnosis. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (11 April 2012). We included all randomised comparisons of different methods of performing amniocentesis after 15 weeks' gestation, or CVS (transabdominal or transvaginal) with each other or with no testing. We excluded quasi-randomised studies (e.g. alternate allocation). Both review authors independently assessed for inclusion all the potential studies identified as a result of the search strategy. Both review authors independently assessed trial quality. Both review authors extracted data. Data were checked for accuracy. We included five randomised studies with a total of 1049 women evaluating five different technique modifications during either amniocentesis (three studies) or CVS (two studies). For amniocentesis, three interventions were evaluated - intramuscular progesterone, hexoprenaline and selecting high or low puncture sites for a late 'blind' procedure - each intervention in a single small study. There was no conclusive evidence of benefit for any of them. The same applies to terbutaline tocolysis and the use of continuous vacuum aspiration during CVS. Overall, the quality of evidence summarised in this review is not of sufficient quality to change current clinical practice. In the absence of clear evidence, the operators should continue to use methods and technique modifications with which…
Petersen, A.; Jensen, Lars Bogø
2004-01-01
The quinolone resistance determining regions of gyrA and parC in four species of enterococci from environmental samples with reduced susceptibility to ciprofloxacin were sequenced. The nucleotide sequence variations of parC could be related to the different enterococcal species. Mutations in Enterococcus faecalis and Enterococcus faecium related to reduced susceptibility were identical to mutations detected in E. faecalis and E. faecium of clinical origin. A minimal inhibitory concentration of 8 µg ml(-1) to ciprofloxacin was not associated with any mutations in the gyrA and parC genes of Enterococcus casseliflavus and Enterococcus gallinarum. These two species may be intrinsically less susceptible to ciprofloxacin.
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
CAIXA. II. AGNs from excess variance analysis (Ponti+, 2012) [Dataset]
Ponti, G.; Papadakis, I.E.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.
2012-01-01
We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10ks in pointed observations, which is the largest sample used so far to study AGN X-ray var…
Vidal-Codina, F.; Nguyen, N. C.; Giles, M. B.; Peraire, J.
2015-09-01
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
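The multilevel idea, shifting most of the sampling burden onto a cheap approximation and correcting it with a few high-fidelity samples, can be sketched with a two-level toy problem. The integrand and the surrogate's error term below are invented; the paper uses reduced basis solutions of PDEs as the cheap levels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: E[X^2] with X ~ N(0, 1); the exact value is 1.
f_fine = lambda x: x ** 2                          # "high-fidelity" model
f_coarse = lambda x: x ** 2 + 0.1 * np.sin(5 * x)  # cheap surrogate with a small model error

N_coarse, N_fine = 100_000, 2_000                  # many cheap runs, few expensive ones
z_c = rng.normal(size=N_coarse)                    # level 0: surrogate only
z_f = rng.normal(size=N_fine)                      # level 1: correction E[fine - coarse]
est = f_coarse(z_c).mean() + (f_fine(z_f) - f_coarse(z_f)).mean()
print(round(float(est), 2))
```

Because the fine and coarse models are highly correlated, the correction term has tiny variance and needs very few expensive evaluations, which is the statistical mechanism the multilevel variance reduction method exploits.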
How does variance in fertility change over the demographic transition?
Hruschka, Daniel J; Burger, Oskar
2016-04-19
Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
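The Poisson upper bound on attributable variance can be illustrated numerically. In this sketch the individual fertility rates are gamma-distributed, an assumption for illustration only, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Completed fertility as Poisson noise around heterogeneous individual rates
lam = rng.gamma(shape=8.0, scale=0.5, size=50_000)   # rates: mean 4, variance 2
kids = rng.poisson(lam)

mean, var = kids.mean(), kids.var(ddof=1)
# Under a Poisson process, Var(kids) = E[lam] + Var(lam), so the share of
# variance attributable to individual differences is at most 1 - mean/var.
upper_bound = 1.0 - mean / var
print(0.25 < upper_bound < 0.42)
```

Here the true attributable share is Var(lam)/(E[lam] + Var(lam)) = 2/6 ≈ 0.33, and the bound recovers it: even with real individual heterogeneity, the Poisson "demographic noise" floor of E[lam] caps how much variance individual-level variables can ever explain.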
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
Speed Variance and Its Influence on Accidents.
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Variance optimal stopping for geometric Levy processes
Gad, Kamille Sofie Tågholt; Pedersen, Jesper Lund
2015-01-01
The main result of this paper is the solution to the optimal stopping problem of maximizing the variance of a geometric Lévy process. We call this problem the variance problem. We show that, for some geometric Lévy processes, we achieve higher variances by allowing randomized stopping. Furthermore...
González-Vacarezza, N; Abad-Santos, F; Carcas-Sansuan, A; Dorado, P; Peñas-Lledó, E; Estévez-Carrizo, F; Llerena, A
2013-10-01
In bioequivalence studies, intra-individual variability (CV(w)) is critical in determining sample size. In particular, highly variable drugs may require enrollment of a greater number of subjects. We hypothesize that a strategy to reduce pharmacokinetic CV(w), and hence sample size and costs, would be to include subjects with decreased metabolic enzyme capacity for the drug under study. Therefore, two mirtazapine studies, two-way, two-period crossover design (n=68) were re-analysed to calculate the total CV(w) and the CV(w)s in three different CYP2D6 genotype groups (0, 1 and ≥ 2 active genes). The results showed that a 29.2 or 15.3% sample size reduction would have been possible if the recruitment had been of individuals carrying just 0 or 0 plus 1 CYP2D6 active genes, due to the lower CV(w). This suggests that there may be a role for pharmacogenetics in the design of bioequivalence studies to reduce sample size and costs, thus introducing a new paradigm for the biopharmaceutical evaluation of drug products.
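The scaling argument behind this design can be made concrete: in crossover bioequivalence designs, the required sample size is approximately proportional to the within-subject log-scale variance ln(CV_w² + 1). A sketch with hypothetical CVs (not the mirtazapine values from the study):

```python
from math import log

def log_scale_variance(cv_w):
    """Within-subject log-scale variance implied by a within-subject CV,
    using the standard lognormal relation s^2 = ln(CV_w^2 + 1)."""
    return log(cv_w ** 2 + 1.0)

# If genotype-restricted recruitment lowered CV_w from, say, 30% to 25%,
# the required number of subjects would fall roughly in proportion:
reduction = 1.0 - log_scale_variance(0.25) / log_scale_variance(0.30)
print(round(reduction, 2))   # → 0.3
```

Real sample-size calculations use iterative t-distribution methods for the two one-sided tests procedure, but the quadratic dependence on CV_w is what makes enrolling low-variability metabolizer groups so effective.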
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which a full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity implicit in MRIs is exploited to recover the solution to MRI reconstruction after transformation from significantly undersampled k-space. The challenge, however, is that because some incoherent artifacts result from the random undersampling, noise-like interference is added to the image with sparse representation. The recovery algorithms in the literature are not capable of fully removing these artifacts, so it is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed…
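The singular value thresholding step can be sketched on a synthetic low-rank matrix standing in for an image corrupted by noise-like artifacts (the sizes, rank, noise level, and threshold below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank "image" plus noise as a stand-in for reconstruction artifacts
m, n, r = 100, 80, 3
clean = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
noisy = clean + 0.5 * rng.normal(size=(m, n))

def svt(M, tau):
    """Singular value thresholding: soft-threshold the singular values,
    keeping only the dominant (signal-bearing) components."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

tau = 0.5 * (np.sqrt(m) + np.sqrt(n))   # ~ top singular value of the noise
denoised = svt(noisy, tau)

err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
print(err_after < err_before)   # → True
```

Thresholding at roughly the noise level discards the singular values carrying only artifacts while retaining the few dominant ones, which is the model-order selection the abstract describes.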
Measuring past changes in ENSO variance using Mg/Ca measurements on individual planktic foraminifera
Marchitto, T. M.; Grist, H. R.; van Geen, A.
2013-12-01
Previous work in Soledad Basin, located off Baja California Sur in the eastern subtropical Pacific, supports a La Niña-like mean-state response to enhanced radiative forcing at both orbital and millennial (solar) timescales during the Holocene. Mg/Ca measurements on the planktic foraminifer Globigerina bulloides indicate cooling when insolation is higher, consistent with an 'ocean dynamical thermostat' response that shoals the thermocline and cools the surface in the eastern tropical Pacific. Some, but not all, numerical models simulate reduced ENSO variance (less frequent and/or less intense events) when the Pacific is driven into a La Niña-like mean state by radiative forcing. In principle, the question of ENSO variance can be examined by measuring individual planktic foraminiferal tests from within a sample interval. Koutavas et al. (2006) used δ18O on single specimens of Globigerinoides ruber from the eastern equatorial Pacific to demonstrate a 50% reduction in variance at ~6 ka compared to ~2 ka, consistent with the sense of the model predictions at the orbital scale. Here we adapt this approach to Mg/Ca and apply it to the millennial-scale question. We present Mg/Ca measured on single specimens of G. bulloides (cold season) and G. ruber (warm season) from three time slices in Soledad Basin: the 20th century, the warm interval (and solar low) at 9.3 ka, and the cold interval (and solar high) at 9.8 ka. Each interval is uniformly sampled over a ~100-yr (~10-cm or more) window to ensure that our variance estimate is not biased by decadal-scale stochastic variability. In principle we can distinguish between changing ENSO variability and changing seasonality: a reduction in ENSO variance would narrow both the G. bulloides and G. ruber temperature distributions without necessarily changing the distance between their two medians, while a reduction in seasonality would cause the two species' distributions to move closer together.
Linear Minimum variance estimation fusion
ZHU Yunmin; LI Xianrong; ZHAO Juan
2004-01-01
This paper shows that general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending Gauss-Markov estimation to the random parameter case of distributed estimation fusion in the LMV setting. In this setting, the fused estimator is a weighted sum of the local estimates, obtained from a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to the above optimization problem, which depends only on the covariance matrix C_k. Third, if the a priori information, i.e. the expectation and covariance, of the estimated quantity is unknown, a necessary and sufficient condition is presented for the above LMV fusion to become the best unbiased LMV estimation with known prior information. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of C_k for a class of multisensor linear systems with coupled measurement noises.
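As a minimal, hedged illustration of the LMV idea, consider the scalar special case with uncorrelated local errors: for unbiased local estimates, the weights minimizing the fused variance under the sum-to-one (unbiasedness) constraint are proportional to the inverse variances. The coupled-noise case treated in the paper requires the full covariance matrix C_k; the numbers below are invented for illustration:

```python
import numpy as np

def lmv_fuse(estimates, variances):
    """Fuse unbiased local estimates by linear minimum variance:
    weights proportional to inverse variances and summing to one
    (the unbiasedness constraint). Assumes uncorrelated local errors."""
    inv_var = 1.0 / np.asarray(variances, dtype=float)
    w = inv_var / inv_var.sum()
    fused = w @ np.asarray(estimates, dtype=float)
    fused_var = 1.0 / inv_var.sum()
    return fused, fused_var

# Two sensors measuring the same quantity with different noise levels.
fused, fused_var = lmv_fuse([10.2, 9.8], [1.0, 4.0])
```

The fused variance 1/(1/1.0 + 1/4.0) = 0.8 is below the variance of the best individual sensor, which is the point of the construction.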
Variance estimation in the analysis of microarray data
Wang, Yuedong
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
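The SIMEX and semiparametric estimators described above are too involved for a short sketch, but the underlying small-replicate problem is easy to demonstrate: with n = 3 replicates, gene-wise sample variances scatter wildly, and even a crude shrinkage toward a pooled value (a standard moderation idea used here only for illustration, not the authors' method) reduces mean squared error. All parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_reps, true_var = 5000, 3, 1.0

# Every gene shares the same true variance; only 2 degrees of freedom each.
data = rng.normal(0.0, np.sqrt(true_var), size=(n_genes, n_reps))
s2 = data.var(axis=1, ddof=1)          # unreliable gene-wise variances
pooled = s2.mean()                      # pooled variance across all genes
shrunk = 0.5 * pooled + 0.5 * s2        # simple 50/50 shrinkage (illustrative)

mse_raw = np.mean((s2 - true_var) ** 2)
mse_shrunk = np.mean((shrunk - true_var) ** 2)
```

With two degrees of freedom the standard deviation of each raw variance estimate is as large as the variance itself, which is exactly the "lack of degrees of freedom" problem the abstract refers to.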
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
Corominas-Murtra, Bernat; Thurner, Stefan
2016-01-01
Sample Space Reducing processes (SSRP) offer an alternative new mechanism to understand the emergence of scaling in countless phenomena. We demonstrate that the scaling exponents associated to the dynamics of SSRPs converge to Zipf's law for a large class of systems. We show that Zipf's law emerges as a generic feature of diffusion on directed networks, regardless of its details, and that the exponent of the visiting time distribution is related to the amount of cycles in the network. These results are relevant for a series of applications in traffic, transport, and supply chain management.
Cosmological N-body simulations with suppressed variance
Angulo, Raul E.; Pontzen, Andrew
2016-10-01
We present and test a method that dramatically reduces variance arising from the sparse sampling of wavemodes in cosmological simulations. The method uses two simulations which are fixed (the initial Fourier mode amplitudes are fixed to the ensemble average power spectrum) and paired (with initial modes exactly out of phase). We measure the power spectrum, monopole and quadrupole redshift-space correlation functions, halo mass function and reduced bispectrum at z = 1. By these measures, predictions from a fixed pair can be as precise on non-linear scales as an average over 50 traditional simulations. The fixing procedure introduces a non-Gaussian correction to the initial conditions; we give an analytic argument showing why the simulations are still able to predict the mean properties of the Gaussian ensemble. We anticipate that the method will drive down the computational time requirements for accurate large-scale explorations of galaxy bias and clustering statistics, and facilitate the use of numerical simulations in cosmological data interpretation.
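A one-dimensional toy analogue of the fixing and pairing procedure can be written directly in NumPy: amplitudes are pinned to the ensemble power spectrum, so the measured power carries no realization scatter at all, and the paired field has every mode phase-shifted by π, which in real space is simply the negated field. The grid size and the power-law spectrum are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
k = np.fft.rfftfreq(n, d=1.0)[1:]        # wavenumbers, DC mode excluded
power = k ** -1.5                         # assumed toy power spectrum P(k)

# "Fixed": amplitudes set exactly to sqrt(P(k)); only phases are random.
phases = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
modes = np.sqrt(power) * np.exp(1j * phases)

# "Paired": every initial mode exactly out of phase (shifted by pi).
modes_paired = -modes

def to_real_space(m):
    full = np.concatenate(([0.0 + 0.0j], m))  # restore the zeroed DC mode
    return np.fft.irfft(full, n=n)

field_a = to_real_space(modes)
field_b = to_real_space(modes_paired)

measured_power = np.abs(modes) ** 2       # no realization scatter at all
```

In a traditional simulation `measured_power` would scatter around P(k) with chi-squared noise per mode; fixing removes that scatter by construction, and averaging the pair cancels terms that are odd in the field.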
Freissinet, Caroline; McAdam, Amy; Archer, Doug; Buch, Arnaud; Eigenbrode, Jen; Franz, Heather; Glavin, Daniel; Ming, Doug; Navarro-Gonzalez, Rafael; Steele, Andrew; Stern, Jen; Mahaffy, Paul; SAM, The; MSL science Teams
2013-04-01
The SAM instrument suite onboard the Mars Science Laboratory (MSL) Curiosity Rover detected sulfur-bearing compounds during pyrolysis of soil fines obtained from aeolian material at Rocknest in Gale Crater. SO2 and H2S were identified by the quadrupole mass spectrometer (QMS), both in direct evolved gas analysis mass spectrometry (EGA-MS) and after gas chromatograph separation (GC-MS) [1]. In EGA-MS, the 34 Da trace shows at least three peaks. The first peak is evolved at relatively low temperature (T), near 400°C, and the other peaks evolved as part of a "hump" at higher T, between ~500°C and ~800°C. The higher-T releases at 34 Da occur at temperatures close to, but not exactly coincident with, an evolution of SO2 from the samples. We hypothesize that these 34 Da releases are due to H2S. This assertion is supported by peaks in the 35 and 36 Da traces at the same T. The lower-T release of 34 Da species corresponds to a large O2 release from the Rocknest samples, and can be attributed for the most part to an isotopologue of O2. However, the GC-MS analysis of the temperature cut containing this first evolved peak displays evidence of H2S, based on a comparison of the mass spectrum to a NIST library. Therefore, we propose that H2S must be contributing to the 400°C peak. Quantification of H2S from GC-MS shows less than 1 nmol of this species. It is unclear what the source of this lower-T H2S is and how sulfur remains in its reduced form instead of undergoing oxidation to SO2 at the temperature where O2 is evolved; laboratory work with relevant analogs to inform these questions is ongoing. An initial hypothesis is that the low-temperature H2S is the product of a reaction between an S-bearing phase and a hydrogen-bearing phase, such as the abundant water evolved below 500°C from the sample. Potential sources of this water are adsorbed water or mineral structural water. There is also EGA-MS evidence of reaction of reduced S with CO2 in the pyrolysis oven to form
Kim, Yong-Hyun; Kim, Ki-Hyun
2012-10-02
To explore the lowest attainable detection range for volatile organic compounds (VOCs) in air, a high-sensitivity analytical system was investigated by coupling the thermal desorption (TD) technique with gas chromatography (GC) and time-of-flight (TOF) mass spectrometry (MS). The performance of the TD-GC/TOF MS system was evaluated using liquid standards of 19 target VOCs prepared in the range of 35 pg to 2.79 ng per μL. Studies were carried out in both total ion chromatogram (TIC) and extracted ion chromatogram (EIC) modes. EIC mode was used for calibration to reduce background and improve signal-to-noise. The detectability of the 19 target VOCs, assessed in terms of the method detection limit (MDL, per the US EPA definition) and the limit of detection (LOD), averaged 5.90 pg and 0.122 pg, respectively, with a mean coefficient of determination (R(2)) of 0.9975. The minimum quantifiable mass of target analytes, when determined using real air samples by the TD-GC/TOF MS, is highly comparable to the detection limits determined experimentally from standards. In fact, volumes for the actual detection of the major aromatic VOCs like benzene, toluene, and xylene (BTX) in ambient air samples were as low as 1.0 mL in the 0.11-2.25 ppb range. It was thus possible to demonstrate that most target compounds, including those in low abundance, could be reliably quantified at concentrations down to 0.1 ppb at sample volumes of less than 10 mL. The unique sensitivity of this advanced analytical system can ultimately lead to a shift in field sampling strategy: smaller air sample volumes facilitate faster, simpler air sampling (e.g., use of gas syringes rather than the relative complexity of pumps or bags/canisters), with greatly reduced risk of analyte breakthrough and minimal interference, e.g., from atmospheric humidity. The improved detection limits offered by this system can also enhance accuracy and measurement precision.
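The US EPA method detection limit mentioned above is computed as the one-sided 99% Student-t quantile (for n−1 degrees of freedom) times the standard deviation of replicate low-level spike analyses. A minimal sketch with seven hypothetical replicates; the measurement values and units are invented for illustration, and the t quantile 3.143 for 6 degrees of freedom is taken from standard tables:

```python
import math

# Seven replicate analyses of a low-level standard (illustrative values, pg).
replicates = [5.1, 4.7, 5.5, 4.9, 5.3, 5.0, 4.8]

n = len(replicates)
mean = sum(replicates) / n
s = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))

# One-sided 99% Student-t quantile for n-1 = 6 degrees of freedom.
T_99_6DF = 3.143
mdl = T_99_6DF * s   # method detection limit, same units as the replicates
```

For these hypothetical values the replicate standard deviation is about 0.28 pg, giving an MDL just under 0.9 pg.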
Sample variance in weak lensing: how many simulations are required?
Petri, Andrea; May, Morgan
2016-01-01
Constraining cosmology using weak gravitational lensing consists of comparing a measured feature vector of dimension $N_b$ with its simulated counterpart. An accurate estimate of the $N_b\times N_b$ feature covariance matrix $\mathbf{C}$ is essential to obtain accurate parameter confidence intervals. When $\mathbf{C}$ is measured from a set of simulations, an important question is how large this set should be. To answer this question, we construct different ensembles of $N_r$ realizations of the shear field, using a common randomization procedure that recycles the outputs from a smaller number $N_s\leq N_r$ of independent ray-tracing $N$-body simulations. We study parameter confidence intervals as a function of ($N_s,N_r$) in the range $1\leq N_s\leq 200$ and $1\leq N_r\lesssim 10^5$. Previous work has shown that Gaussian noise in the feature vectors (from which the covariance is estimated) leads, at quadratic order, to an $O(1/N_r)$ degradation of the parameter confidence intervals. Using a variety of lensin...
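A hedged one-dimensional analogue shows why the size of the realization set matters: inverting a covariance estimated from few realizations biases the inferred precision upward by a factor ν/(ν−2), with ν the degrees of freedom (the 1-D case of the Hartlap-type correction), which propagates into degraded parameter confidence intervals. The realization count is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_real = 8          # realizations used to estimate the (here 1x1) covariance
n_trials = 50000    # Monte Carlo repetitions of the whole estimate

# True variance is 1, so the true inverse covariance is also 1.
samples = rng.normal(0.0, 1.0, size=(n_trials, n_real))
s2 = samples.var(axis=1, ddof=1)

mean_inv = np.mean(1.0 / s2)   # average estimated inverse covariance
```

With ν = 7 degrees of freedom the expected inversion bias is 7/5 = 1.4, a 40% overstatement of precision; the bias shrinks as the number of realizations grows, which is the trade-off the paper quantifies.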
How to measure redshift-space distortions without sample variance
McDonald, Patrick
2008-01-01
We show how to use multiple tracers of large-scale density with different biases to measure the redshift-space distortion parameter beta=f/b=(dlnD/dlna)/b (where D is the growth rate and a the expansion factor), to a much better precision than one could achieve with a single tracer, to an arbitrary precision in the low noise limit. In combination with the power spectrum of the tracers this allows a much more precise measurement of the bias-free velocity divergence power spectrum, f^2 P_m - in fact, in the low noise limit f^2 P_m can be measured as well as would be possible if velocity divergence was observed directly, with rms improvement factor ~[5.2(beta^2+2 beta+2)/beta^2]^0.5 (e.g., ~10 times better than a single tracer for beta=0.4). This would allow a high precision determination of f D as a function of redshift with an error as low as 0.1%. We find up to two orders of magnitude improvement in Figure of Merit for the Dark Energy equation of state relative to Stage II, a factor of several better than oth...
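The sample-variance cancellation argument can be made concrete with a noiseless toy: two tracers of the same matter realization have a mode-by-mode ratio that is a pure number, so the distortion parameter is recoverable from a single realization. The biases, growth rate, and single-μ treatment below are illustrative assumptions, and real surveys add shot noise that this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(4)
b1, b2, f, mu2 = 1.0, 2.0, 0.5, 1.0   # assumed biases, growth rate, mu^2

# One random realization of the matter modes: pure sample variance.
delta_m = rng.standard_normal(1000)
tracer1 = (b1 + f * mu2) * delta_m    # linear redshift-space tracers
tracer2 = (b2 + f * mu2) * delta_m

# The mode-by-mode ratio is the same number for every mode: the matter
# realization cancels, so the ratio carries no sample variance at all.
ratio = tracer1 / tracer2

# Recover f (hence beta = f/b) exactly from a single realization.
r = ratio[0]
f_recovered = (r * b2 - b1) / ((1.0 - r) * mu2)
```

Because the stochastic matter field divides out, the only error in the low-noise limit comes from the tracers' shot noise, which is the regime in which the paper's arbitrary-precision claim applies.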
Coates, P A; Ollerton, R L; Luzio, S D; Ismail, I S; Owens, D R
1993-11-01
Recent work in healthy subjects, the aged, and subjects with gestational diabetes or drug-induced insulin resistance using minimal model analysis of the tolbutamide-modified frequently sampled intravenous glucose tolerance test suggested that a reduced sampling regimen of 12 time points produced unbiased and generally acceptable estimates of insulin sensitivity (SI) and glucose effectiveness (SG) compared with a full sampling schedule of 30 time points. We have used data from 26 insulin-modified frequently sampled intravenous glucose tolerance tests in 21 subjects with NIDDM to derive and compare estimates of SI and SG from the full sampling schedule (SI(30), SG(30)) with those estimated from the suggested 12 time points (SI(12), SG(12)) and those estimated with the addition of a 25-min time point (SI(13), SG(13)). Percentage relative errors were calculated relative to the corresponding 30 time-point values. A statistically significant bias of 15% (97% confidence interval from 7.4 to 25.6%, interquartile range 25%) was introduced by the estimation of SI(12) but not SI(13) (1%, 97% confidence interval from -9.4 to 9.3%, interquartile range 21%). Results for SG(12) (-12%, 97% confidence interval from -46.7 to 1.2%, interquartile range 49%) and SG(13) (-5%, 97% confidence interval from -27.8 to 6.8%, interquartile range 37%) were statistically equivocal. The precision of estimation of SI(12), SG(12), and SG(13) measured by the interquartile range of the percentage relative errors was poor. The precision of determination measured by the median minimal model coefficient of variation was 18, 29, and 27% for SI(30), SI(12), and SI(13) and 9, 11, and 11% for SG(30), SG(12), and SG(13), respectively.(ABSTRACT TRUNCATED AT 250 WORDS)
Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan
2016-09-01
It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes (SSRPs) offer an alternative new mechanism to understand the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to -1 (Zipf's law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. The result that Zipf's law emerges as a generic feature of diffusion on networks, regardless of their details, and that the exponent of visiting times is related to the number of cycles in a network could be relevant for a series of applications in traffic, transport, and supply chain management.
Ghasemi, Fakhradin; Kalatpour, Omid; Moghimbeigi, Abbas; Mohammadfam, Iraj
2017-03-04
High-risk unsafe behaviors (HRUBs) are known as the main cause of occupational accidents. Considering the financial and societal costs of accidents and the limitations of available resources, there is an urgent need for managing unsafe behaviors at workplaces. The aim of the present study was to find strategies for decreasing the rate of HRUBs using an integrated approach combining the safety behavior sampling technique and Bayesian network analysis. A cross-sectional study. The Bayesian network was constructed using a focus group approach. The required data were collected using safety behavior sampling, and the parameters of the network were estimated using the Expectation-Maximization algorithm. Using sensitivity analysis and belief updating, it was determined which factors had the highest influence on unsafe behavior. Based on the Bayesian network analyses, safety training was the most important factor influencing employees' behavior at the workplace. High-quality safety training courses can reduce the rate of HRUBs by about 10%. Moreover, the rate of HRUBs increased with decreasing employee age. The rate of HRUBs was higher in the afternoon and on the last days of the week. Among the investigated variables, training was the most important factor affecting the safety behavior of employees. By holding high-quality safety training courses, companies would be able to reduce the rate of HRUBs significantly.
Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc
2016-03-01
One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.
Rauscher, Bernard J.; Moseley, S. H.; Arendt, R. G.; Fixsen, D.; Lindler, D.; Loose, M.
2012-01-01
In a previous paper, we described a method for significantly reducing the read noise of HAWAII-2RG (H2RG) and SIDECAR application specific integrated circuit (ASIC) based detector systems by making better use of reference signals. "Improved Reference Sampling & Subtraction" (IRS2; pronounced "IRS-square") is based on: (1) making better use of the H2RG's reference output, (2) sampling reference pixels more frequently in the time domain, and (3) optimal subtraction of both the reference output and reference pixels in the Fourier domain. Here we demonstrate that IRS2 works as expected using an engineering grade James Webb Space Telescope (JWST) SIDECAR ASIC and H2RG detector array. We were able to reduce the read noise per frame from 25 e- rms using traditional JWST readout to 10 e- rms per frame using IRS2. The only aspect of the system that we changed to make these impressive improvements was the SIDECAR ASIC readout software; we did not change the hardware.
Rauscher, Bernard J.; Arendt, Richard G.; Fixen, D. J.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Wilson, Donna V.; Xenophontos, Christos
2011-10-01
In a previous paper, we described a method for significantly reducing the read noise of HAWAII-2RG (H2RG) and SIDECAR application specific integrated circuit (ASIC) based detector systems by making better use of reference signals. "Improved Reference Sampling & Subtraction" (IRS2; pronounced "IRS-square") is based on: (1) making better use of the H2RG's reference output, (2) sampling reference pixels more frequently in the time domain, and (3) optimal subtraction of both the reference output and reference pixels in the Fourier domain. Here we demonstrate that IRS2 works as expected using an engineering grade James Webb Space Telescope (JWST) SIDECAR ASIC and H2RG detector array. We were able to reduce the read noise per frame from 25 e- rms using traditional JWST readout to 10 e- rms per frame using IRS2. The only aspect of the system that we changed to make these impressive improvements was the SIDECAR ASIC readout software; we did not change the hardware.
A Hold-out method to correct PCA variance inflation
Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai
2012-01-01
In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure was int...
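The variance inflation the Hold-out correction targets is easy to reproduce: when the dimensionality exceeds the sample size, the leading sample-covariance eigenvalues greatly overestimate the true ones. A sketch with isotropic data, where every true covariance eigenvalue equals 1; the sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 10, 100   # sample size much smaller than dimensionality

# Isotropic Gaussian data: every true covariance eigenvalue equals 1.
X = rng.standard_normal((n, d))
X = X - X.mean(axis=0)   # center, as PCA does

cov = X.T @ X / (n - 1)
top_eig = np.linalg.eigvalsh(cov)[-1]   # leading PCA variance estimate
```

Although the true variance along every direction is 1, the leading sample eigenvalue comes out an order of magnitude larger, and the sample covariance is rank-deficient, which is the ill-posed regime the paper addresses.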
Similarities Derived from 3-D Nonlinear Psychophysics: Variance Distributions.
Gregson, Robert A. M.
1994-01-01
The derivation of the variance of similarity judgments is made from the 3-D process in nonlinear psychophysics. The idea of separability of dimensions in metric space theories of similarity is replaced by one parameter that represents the degree of a form of interdimensional cross-sampling. (SLD)
Variance Components for NLS: Partitioning the Design Effect.
Folsom, Ralph E., Jr.
This memorandum demonstrates a variance components methodology for partitioning the overall design effect (D) for a ratio mean into stratification (S), unequal weighting (W), and clustering (C) effects, so that D = WSC. In section 2, a sample selection scheme modeled after the National Longitudinal Study of the High School Class of 1972 (NLS)…
The phenotypic variance gradient - a novel concept.
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-11-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
Expected Stock Returns and Variance Risk Premia
Bollerslev, Tim; Zhou, Hao
predicting high (low) future returns. The magnitude of the return predictability of the variance risk premium easily dominates that afforded by standard predictor variables like the P/E ratio, the dividend yield, the default spread, and the consumption-wealth ratio (CAY). Moreover, combining the variance risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed
da Jornada, Felipe H.; Qiu, Diana Y.; Louie, Steven G.
2017-01-01
First-principles calculations based on many-electron perturbation theory methods, such as the ab initio GW and GW plus Bethe-Salpeter equation (GW-BSE) approach, are reliable ways to predict quasiparticle and optical properties of materials, respectively. However, these methods involve more care in treating the electron-electron interaction and are considerably more computationally demanding when applied to systems with reduced dimensionality, since the electronic confinement leads to a slower convergence of sums over the Brillouin zone due to a much more complicated screening environment that manifests in the "head" and "neck" elements of the dielectric matrix. Here we present two schemes to sample the Brillouin zone for GW and GW-BSE calculations: the nonuniform neck subsampling method and the clustered sampling interpolation method, which can respectively be used for a family of single-particle problems, such as GW calculations, and for problems involving the scattering of two-particle states, such as when solving the BSE. We tested these methods on several few-layer semiconductors and graphene and show that they perform a much more efficient sampling of the Brillouin zone and yield two to three orders of magnitude reduction in the computer time. These two methods can be readily incorporated into several ab initio packages that compute electronic and optical properties through the GW and GW-BSE approaches.
Variance components for body weight in Japanese quails (Coturnix japonica)
RO Resende
2005-03-01
The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the proportion of the phenotypic variance due to maternal environment, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
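The reported heritabilities can be roughly sanity-checked from the variance components above, bearing in mind that a ratio of posterior means need not equal the posterior mean of the ratio, so agreement is only approximate:

```python
# Posterior means of variance components reported in the abstract,
# ordered as [hatch, day 7, day 14, day 21, day 28].
additive = [0.15, 4.18, 14.62, 27.18, 32.68]
maternal = [0.23, 1.29, 2.76, 4.12, 5.16]
residual = [0.084, 6.43, 22.66, 31.21, 30.85]

# Heritability = additive variance over total phenotypic variance;
# maternal proportion = maternal variance over total phenotypic variance.
heritability = [a / (a + m + e)
                for a, m, e in zip(additive, maternal, residual)]
maternal_prop = [m / (a + m + e)
                 for a, m, e in zip(additive, maternal, residual)]
```

The ratios reproduce the reported values 0.33-0.47 (heritability) and 0.50 dropping to about 0.07-0.08 (maternal proportion) to within about 0.01, which is the expected size of the mean-of-ratios versus ratio-of-means discrepancy.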
Validation technique using mean and variance of kriging model
Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)
2007-07-01
Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only incurs considerable computational cost but also cannot quantitatively measure the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.
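The predictive mean and variance that such a criterion relies on come directly from the kriging (Gaussian process) equations. A minimal noise-free sketch with an assumed squared-exponential kernel and toy data; the maximum entropy sampling loop itself is not reproduced here:

```python
import numpy as np

def rbf(a, b, length_scale=1.0):
    """Squared-exponential (RBF) covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# Toy design points and noise-free responses (assumptions for illustration).
x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.sin(x_train)
x_test = np.array([1.5, 10.0])   # one interpolation point, one far point

K = rbf(x_train, x_train) + 1e-10 * np.eye(x_train.size)  # jitter
K_star = rbf(x_test, x_train)

# Kriging predictive mean and variance at the test points.
alpha = np.linalg.solve(K, y_train)
mean = K_star @ alpha
var = np.diag(rbf(x_test, x_test) - K_star @ np.linalg.solve(K, K_star.T))
```

Near the design points the predictive variance is nearly zero, while far from them it reverts to the prior variance of one; a sequential sampling criterion monitors exactly these two quantities to decide where, and whether, to keep sampling.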
The positioning algorithm based on feature variance of billet character
Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang
2015-12-01
In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scene. Since there are three rows of characters on each steel billet, we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and thereby accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.
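The paper's recursive multilevel variant is not specified in detail here, but the variance criterion such segmentation builds on is the classic Otsu rule: choose the threshold maximizing between-class variance, which is equivalent to minimizing total intra-class variance. A hedged sketch on a synthetic billet-like image (the image statistics are invented):

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold maximizing between-class variance (equivalently,
    minimizing total intra-class variance) over an 8-bit histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability
    mu = np.cumsum(prob * np.arange(256))    # class-0 cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[np.isnan(sigma_b)] = 0.0         # empty-class thresholds
    return int(np.argmax(sigma_b))

rng = np.random.default_rng(6)
# Synthetic billet image: dark background plus a bright character band.
img = rng.normal(50, 5, size=(100, 100))
img[40:60, :] = rng.normal(200, 5, size=(20, 100))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
```

On this bimodal image the selected threshold falls in the gap between the background and character intensities, so thresholding isolates exactly the bright band.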
Russell, Matthew R; Lilley, Kathryn S
2012-12-21
The biological variance in protein expression of interest to biologists can only be accessed if the technical variance of the protein quantification method is low compared with the biological variance. Technical variance depends on the protocol employed within a quantitative proteomics experiment and accumulates with every additional step. The magnitude of additional variance incurred by each step of a protocol should be determined to enable the design of experiments maximally sensitive to differential protein expression. Metabolic labelling techniques for MS-based quantitative proteomics enable labelled and unlabelled samples to be combined at the tissue level. It has been widely assumed, although not yet empirically verified, that early combination of samples minimises technical variance in relative quantification. This study presents a pipeline to determine the variance incurred at each stage of a common quantitative proteomics protocol involving metabolic labelling. We apply this pipeline to determine whether early combination of samples in a protocol leads to a significant reduction in experimental variance. We also identify which stage within the protocol is associated with maximum variance. This provides a blueprint by which the variance associated with each stage of any protocol can be dissected and utilised to inform optimal experimental design.
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs... PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for... shall modify the tag, label, or other certification required by § 1010.2 to state: (1) That the...
Analysis of variance for model output
Jansen, M.J.W.
1999-01-01
A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis-of-variance …
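A first-order variance component of the kind considered here, Var(E[Y | X1]), can be estimated by a brute-force double-loop Monte Carlo. The sketch below is illustrative only (it is not the paper's estimators) and uses a toy additive model whose component is known to be Var(X1) = 1/12:

```python
import random

def first_order_variance(f, n_outer=2000, n_inner=200, seed=1):
    """Estimate Var(E[Y | X1]) for Y = f(x1, x2) with independent
    uniform(0, 1) inputs, via a double-loop Monte Carlo."""
    rng = random.Random(seed)
    cond_means = []
    for _ in range(n_outer):
        x1 = rng.random()
        # Inner loop: average over X2 to approximate E[Y | X1 = x1]
        m = sum(f(x1, rng.random()) for _ in range(n_inner)) / n_inner
        cond_means.append(m)
    mu = sum(cond_means) / n_outer
    return sum((m - mu) ** 2 for m in cond_means) / (n_outer - 1)

# Toy additive model: Y = x1 + x2, so Var(E[Y | X1]) = Var(X1) = 1/12.
v1 = first_order_variance(lambda a, b: a + b)
```

Dedicated estimators (such as those developed in this line of work) obtain the same components far more efficiently than the double loop.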
The Correct Kriging Variance Estimated by Bootstrapping
den Hertog, D.; Kleijnen, J.P.C.; Siem, A.Y.D.
2004-01-01
The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrapping …
Nonlinear Epigenetic Variance: Review and Simulations
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Variance Risk Premia on Stocks and Bonds
Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea
is different from the equity variance risk premium. Third, the conditional correlation between stock and bond market variance risk premium switches sign often and ranges between -60% and +90%. We then show that these stylized facts pose a challenge to standard consumption-based asset pricing models....
Mhaidat, Fatin; ALharbi, Bassam H. M.
2016-01-01
This study aimed at identifying the level of depression and sense of insecurity among a sample of female refugee adolescents, and the impact of an indicative program for reducing cognitive distortions in reducing depression and their sense of insecurity. The study sample consisted of 220 female refugee adolescents, 7th to 1st secondary stage, at…
Portfolio optimization with mean-variance model
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio assigns different weights to the component stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
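The mean-variance idea (minimize portfolio variance subject to a target expected return) can be sketched with a toy brute-force grid search over three-asset long-only weights; the means and covariance below are invented for illustration, not the FBMKLCI data, and a real implementation would use a quadratic-programming solver:

```python
def min_variance_for_target(mu, cov, target, step=0.01):
    """Brute-force the minimum-variance long-only 3-asset portfolio
    whose expected return is within `step` of `target`.
    Returns (variance, weights) or None if no grid point qualifies."""
    best = None
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            w = (i * step, j * step, 1 - i * step - j * step)
            ret = sum(wi * mi for wi, mi in zip(w, mu))
            if abs(ret - target) > step:  # keep only near-target portfolios
                continue
            var = sum(w[a] * w[b] * cov[a][b]
                      for a in range(3) for b in range(3))
            if best is None or var < best[0]:
                best = (var, w)
    return best

# Illustrative inputs: three assets with increasing return and variance.
mu = (0.05, 0.10, 0.15)
cov = [[0.04, 0.0, 0.0], [0.0, 0.09, 0.0], [0.0, 0.0, 0.16]]
result = min_variance_for_target(mu, cov, target=0.10)
```

Note that the diversified solution carries less variance (risk) than holding only the middle asset, which matches the target return on its own with variance 0.09.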
Use of High-Frequency In-Home Monitoring Data May Reduce Sample Sizes Needed in Clinical Trials.
Hiroko H Dodge
walking speed collected at baseline, 262 subjects are required. Similarly for computer use, 26 subjects are required.Individual-specific thresholds of low functional performance based on high-frequency in-home monitoring data distinguish trajectories of MCI from NC and could substantially reduce sample sizes needed in dementia prevention RCTs.
Salmon, Michael; Carthy, Raymond R.; Lohmann, Catherine M. F.; Lohmann, Kenneth J.; Wyneken, Jeanette
2012-01-01
In numerous studies involving hatchling sea turtles, researchers have collected small numbers of hatchlings from nests a few hours before the turtles would otherwise have emerged naturally. This procedure makes it possible to do experiments in which the behavioral or physiological responses of numerous hatchlings must be tested in a limited period of time, and also allows hatchlings to be released back into the sea in time to migrate offshore before dawn. In principle, however, the procedure might inadvertently reduce nest productivity (the number of hatchlings that successfully leave the nest), if digging into a nest prior to emergence somehow reduces the ability of the remaining turtles to emerge. We compared nest productivity in 67 experimental loggerhead nests, from which we removed 10 hatchlings before a natural emergence, to 95 control nests left undisturbed before a natural emergence. The 2 groups showed no statistical differences in productivity. We conclude that taking a few hatchlings from a loggerhead nest shortly before a natural emergence has no negative impact on hatchling production if sampling is done with care at locations where there are few nest predators, and at sites where an emergence can be predicted because nest deposition dates are known.
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance-based and median-variance-based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each level of return compared to the mean-variance approach.
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander
2013-01-01
… of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different … models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic …
Earp Madalene A
2011-11-01
Abstract Background Until recently, genome-wide association studies (GWAS) have been restricted to research groups with the budget necessary to genotype hundreds, if not thousands, of samples. Replacing individual genotyping with genotyping of DNA pools in Phase I of a GWAS has proven successful, and dramatically altered the financial feasibility of this approach. When conducting a pool-based GWAS, how well SNP allele frequency is estimated from a DNA pool will influence a study's power to detect associations. Here we address how to control the variance in allele frequency estimation when DNAs are pooled, and how to plan and conduct the most efficient well-powered pool-based GWAS. Methods By examining the variation in allele frequency estimation on SNP arrays between and within DNA pools, we determine how array variance [var(e_array)] and pool-construction variance [var(e_construction)] contribute to the total variance of allele frequency estimation. This information is useful in deciding whether replicate arrays or replicate pools are most useful in reducing variance. Our analysis is based on 27 DNA pools ranging in size from 74 to 446 individual samples, genotyped on a collective total of 128 Illumina beadarrays: 24 1M-Single, 32 1M-Duo, and 72 660-Quad. Results For all three Illumina SNP array types our estimates of var(e_array) were similar, between 3-4 × 10^-4 for normalized data. Var(e_construction) accounted for between 20-40% of pooling variance across 27 pools in normalized data. Conclusions We conclude that relative to var(e_array), var(e_construction) is of less importance in reducing the variance in allele frequency estimation from DNA pools; however, our data suggest that on average it may be more important than previously thought. We have prepared a simple online tool, PoolingPlanner (available at http://www.kchew.ca/PoolingPlanner/), which calculates the effective sample size (ESS) of a DNA pool given a range of replicate array values. ESS can …
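The effective-sample-size idea can be sketched as follows. The variance model and default parameter values here are illustrative assumptions (binomial sampling variance p(1-p)/2N plus pool-construction and array noise), not PoolingPlanner's exact formula:

```python
def effective_sample_size(n, p=0.5, var_array=3.5e-4,
                          var_construction=1.0e-4, k_arrays=1):
    """Effective sample size of a DNA pool: the number of individually
    genotyped samples that would give the same allele-frequency variance.

    Illustrative model: binomial sampling variance over 2n chromosomes,
    plus pool-construction variance, plus array variance averaged over
    k_arrays replicate arrays.
    """
    sampling_var = p * (1 - p) / (2 * n)
    total_var = sampling_var + var_construction + var_array / k_arrays
    # Solve p(1-p)/(2*ESS) = total_var for ESS.
    return p * (1 - p) / (2 * total_var)
```

Under this model the ESS is always below the number of pooled individuals, and adding replicate arrays recovers some of the loss by shrinking the array-variance term.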
Karunathilaka, Sanjeewa R; Farris, Samantha; Mossoba, Magdi M; Moore, Jeffrey C; Yakes, Betsy Jean
2016-06-01
There is a need to develop rapid tools to screen milk products for economically motivated adulteration. An understanding of the physiochemical variability within skim milk powder (SMP) and non-fat dry milk (NFDM) is the key to establishing the natural differences of these commodities prior to the development of non-targeted detection methods. This study explored the sources of variance in 71 commercial SMP and NFDM samples using Raman spectroscopy and principal component analysis (PCA) and characterised the largest number of commercial milk powders acquired from a broad number of international manufacturers. Spectral pre-processing using a gap-segment derivative transformation (gap size = 5, segment width = 9, fourth derivative) in combination with sample normalisation was necessary to reduce the fluorescence background of the milk powder samples. PC scores plots revealed no clear trends for various parameters, including day of analysis, powder type, supplier and processing temperatures, while the largest variance was due to irreproducibility in sample positioning. Significant chemical sources of variances were explained by using the spectral features in the PC loadings plots where four samples from the same manufacturer were determined to likely contain an additional component or lactose anomers, and one additional sample was identified as an outlier and likely containing an adulterant or differing quality components. The variance study discussed herein with this large, diverse set of milk powders holds promise for future use as a non-targeted screening method that could be applied to commercial milk powders.
Grammatical and lexical variance in English
Quirk, Randolph
2014-01-01
Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.
78 FR 14122 - Revocation of Permanent Variances
2013-03-04
... Occupational Safety and Health Administration Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is... into consideration these newly corrected cross references. DATES: The effective date of the...
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....
Variance components in discrete force production tasks.
Varadhan, S K M; Zatsiorsky, Vladimir M; Latash, Mark L
2010-09-01
The study addresses the relationships between task parameters and two components of variance, "good" and "bad", during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does ("bad") and does not ("good") affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding-up an accurate force production task would be accompanied by a drop in the regression coefficient linking the "bad" variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding-up the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. "Good" variance scaled linearly with force magnitude and did not depend on force rate. "Bad" variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding-up the cyclic tasks was associated with
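The "good"/"bad" variance split described above can be illustrated geometrically: the across-trials variance of the finger-force vector is divided into the component along the direction that changes total force ("bad") and the orthogonal remainder ("good"). A minimal numeric sketch with made-up data (not the study's mode-space analysis, which works in finger-mode coordinates):

```python
def good_bad_variance(trials):
    """Split across-trials variance of k finger forces into a component
    that changes total force ("bad") and one that does not ("good")."""
    n = len(trials)
    k = len(trials[0])
    means = [sum(t[j] for t in trials) / n for j in range(k)]
    total_var = 0.0  # sum of per-finger variances
    bad = 0.0        # variance along the normalized (1, ..., 1) axis
    for t in trials:
        dev = [t[j] - means[j] for j in range(k)]
        total_var += sum(d * d for d in dev)
        s = sum(dev) / k ** 0.5  # projection onto the total-force direction
        bad += s * s
    total_var /= n - 1
    bad /= n - 1
    return bad, total_var - bad  # ("bad", "good")

# Trials whose finger forces covary so that the total stays constant:
# all variance is "good" (total-force stabilizing).
trials = [[1, 2, 3, 4], [2, 1, 4, 3], [4, 3, 2, 1], [3, 4, 1, 2]]
bad, good = good_bad_variance(trials)
```

A synergy index like the one in the study then compares the normalized difference between the two components.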
Functional analysis of variance for association studies.
Olga A Vsevolozhskaya
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popularly used methods - SKAT and a previously proposed method based on functional linear models (FLM) - especially if the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity.
Comment on a Wilcox Test Statistic for Comparing Means When Variances Are Unequal.
Hsiung, Tung-Hsing; And Others
1994-01-01
The alternative proposed by Wilcox (1989) to the James second-order statistic for comparing population means when variances are heterogeneous can sometimes be invalid. The degree to which the procedure is invalid depends on differences in sample size, the expected values of the observations, and population variances. (SLD)
A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions...
Estimation of dominance variance in purebred Yorkshire swine.
Culbertson, M S; Mabry, J W; Misztal, I; Gengler, N; Bertrand, J K; Varona, L
1998-02-01
We used 179,485 Yorkshire reproductive and 239,354 Yorkshire growth records to estimate additive and dominance variances by Method R. Estimates were obtained for number born alive (NBA), 21-d litter weight (LWT), days to 104.5 kg (DAYS), and backfat at 104.5 kg (BF). The single-trait models for NBA and LWT included the fixed effects of contemporary group and regression on inbreeding percentage and the random effects mate within contemporary group, animal permanent environment, animal additive, and parental dominance. The single-trait models for DAYS and BF included the fixed effects of contemporary group, sex, and regression on inbreeding percentage and the random effects litter of birth, dam permanent environment, animal additive, and parental dominance. Final estimates were obtained from six samples for each trait. Regression coefficients for 10% inbreeding were found to be -.23 for NBA, -.52 kg for LWT, 2.1 d for DAYS, and 0 mm for BF. Estimates of additive and dominance variances expressed as a percentage of phenotypic variances were, respectively, 8.8 +/- .5 and 2.2 +/- .7 for NBA, 8.1 +/- 1.1 and 6.3 +/- .9 for LWT, 33.2 +/- .4 and 10.3 +/- 1.5 for DAYS, and 43.6 +/- .9 and 4.8 +/- .7 for BF. The ratio of dominance to additive variances ranged from .78 to .11.
Chao Han; Ben Yang; Wenshu Zuo; Yansong Liu; Gang Zheng; Li Yang; Meizhu Zheng
2016-01-01
Background: Although sentinel lymph node biopsy (SLNB) can accurately predict the status of axillary lymph node (ALN) metastasis, the high false-negative rate (FNR) of SLNB is still the main obstacle for the treatment of patients who receive SLNB instead of ALN dissection (ALND). The purpose of this study was to evaluate the clinical significance of SLNB combined with peripheral lymph node (PLN) sampling for reducing the FNR for breast cancer and to discuss the effect of "skip metastasis" on the FNR of SLNB. Methods: At Shandong Cancer Hospital Affiliated to Shandong University, between March 1, 2012 and June 30, 2015, the sentinel lymph nodes (SLNs) of 596 patients with breast cancer were examined using radiocolloids with a blue dye tracer. First, the SLNs were removed; then, the area surrounding the original SLNs was selected, and the visible lymph nodes in a field of 3-5 cm in diameter around the center (i.e., PLNs) were removed, avoiding damage to the structure of the breast. Finally, ALND was performed. The SLNs, PLNs, and remaining ALNs underwent pathologic examination, and the relationship between them was analyzed. Results: The identification rate of SLNs in the 596 patients was 95.1% (567/596); the metastasis rate of ALNs was 33.7% (191/567); the FNR of pure SLNB was 9.9% (19/191); and after the SLNs and PLNs were eliminated, the FNR was 4.2% (8/191), which was significantly decreased compared with the FNR before removal of PLNs (P = 0.028). According to the detected number (N) of SLNs, the patients were divided into four groups of N = 1, 2, 3, and ≥4; the FNR in these groups was 19.6%, 9.8%, 7.3%, and 2.3%, respectively. For the patients with ≤2 or ≤3 detected SLNs, the FNR after removal of PLNs was significantly decreased compared with that before removal of PLNs (N ≤ 2: 14.0% vs. 4.7%, P = 0.019; N ≤ 3: 12.2% vs. 4.7%, P = 0.021), whereas for patients with ≥4 detected SLNs, the decrease in FNR was not statistically significant (P = 1.000). In the entire cohorts …
2010-01-01
... Inspection Code Lot size ranges—Number of containers in lot Type of plan Sample size Acceptable quality... 7 Agriculture 2 2010-01-01 2010-01-01 false Sampling plans for reduced condition of container... Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE...
A PLL Exploiting Sub-Sampling of the VCO Output to Reduce In-band Phase Noise
Gao, X.; Klumperink, Eric A.M.; Boshali, Mounir; Nauta, Bram
2009-01-01
Abstract— In this paper, we present a 2.2-GHz low-jitter PLL based on sub-sampling. It uses a phase-detector/charge-pump (PD/CP) that sub-samples the VCO output with the reference clock. In contrast to what happens in a classical PLL, the PD/CP noise is not multiplied by N² in this sub-sampling PLL.
Estimating quadratic variation using realized variance
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
This paper looks at some recent work on estimating quadratic variation using realized variance (RV) - that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high-frequency financial return data. When the underlying process is a semimartingale … have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
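The RV estimator described above is simply the sum of squared high-frequency log-returns; a minimal sketch with made-up prices:

```python
import math

def realized_variance(prices):
    """Realized variance: sum of squared log-returns over a price path."""
    return sum(math.log(p1 / p0) ** 2
               for p0, p1 in zip(prices, prices[1:]))

# A path that goes up 1% and back down contributes two equal squared returns.
rv = realized_variance([100.0, 101.0, 100.0])
```

With M intraday observations per day, RV computed this way is the day's estimate of quadratic variation, and the paper's point is precisely that for finite M it can remain a noisy estimator of integrated variance.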
Towards reducing the cloud-induced sampling biases in MODIS LST data: a case study from Greenland
Karami, M.; Hansen, B. U.
2016-12-01
Satellite-derived Land Surface Temperature (LST) datasets are essential for characterizing climate change impacts on terrestrial ecosystems, as well as for a wide range of surface-atmosphere studies. Over the past decade and a half, NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) has provided the scientific community with LST estimates on a global scale with reasonable spatial resolution and revisit time. However, the use of MODIS LST for climate studies is complicated by the simple fact that the observations can only be made under clear-sky conditions. In regions with frequent overcast skies, this can result in the calculated climatic variables deviating from the actual surface conditions. In the present study, we propose and validate a framework based on model-driven downwelling radiation data from ERA-Interim and instantaneous LST observations from both MODIS Terra and Aqua, in order to minimize the clear-sky sampling bias. The framework is validated on a cloud-affected MODIS scene covering parts of Greenland (h15v02) and by incorporating in-situ data from a number of monitoring stations in the area. The results indicate that the proposed method is able to increase the number of daily LST estimates by a factor of 2.07 and to reduce the skewness of the monthly distribution of the successful estimates by a factor of 0.22. Because these improvements are achieved mainly by introducing data from partially overcast days, the estimated climatic variables show better agreement with the ground truth. The overall accuracy of the model in estimating in-situ mean daily LST remained satisfactory even after incorporating the daily downwelling radiation from ERA-Interim (RMSE = 0.41 K, R-squared = 0.992). Nonetheless, since technical constraints are expected to continue limiting the use of high-temporal-resolution satellites in high latitudes, more research is required to quantify and deal with the various types of cloud-induced biases present in the data from …
Integrating Variances into an Analytical Database
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Sources of variance in ocular microtremor.
Sheahan, N F; Coakley, D; Bolger, C; O'Neill, D; Fry, G; Phillips, J; Malone, J F
1994-02-01
This study presents a preliminary investigation of the sources of variance in the measurement of ocular microtremor frequency in a normal population. When the results from both experienced and relatively inexperienced operators are pooled, factors that contribute significantly to the total variance include the measurement procedure (p < 0.001), day-to-day variations within subjects (p < 0.001), and inter-subject differences (p < 0.01). Operator experience plays a role in determining the measurement precision: the intra-subject coefficient of variation is about 5% for a very experienced operator, and about 14% for a relatively inexperienced operator.
PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS
Daniel Menezes Cavalcante
2016-07-01
Portfolio optimization strategies are advocated as being able to produce stock portfolios with returns above market benchmarks. This study aims to determine whether portfolios based on the minimum-variance strategy, optimized by Modern Portfolio Theory, are in fact able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum-variance portfolio's performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
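For intuition, the global minimum-variance weight has a closed form in the two-asset case (the general case inverts the covariance matrix); a small sketch with illustrative numbers, not the BM&FBOVESPA data:

```python
def min_variance_weight(var1, var2, cov12):
    """Weight of asset 1 in the two-asset minimum-variance portfolio:
    w* = (var2 - cov12) / (var1 + var2 - 2*cov12);
    asset 2 receives 1 - w*.
    """
    return (var2 - cov12) / (var1 + var2 - 2 * cov12)

# Uncorrelated assets: the lower-variance asset gets the larger weight.
w1 = min_variance_weight(0.04, 0.09, 0.0)
```

Rolling this computation over sample windows (12, 36, 60, 120 months, as in the study) yields the sequence of portfolios whose out-of-sample performance is compared against the benchmarks.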
Analysis of variance in spectroscopic imaging data from human tissues.
Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit
2012-01-17
The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts to improve the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. By estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and identify the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines for designing statistically valid studies in the spectroscopic analysis of tissue.
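The "portion of explained variation" in such ANOVA models is, in the simplest one-way case, the between-group sum of squares over the total sum of squares (eta-squared); a toy sketch, not the comprehensive models of the study:

```python
def explained_variation(groups):
    """One-way ANOVA eta-squared: between-group SS / total SS.

    groups: list of lists of measurements, one list per group
    (e.g. per cell type or per measurement source).
    """
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2
                     for g in groups)
    return ss_between / ss_total
```

A value near 1 means the grouping factor explains almost all of the variance; near 0, almost none.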
Muhammad Qaiser Shahbaz
2007-01-01
Full Text Available A new approximate formula for sampling variance of Horvitz–Thompson (1952 estimator has been obtained. Empirical study of the approximate formula has been given to see its performance.
Sztepanacz, Jacqueline L; McGuigan, Katrina; Blows, Mark W
2017-08-01
The genetic basis of stochastic variation within a defined environment, and the consequences of such micro-environmental variance for fitness are poorly understood . Using a multigenerational breeding design in Drosophila serrata, we demonstrated that the micro-environmental variance in a set of morphological wing traits in a randomly mating population had significant additive genetic variance in most single wing traits. Although heritability was generally low (micro-environmental variance is an evolvable trait. Multivariate analyses demonstrated that the micro-environmental variance in wings was genetically correlated among single traits, indicating that common mechanisms of environmental buffering exist for this functionally related set of traits. In addition, through the dominance genetic covariance between the major axes of micro-environmental variance and fitness, we demonstrated that micro-environmental variance shares a genetic basis with fitness, and that the pattern of selection is suggestive of variance-reducing selection acting on micro-environmental variance. Copyright © 2017 by the Genetics Society of America.
Managing product inherent variance during treatment
Verdenius, F.
1996-01-01
The natural variance of agricultural product parameters complicates recipe planning for product treatment, i.e. the process of transforming a product batch from its initial state to a prespecified final state. For a specific product P, recipes are currently composed by human experts on the basis of
The Variance of Language in Different Contexts
申一宁
2012-01-01
language can be quite different (here referring to the meaning) in different contexts. And there are 3 categories of context: the culture, the situation and the cotext. In this article, we will analysis the variance of language in each of the 3 aspects. This article is written for the purpose of making people understand the meaning of a language under specific better.
Regression calibration with heteroscedastic error variance.
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
Formative Use of Intuitive Analysis of Variance
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
Linear transformations of variance/covariance matrices
Parois, P.J.A.; Lutz, M.
2011-01-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
Decomposition of variance for spatial Cox processes
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-01-01
Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...
Decomposition of variance for spatial Cox processes
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introducea general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...
Decomposition of variance for spatial Cox processes
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...
Genetic control of residual variance of yearling weight in Nellore beef cattle.
Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R
2017-04-01
There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates (variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting its presence beyond the scale effect. The DHGLM showed higher
张峰; 吕震宙; 崔利杰
2011-01-01
基于β面的截断重要抽样法可以用来求解单失效模式可靠性灵敏度.该方法在设计点处作失效面的虚拟切面β面,而β面将变量空间分割成重要抽样区域R和非重要抽样区域S.在R和S区域分别建立相应的截断重要抽样密度函数hR(x)和hs(x),从hR(x)和hs(x)中抽取的样本量按照R和S区域对可靠性灵敏度的贡献来分配,并通过迭代模拟计算来得到.本文推导了基于β面截断重要抽样法的可靠性灵敏度估计值方差和变异系数的计算公式,并将该方法推广应用到并联系统中.算例结果表明:在估计值相对误差小于2%、可靠性灵敏度估计值变异系数相同时,基于β面的截断重要抽样法的可靠性灵敏度估计所需的样本数比传统重要抽样法、β球截断重要抽样法计算量少.%A novel β hyper-plane based importance sampling method is presented to estimate reliability sensitivity of a structure. By introducing a virtual hyper-plane tangent to the failure surface, the variable space is separated into an importance region R and a unimportance region S, on which the truncated importance sampling functions hR(x) and hs(x) are established, respectively. The sampling numbers generated from hR(x) and hs(x) are dependent on the contribution of the reliability sensitivity, which is determined by the iterative simulations. The formulae of the reliability sensitivity estimation, the variance and the coefficient of variation are derived for the presented β hyper-plane importance sampling method. The presented method is suitable for the reliability sensitivity estimation of both the single failure mode and the multiple failure mode in parallel. Examples show that the proposed method is more efficient than the traditional importance sampling method and the β hyper-sphere importance sampling method, in the case that the variation coefficients of three estimations keep the same quantity and the relative errors of the
40 CFR 142.43 - Disposition of a variance request.
2010-07-01
... during the period of variance shall specify interim treatment techniques, methods and equipment, and... the specified treatment technique for which the variance was granted is necessary to protect...
Mean-Variance-Validation Technique for Sequential Kriging Metamodels
Lee, Tae Hee; Kim, Ho Sung [Hanyang University, Seoul (Korea, Republic of)
2010-05-15
The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean{sub 0} validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of mean{sub 0} validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique, because instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response evaluated. The error in the proposed validation technique resembles a root mean squared error, thus it can be used to determine a stop criterion for sequential sampling of metamodels.
Aukland, S M; Westerhausen, R; Plessen, K J
2011-01-01
BACKGROUND AND PURPOSE: Several studies suggest that VLBW is associated with a reduced CC size later in life. We aimed to clarify this in a prospective, controlled study of 19-year-olds, hypothesizing that those with LBWs had smaller subregions of CC than the age-matched controls, even after...
Modality-Driven Classification and Visualization of Ensemble Variance
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
Realized Variance and Market Microstructure Noise
Hansen, Peter R.; Lunde, Asger
2006-01-01
We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel......-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise its time-dependent and correlated with increments in the efficient price. This has important implications for volatility...... estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
Linear transformations of variance/covariance matrices.
Parois, Pascal; Lutz, Martin
2011-07-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance matrix. For the transformation of second-rank tensors it is suggested that the 3 × 3 matrix is re-written into a 9 × 1 vector. The transformation of the corresponding variance/covariance matrix is then straightforward and easily implemented into computer software. This method is applied in the transformation of anisotropic displacement parameters, the calculation of equivalent isotropic displacement parameters, the comparison of refinements in different space-group settings and the calculation of standard uncertainties of eigenvalues.
Variance and covariance of accumulated displacement estimates.
Bayer, Matthew; Hall, Timothy J
2013-04-01
Tracking large deformations in tissue using ultrasound can enable the reconstruction of nonlinear elastic parameters, but poses a challenge to displacement estimation algorithms. Such large deformations have to be broken up into steps, each of which contributes an estimation error to the final accumulated displacement map. The work reported here measured the error variance for single-step and accumulated displacement estimates using one-dimensional numerical simulations of ultrasound echo signals, subjected to tissue strain and electronic noise. The covariance between accumulation steps was also computed. These simulations show that errors due to electronic noise are negatively correlated between steps, and therefore accumulate slowly, whereas errors due to tissue deformation are positively correlated and accumulate quickly. For reasonably low electronic noise levels, the error variance in the accumulated displacement estimates is remarkably constant as a function of step size, but increases with the length of the tracking kernel.
Realized Variance and Market Microstructure Noise
Hansen, Peter R.; Lunde, Asger
2006-01-01
We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel......-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise its time-dependent and correlated with increments in the efficient price. This has important implications for volatility...... estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
Lícia P. S. Cruz
2008-01-01
Full Text Available This work presents a review of sampling and analytical methods that can be applied to atmospheric traces of reduced sulphur compounds (RSC in the atmosphere. Sampling methodology involving discontinuous methods with preconcentration is mostly used. For the most part, adsorption on solids and cryogenic capture are applied as a procedure. The analysis of these compounds has been done mainly by gas chromatography with FPD, fluorescence and spectrophotometry. Advantages and disadvantages of the methodologies are also mentioned in this paper, aiming to guide the reader towards the most appropriate choice of a sampling and analytical method for RSCs.
Boklund, Anette; Dahl, J.; Alban, L.
2013-01-01
Confirming freedom from disease is important for export of animals and animal products. In Denmark, an intensive surveillance program is in place for Aujeszky's disease (AD) and classical swine fever (CSF), including 34,974 blood samples tested for AD and 37,414 samples tested for CSF (2008 figures......). In the current system, 3.5% of sows and boars for export or slaughter are tested for both diseases, as well as all boars before entering boar stations. Furthermore, nucleus herds are tested every third month for classical swine fever. We investigated, whether the sample size could be reduced without compromising...
The Theory of Variances in Equilibrium Reconstruction
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of quality of effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Eigenvalue variance bounds for covariance matrices
Dallaporta, Sandrine
2013-01-01
This work is concerned with finite range bounds on the variance of individual eigenvalues of random covariance matrices, both in the bulk and at the edge of the spectrum. In a preceding paper, the author established analogous results for Wigner matrices and stated the results for covariance matrices. They are proved in the present paper. Relying on the LUE example, which needs to be investigated first, the main bounds are extended to complex covariance matrices by means of the Tao, Vu and Wan...
High-dimensional regression with unknown variance
Giraud, Christophe; Verzelen, Nicolas
2011-01-01
We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasize is put on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso esti- mator and some references are collected for some more general models, including multivariate regression and nonparametric regression.
Fractional constant elasticity of variance model
Ngai Hang Chan; Chi Tim Ng
2007-01-01
This paper develops a European option pricing formula for fractional market models. Although there exist option pricing results for a fractional Black-Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. European option pricing formula similar to that of the classical CEV model is obtained and a volatility skew pattern is revealed.
Fundamentals of exploratory analysis of variance
Hoaglin, David C; Tukey, John W
2009-01-01
The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises and the appendices give selected percentage points of the Gaussian, t, F chi-squared and studentized range distributions.
Discussion on variance reduction technique for shielding
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As the task of the engineering design activity of the international thermonuclear fusion experimental reactor (ITER), on 316 type stainless steel (SS316) and the compound system of SS316 and water, the shielding experiment using the D-T neutron source of FNS in Japan Atomic Energy Research Institute has been carried out. However, in these analyses, enormous working time and computing time were required for determining the Weight Window parameter. Limitation or complication was felt when the variance reduction by Weight Window method of MCNP code was carried out. For the purpose of avoiding this difficulty, investigation was performed on the effectiveness of the variance reduction by cell importance method. The conditions of calculation in all cases are shown. As the results, the distribution of fractional standard deviation (FSD) related to neutrons and gamma-ray flux in the direction of shield depth is reported. There is the optimal importance change, and when importance was increased at the same rate as that of the attenuation of neutron or gamma-ray flux, the optimal variance reduction can be done. (K.I.)
Replica approach to mean-variance portfolio optimization
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition is taking place. The out of sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent, but also the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point inversely proportional to the divergent estimation error.
The Parabolic variance (PVAR), a wavelet variance based on least-square fit
Vernotte, F; Bourgeois, P -Y; Rubiola, E
2015-01-01
The Allan variance (AVAR) is one option among the wavelet variances. However a milestone in the analysis of frequency fluctuations and in the long-term stability of clocks, and certainly the most widely used one, AVAR is not suitable when fast noise processes show up, chiefly because of the poor rejection of white phase noise. The modified Allan variance (MVAR) features high resolution in the presence of white PM noise, but it is poorer for slow phenomena because the wavelet spans over 50% longer time. This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the Linear Regression (LR) of phase data. The PVAR relates to the Omega frequency counter, which is the topics of a companion article [the reference to the article, or to the ArXiv manuscript, will be provided later]. The PVAR wavelet spans over 2 tau, the same of the AVAR wavelet. After setting the theoretical framework, we analyze the degrees of freedom and the detection of weak noise processes in...
Gender variance in childhood and sexual orientation in adulthood: a prospective study.
Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T
2013-11-01
Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. In understanding the extent to which the association applies for the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood in the general population. In 1983, we measured childhood gender variance, in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in
Smedslund Geir
2013-02-01
Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71, the outcome variables Numerical Rating Scales (NRS (pain, fatigue, disease activity, self-care ability, and emotional wellbeing and General Health Questionnaire (GHQ-20 were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05 for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five measures, the required sample size per group was reduced from 56 to 39 (32 for the GHQ-20, from 71 to 60 (55 for pain, 96 to 71 (73 for fatigue, 57 to 51 (48 for disease activity, 59 to 44 (45 for self-care, and 47 to 37 (33 for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
Levin, Bruce; Leu, Cheng-Shiun
2013-01-01
We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
Representative process sampling for reliable data analysis
Julius, Lars Petersen; Esbensen, Kim
2005-01-01
(sampling variances) can be reduced greatly however, and sampling biases can be eliminated completely, by respecting a simple set of rules and guidelines provided by TOS. A systematic approach for description of process heterogeneity furnishes in-depth knowledge about the specific variability of any 1-D lot...... of any hidden cycle, eliminating the risk of underestimating process variation. A brief description of selected hardware for extraction of samples from 1-D lots is provided in order to illustrate the key issues to consider when installing new, or optimizing existing sampling devices and procedures...
Estimation of the additive and dominance variances in South African Landrace pigs
Norris, D.; Varona Aguado, Luís; Visser, D. P.; Theron, H. E.; Voordewind, S. F.; Nesambuni, E. A.
2006-01-01
The objective of this study was to estimate dominance variance for number born alive (NBA), 21- day litter weight (LWT21) and interval between parities (FI) in South African Landrace pigs. A total of 26223 NBA, 21335 LWT21 and 16370 FI records were analysed. Bayesian analysis via Gibbs sampling was used to estimate variance components and genetic parameters were calculated from posterior distributions. Estimates of additive genetic variance were 0.669, 43.46 d2 and 9.02 kg2 for NBA, FI and LW...
Visual SLAM Using Variance Grid Maps
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
A relation between information entropy and variance
Pandey, Biswajit
2016-01-01
We obtain an analytic relation between the information entropy and the variance of a distribution in the regime of small fluctuations. We use a set of Monte Carlo simulations of different homogeneous and inhomogeneous distributions to verify the relation and also test it in a set of cosmological N-body simulations. We find that the relation is in excellent agreement with the simulations and is independent of number density and the nature of the distributions. The relation would help us to relate entropy to other conventional measures and widen its scope.
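The abstract does not quote the relation itself. As an illustration only (our expansion, not necessarily the paper's exact result), a second-order expansion of the Shannon entropy S around a uniform distribution over m cells gives ln m - S ≈ (m^2/2)·Var(f), which is easy to check numerically in the small-fluctuation regime:

```python
import math

def shannon_entropy(p):
    """Shannon entropy in nats of a discrete distribution p."""
    return -sum(x * math.log(x) for x in p if x > 0)

m = 8
# small fluctuations around the uniform probability 1/m (sum to zero)
deltas = [0.02, -0.01, 0.015, -0.025, 0.01, -0.005, 0.0, -0.005]
p = [(1.0 + d) / m for d in deltas]

variance = sum((x - 1.0 / m) ** 2 for x in p) / m  # Var(f) about 1/m
lhs = math.log(m) - shannon_entropy(p)             # entropy deficit
rhs = (m ** 2 / 2.0) * variance                    # small-fluctuation prediction
```

For fluctuations of a few percent the two sides agree to well below a part in a thousand, consistent with an analytic relation valid in the small-fluctuation regime.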
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2010-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...
EXPLANATORY VARIANCE IN MAXIMAL OXYGEN UPTAKE
Jacalyn J. Robert McComb
2006-06-01
Full Text Available The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n=19 males, n=13 females, ages 18-24 years) underwent the following testing procedures: (a) a 7-site skin fold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (%BF).
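The reported equation is a one-liner to apply (the 20% body-fat input below is only an example value):

```python
def predicted_vo2max(percent_body_fat):
    """VO2max (ml/kg/min) from the study's reported regression:
    VO2max = 56.14 - 0.92 * (%BF)."""
    return 56.14 - 0.92 * percent_body_fat

vo2 = predicted_vo2max(20.0)  # hypothetical participant with 20% body fat
```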
Dimension reduction based on weighted variance estimate
ZHAO JunLong; XU XingZhong
2009-01-01
In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes Sliced Average Variance Estimate (SAVE) as a special case. A bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension. This selected best estimate usually performs better than existing methods such as Sliced Inverse Regression (SIR), SAVE, etc. Many methods such as SIR and SAVE put the same weight on each observation to estimate the central subspace (CS). By introducing a weight function, WVE puts different weights on different observations according to the distance of observations from the CS. The weight function gives WVE very good performance in general and complicated situations, for example, when the distribution of the regressor deviates severely from the elliptical distribution on which many methods, such as SIR, are based. Compared with many existing methods, WVE is insensitive to the distribution of the regressor. The consistency of the WVE is established. Simulations comparing the performance of WVE with other existing methods confirm the advantage of WVE.
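SAVE, the special case named above, is compact enough to sketch. This is a generic textbook implementation (function name, slicing scheme, and the toy data are our choices, not the authors' WVE code): standardize the predictors, slice on the response, and take the leading eigenvectors of the averaged (I - Cov(Z | slice))^2 matrix.

```python
import numpy as np

def save_directions(X, y, n_slices=10):
    """Sliced Average Variance Estimation (SAVE), minimal sketch.
    Returns eigenvalues and eigenvectors (in whitened coordinates) of
    M = sum_h p_h (I - Cov(Z | slice h))^2; leading eigenvectors estimate
    the central subspace."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ inv_sqrt          # whitened predictors
    order = np.argsort(y)            # slice by the response
    M = np.zeros((p, p))
    for h in range(n_slices):
        idx = order[h * n // n_slices:(h + 1) * n // n_slices]
        D = np.eye(p) - np.cov(Z[idx], rowvar=False)
        M += (len(idx) / n) * (D @ D)
    w, V = np.linalg.eigh(M)
    return w[::-1], V[:, ::-1]       # sorted by decreasing eigenvalue

# toy check: y depends on x1 only, through a symmetric function
rng = np.random.default_rng(0)
X = rng.standard_normal((4000, 4))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(4000)
w, V = save_directions(X, y)
lead = V[:, 0]  # should align with the first coordinate axis
```

The symmetric response y = x1^2 is exactly the kind of structure SIR misses but SAVE (and hence WVE) recovers, because the conditional variance, not the conditional mean, varies across slices.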
Estimation of measurement variance in the context of environment statistics
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics would be required to produce higher-quality statistical information, for which timely, reliable and comparable data are needed. A lack of proper and uniform definitions and of unambiguous classifications poses serious problems in procuring good-quality data. These cause measurement errors. We consider the problem of estimating measurement variance so that some measures may be adopted to improve upon the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered here is two-stage sampling.
A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model
Hou Ying-li; Liu Guo-xin; Jiang Chun-lan
2015-01-01
In this paper, we focus on a constant elasticity of variance (CEV) model and want to find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.
Power and Sample Size Calculations for Contrast Analysis in ANCOVA.
Shieh, Gwowen
2017-01-01
Analysis of covariance (ANCOVA) is commonly used in behavioral and educational research to reduce the error variance and improve the power of analysis of variance by adjusting for the covariate effects. For planning and evaluating randomized ANCOVA designs, a simple sample-size formula has been proposed to account for the variance deflation factor in the comparison of two treatment groups. The objective of this article is to highlight an overlooked and potential problem of the existing approximation and to provide an alternative and exact solution of power and sample size assessments for testing treatment contrasts. Numerical investigations are conducted to reveal the relative performance of the two procedures as a reliable technique to accommodate the covariate features that make the ANCOVA design particularly distinctive. The described approach has important advantages over the current method in general applicability, methodological justification, and overall accuracy. To enhance the practical usefulness, computer algorithms are presented to implement the recommended power calculations and sample-size determinations.
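The variance deflation factor can be illustrated with a normal-approximation sketch: a baseline covariate correlated rho with the outcome deflates the error variance by (1 - rho^2), and the required sample size shrinks in proportion. The effect size, SD, and rho below are assumptions for illustration; the article's exact contrast-based solution differs from this approximation.

```python
import math

def ancova_n_per_group(delta, sd, rho):
    """Approximate n per group (alpha=.05 two-sided, power=.80) for a
    two-group comparison when a covariate with outcome correlation rho
    deflates the error variance by (1 - rho^2). Normal approximation."""
    z_a = 1.959964
    z_b = 0.841621
    var = sd ** 2 * (1.0 - rho ** 2)  # deflated error variance
    return math.ceil(2.0 * var * (z_a + z_b) ** 2 / delta ** 2)

n_anova = ancova_n_per_group(delta=5.0, sd=10.0, rho=0.0)   # no covariate
n_ancova = ancova_n_per_group(delta=5.0, sd=10.0, rho=0.6)  # strong covariate
```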
Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability
Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco
We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data...... and realized variances, our model allows us to infer the occurrence and size of extreme variance events, and to construct indicators signalling agents' sentiment towards future market conditions. Our results show that excess returns are to a large extent explained by fear or optimism towards future extreme variance......
Estimation of Epistatic Variance Components and Heritability in Founder Populations and Crosses
Young, Alexander I.; Durbin, Richard
2014-01-01
Genetic association studies have explained only a small proportion of the estimated heritability of complex traits, leaving the remaining heritability “missing.” Genetic interactions have been proposed as an explanation for this, because they lead to overestimates of the heritability and are hard to detect. Whether this explanation is true depends on the proportion of variance attributable to genetic interactions, which is difficult to measure in outbred populations. Founder populations exhibit a greater range of kinship than outbred populations, which helps in fitting the epistatic variance. We extend classic theory to founder populations, giving the covariance between individuals due to epistasis of any order. We recover the classic theory as a limit, and we derive a recently proposed estimator of the narrow sense heritability as a corollary. We extend the variance decomposition to include dominance. We show in simulations that it would be possible to estimate the variance from pairwise interactions with samples of a few thousand from strongly bottlenecked human founder populations, and we provide an analytical approximation of the standard error. Applying these methods to 46 traits measured in a yeast (Saccharomyces cerevisiae) cross, we estimate that pairwise interactions explain 10% of the phenotypic variance on average and that third- and higher-order interactions explain 14% of the phenotypic variance on average. We search for third-order interactions, discovering an interaction that is shared between two traits. Our methods will be relevant to future studies of epistatic variance in founder populations and crosses. PMID:25326236
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
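The ANOVA identity the authors exploit is Var(run mean) = between-input variance + within-run variance / n_patients, so the parameter-uncertainty component can be recovered by subtracting the estimated patient-level noise. A toy nested simulation (the model, sizes, and function names are invented for illustration, not the paper's method in detail):

```python
import random

def psa_variance_components(model, n_runs=200, n_patients=500, seed=1):
    """ANOVA-style PSA sketch for a patient-level model: each run draws a
    model input, simulates n_patients, and the between-input variance of
    the mean output is Var(run means) minus mean(within-run var)/n."""
    rng = random.Random(seed)
    run_means, run_vars = [], []
    for _ in range(n_runs):
        theta = rng.gauss(0.0, 1.0)  # sampled model input
        outputs = [model(theta, rng) for _ in range(n_patients)]
        m = sum(outputs) / n_patients
        v = sum((o - m) ** 2 for o in outputs) / (n_patients - 1)
        run_means.append(m)
        run_vars.append(v)
    grand = sum(run_means) / n_runs
    var_of_means = sum((m - grand) ** 2 for m in run_means) / (n_runs - 1)
    mean_within = sum(run_vars) / n_runs
    between = var_of_means - mean_within / n_patients
    return grand, between, mean_within

# hypothetical toy model: patient outcome = input effect + patient noise
def toy_model(theta, rng):
    return 2.0 * theta + rng.gauss(0.0, 3.0)

grand, between, within = psa_variance_components(toy_model)
```

For this toy model the true between-input variance is 4 and the within-run variance is 9; the estimator recovers both without needing an enormous number of patients per run, which is the computational saving the paper formalizes.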
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2011-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability...... that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending...... on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time....
Power Estimation in Multivariate Analysis of Variance
Jean François Allaire
2007-09-01
Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure, as in any statistical test, can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
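The three-step recipe (critical F, noncentrality, noncentral-F power) can be sketched with Monte Carlo so that only NumPy is needed; scipy.stats.ncf would give the exact value. The degrees of freedom and noncentrality below are example values, not taken from the paper.

```python
import numpy as np

def f_test_power_mc(df1, df2, noncentrality, alpha=0.05, n=200_000, seed=0):
    """Power of an F test via the abstract's three steps, with both the
    central and noncentral F distributions sampled by Monte Carlo."""
    rng = np.random.default_rng(seed)
    # step 1: critical value from the central F distribution
    f_crit = np.quantile(rng.f(df1, df2, size=n), 1.0 - alpha)
    # steps 2-3: power = P(noncentral F > critical value)
    return float(np.mean(rng.noncentral_f(df1, df2, noncentrality, size=n) > f_crit))

p_null = f_test_power_mc(3, 60, 0.0)   # no effect: power reduces to alpha
p_alt = f_test_power_mc(3, 60, 12.0)   # example noncentrality
```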
Expected Stock Returns and Variance Risk Premia
Bollerslev, Tim; Tauchen, George; Zhou, Hao
Motivated by the implications from a stylized self-contained general equilibrium model incorporating the effects of time-varying economic uncertainty, we show that the difference between implied and realized variation, or the variance risk premium, is able to explain a non-trivial fraction...... of the time series variation in post 1990 aggregate stock market returns, with high (low) premia predicting high (low) future returns. Our empirical results depend crucially on the use of "model-free," as opposed to Black- Scholes, options implied volatilities, along with accurate realized variation measures...... constructed from high-frequency intraday, as opposed to daily, data. The magnitude of the predictability is particularly strong at the intermediate quarterly return horizon, where it dominates that afforded by other popular predictor variables, like the P/E ratio, the default spread, and the consumption...
Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A
2013-09-01
contributed substantially to micro-environmental sensitivity. Addition of random regressions to the mean model did not reduce heterogeneity in residual variance and that genetic heterogeneity of residual variance was not simply an effect of an incomplete mean model. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition (or corner) where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.
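PVAR's least-squares wavelet is not reproduced here; for orientation, the overlapping Allan variance that PVAR is benchmarked against can be computed from phase (time-error) data with the standard second-difference formula (a textbook formula, not code from the paper):

```python
def allan_variance(phase, m, tau0=1.0):
    """Overlapping Allan variance from phase samples x[i] taken every
    tau0 seconds, at averaging time tau = m * tau0:
    AVAR(tau) = mean of (x[i+2m] - 2 x[i+m] + x[i])^2 / (2 tau^2)."""
    n = len(phase)
    if n < 2 * m + 1:
        raise ValueError("need at least 2*m + 1 phase samples")
    tau = m * tau0
    acc = 0.0
    for i in range(n - 2 * m):
        d = phase[i + 2 * m] - 2.0 * phase[i + m] + phase[i]
        acc += d * d
    return acc / (2.0 * tau * tau * (n - 2 * m))

# sanity check: a constant-frequency (linear phase) signal has zero AVAR,
# since the second difference annihilates linear drift of the phase
ramp = [0.5 * i for i in range(100)]
```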
Severina Carla Vieira Cunha Lima
2013-04-01
Full Text Available OBJECTIVE: The aim of this study was to describe the sources of dietary variance, and determine the variance ratios and the number of days needed for estimating the habitual diet of adolescents. METHODS: Two 24-hour food recalls were used for estimating the energy, macronutrient, fatty acid, fiber and cholesterol intakes of 366 adolescents attending public schools in Natal, Rio Grande do Norte, Brazil. The variance ratio between the intrapersonal and interpersonal variances, determined by Analysis of Variance, was calculated. The number of days needed for estimating the habitual intake of each nutrient was given by the hypothetical correlation (r>0.9) between the actual and observed nutrient intakes. RESULTS: Sources of interpersonal variation were higher for all nutrients and in both genders. Variance ratios were <1 for all nutrients, and higher in females. Two days of 24-hour recalls would be sufficient to accurately assess the intake of energy, carbohydrates, fiber, and saturated and monounsaturated fatty acids.
Vincenza Di Stefano
2009-11-01
Full Text Available The Multicomb variance reduction technique has been introduced in the Direct Monte Carlo Simulation for submicrometric semiconductor devices. The method has been implemented in bulk silicon. The simulations show that the statistical variance of hot electrons is reduced with some computational cost. The method is efficient and easy to implement in existing device simulators.
Facial Feature Extraction Method Based on Coefficients of Variances
Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang
2007-01-01
Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance in statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
Paul, Laiby; Smolders, Erik
2015-01-01
The anaerobic biotransformation of trichloroethylene (TCE) can be affected by competing electron acceptors such as Fe(III). This study assessed the role of Fe(III) reduction in the bioenhanced dissolution of TCE dense non-aqueous phase liquid (DNAPL). Columns were set up as 1-D diffusion cells consisting of a lower DNAPL layer, a layer with an aquifer substratum and an upper water layer that was regularly refreshed. The substrata used were either inert sand or sand coated with 2-line ferrihydrite (HFO) or two environmental Fe(III)-containing samples. The columns were inoculated with KB-1 and were repeatedly fed with formate. In none of the diffusion cells was vinyl chloride or ethene detected, while dissolved and extractable Fe(II) increased strongly during 60 d of incubation. The cis-DCE concentration peaked at 4.0 cm from the DNAPL (inert sand) while it was at 3.4 cm (sand+HFO), 1.7 cm and 2.5 cm (environmental samples). The TCE concentration gradients near the DNAPL indicate that the DNAPL dissolution rate was larger than that in an abiotic cell by factors of 1.3 (inert sand), 1.0 (sand+HFO) and 2.2 (both environmental samples). These results show that highly bioavailable Fe(III) in HFO reduces TCE degradation through competitive Fe(III) reduction, yielding lower bioenhanced dissolution. However, Fe(III) reduction in the environmental samples did not reduce TCE degradation, and the dissolution factor was even larger than that of inert sand. It is speculated that physical factors, e.g. micro-niches in the environmental samples, protect microorganisms from toxic concentrations of TCE.
Standard Deviation for Small Samples
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
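The abstract does not quote the representations themselves. A closely related identity, s² = Σ over i<j of (x_i - x_j)² divided by n(n-1), matches the claim that small-sample variance is mental arithmetic for integer data (for n=3 there are only three squared differences to sum and divide by 6):

```python
from itertools import combinations
from statistics import variance

def variance_from_pairs(xs):
    """Sample variance via the pairwise-difference identity:
    s^2 = sum_{i<j} (x_i - x_j)^2 / (n (n - 1)).
    For n = 3 or 4 with integer data this is easy to do by hand."""
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

# e.g. [2, 4, 9]: squared differences 4 + 49 + 25 = 78, and 78 / 6 = 13
v3 = variance_from_pairs([2, 4, 9])
```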
Visschers, V H M; Backhans, A; Collineau, L; Iten, D; Loesken, S; Postma, M; Belloc, C; Dewulf, J; Emanuelson, U; Beilage, E Grosse; Siegrist, M; Sjölund, M; Stärk, K D C
2015-04-01
We conducted a survey among convenience samples of pig farmers (N=281) in Belgium, France, Germany, Sweden and Switzerland. We identified some significant differences among the five investigated countries (independent variable) regarding farmers' antimicrobial usage compared to their own country and worries related to pig farming (dependent variables), but most of the differences were rather small. In general, farmers perceived their own antimicrobial usage to be lower than that of their peers in the same country and lower than or similar to that of farmers from other countries. This may be a consequence of our convenience sample, resulting in self-selection of highly motivated farmers. Farmers were significantly more worried about financial/legal issues than about antimicrobial resistance. They believed that a reduction in revenues for slaughter pigs treated with a large amount of antimicrobials would have the most impact on reducing antimicrobial usage in their country. Further, farmers who were more worried about antimicrobial resistance and who estimated their own antimicrobial usage to be lower than that of their fellow countrymen perceived more impact from policy measures on the reduction of antimicrobials. Our results indicated that the same policy measures can be applied to reduce antimicrobial usage in pig farming in all five countries. Moreover, it seems worthwhile to increase pig farmers' awareness of the threat of antimicrobial resistance and its relation to antimicrobial usage; not only because pig farmers appeared little worried about antimicrobial usage but also because it affected farmers' perception of policy measures to reduce antimicrobial usage. Our samples were not representative of the national pig farmer populations. Further research is therefore needed to examine to what extent our findings can be generalised to these populations and to farmers in other countries.
Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M
2017-08-01
Resource-constrained countries have difficulty conducting large EQ-5D valuation studies, which limits their ability to conduct cost-utility analyses using a value set specific to their own population. When estimates of similar but related parameters are available, shrinkage estimators reduce uncertainty and yield estimators with smaller mean square error (MSE). We hypothesized that health utilities based on shrinkage estimators can reduce MSE and mean absolute error (MAE) when compared to country-specific health utilities. We conducted a simulation study (1,000 iterations) based on the observed means and standard deviations (or standard errors) of the EQ-5D-3L valuation studies from 14 countries. In each iteration, the simulated data were fitted with the model based on the country-specific functional form of the scoring algorithm to create country-specific health utilities ("naïve" estimators). Shrinkage estimators were calculated based on empirical Bayes estimation methods. The performance of the shrinkage estimators was compared with that of the naïve estimators over a range of sample sizes based on MSE, MAE, mean bias, standard errors and the width of confidence intervals. The MSE of the shrinkage estimators was smaller than the MSE of the naïve estimators on average, as theoretically predicted. Importantly, the MAE of the shrinkage estimators was also smaller than the MAE of the naïve estimators on average. In addition, the reduction in MSE with the use of shrinkage estimators did not substantially increase bias. The degree of reduction in uncertainty by shrinkage estimators is most apparent in valuation studies with small sample size. Health utilities derived from shrinkage estimation allow valuation studies with small sample size to "borrow strength" from other valuation studies to reduce uncertainty.
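The "borrowing strength" mechanism can be sketched with the textbook normal-normal empirical Bayes shrinkage: each estimate is pulled toward the overall mean with weight tau²/(tau² + se²). The `shrink` helper and the utility values below are hypothetical illustrations, not the authors' estimator or data.

```python
def shrink(estimates, ses):
    """Normal-normal empirical Bayes shrinkage sketch: pull each study
    estimate toward the grand mean with weight tau^2 / (tau^2 + se_i^2),
    where tau^2 (between-study variance) is a method-of-moments estimate."""
    k = len(estimates)
    grand = sum(estimates) / k
    obs_var = sum((e - grand) ** 2 for e in estimates) / (k - 1)
    mean_se2 = sum(s ** 2 for s in ses) / k
    tau2 = max(obs_var - mean_se2, 0.0)  # floored at zero
    out = []
    for e, s in zip(estimates, ses):
        w = tau2 / (tau2 + s ** 2) if tau2 + s ** 2 > 0 else 0.0
        out.append(w * e + (1.0 - w) * grand)
    return out

# hypothetical country-level utility estimates and standard errors
country_utils = [0.80, 0.70, 0.60, 0.90]
ses = [0.05, 0.10, 0.05, 0.10]
shrunk = shrink(country_utils, ses)
```

Noisier estimates (larger se) are pulled harder toward the pooled mean, which is how shrinkage trades a little bias for a large reduction in MSE in small studies.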
Xu Xinnan; Yao Suying; Xu Jiangtao; Nie Kaiming
2012-01-01
A switched-capacitor amplifier with an accurate gain of two that is insensitive to component mismatch is proposed. This structure is based on associating two sets of two capacitors in cross series during the amplification phase. This circuit permits the common-mode voltage of the sampled signal to reach full swing. Using the charge-complement technique, the proposed amplifier can effectively reduce the impact of parasitic capacitors on the gain accuracy. Simulation results show that, as the sample-signal common-mode voltage changes, the difference between the minimum and maximum gain error is less than 0.03%. When the capacitor mismatch is increased from 0 to 0.2%, the gain error deteriorates by 0.00015%. In all simulations, the gain of the amplifier is 69 dB.
Heyer, Nicholas J; Derzon, James H; Winges, Linda; Shaw, Colleen; Mass, Diana; Snyder, Susan R; Epner, Paul; Nichols, James H; Gayken, Julie A; Ernst, Dennis; Liebow, Edward B
2012-09-01
To complete a systematic review of emergency department (ED) practices for reducing hemolysis in blood samples sent to the clinical laboratory for testing. A total of 16 studies met the review inclusion criteria (12 published and 4 unpublished). All 11 studies comparing new straight needle venipuncture with IV starts found a reduction in hemolysis rates [average risk ratio of 0.16 (95% CI=0.11-0.24)]. Four studies on the effect of venipuncture location showed reduced hemolysis rates for the antecubital site [average risk ratio of 0.45 (95% CI=0.35-0.57)]. Use of new straight needle venipuncture instead of IV starts is effective at reducing hemolysis rates in EDs, and is recommended as an evidence-based best practice. The overall strength of evidence rating is high and the effect size is substantial. Unpublished studies made an important contribution to the body of evidence. When IV starts must be used, observed rates of hemolysis may be substantially reduced by placing the IV at the antecubital site. © 2012 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were examined in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
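As a concrete illustration of the recommended approach, the sketch below fits the power model of variance to replicate calibration data and then performs a weighted calibration with weights 1/variance. The replicate data and resulting parameter values are invented for illustration and are not from the paper.

```python
import math

# Hypothetical replicate calibration data (concentration -> replicate
# signals); the values are invented and variance grows with signal.
replicates = {
    1.0:   [1.02, 0.97, 1.05],
    5.0:   [5.3, 4.6, 5.1],
    25.0:  [26.5, 22.8, 24.9],
    125.0: [131.0, 118.0, 122.0],
}

def mean(v):
    return sum(v) / len(v)

def svar(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# Fit the power model of variance, var = a * signal**b, via an ordinary
# log-log regression of replicate variances on replicate means.
xs = [math.log(mean(r)) for r in replicates.values()]
ys = [math.log(svar(r)) for r in replicates.values()]
n = len(xs)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
a = math.exp((sum(ys) - b * sum(xs)) / n)

# Weighted calibration: regress mean signal on concentration with
# weights 1/variance predicted by the fitted power model.
conc = list(replicates)
sig = [mean(replicates[c]) for c in conc]
w = [1.0 / (a * s ** b) for s in sig]
sw = sum(w)
xw = sum(wi * c for wi, c in zip(w, conc)) / sw
yw = sum(wi * s for wi, s in zip(w, sig)) / sw
slope = (sum(wi * (c - xw) * (s - yw) for wi, c, s in zip(w, conc, sig))
         / sum(wi * (c - xw) ** 2 for wi, c in zip(w, conc)))
intercept = yw - slope * xw
```

The weights down-weight the noisy high-signal points, so the fit is anchored by the precise low-signal replicates, which is the practical payoff the abstract describes.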
Saviane, Chiara; Silver, R Angus
2006-06-15
Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance, and used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
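The central point, that the normal-theory formula Var(s²) = 2σ⁴/(n−1) understates the variance of the variance for skewed amplitude distributions, can be sketched with simple plug-in moment estimates; the paper's unbiased h-statistic estimators refine this plug-in, which is shown here only as an assumption-labelled illustration.

```python
import random

def var_of_variance(sample):
    # Plug-in estimate from the general moment formula
    #   Var(s^2) = mu4/n - sigma^4 * (n - 3) / (n * (n - 1)),
    # using sample central moments for mu4 and sigma^2. The paper's
    # unbiased h-statistic estimators refine this simple plug-in.
    n = len(sample)
    m = sum(sample) / n
    m2 = sum((x - m) ** 2 for x in sample) / n
    m4 = sum((x - m) ** 4 for x in sample) / n
    return m4 / n - m2 ** 2 * (n - 3) / (n * (n - 1))

def normal_theory_vov(sample):
    # 2 * sigma^4 / (n - 1): valid only under the normal assumption.
    n = len(sample)
    m = sum(sample) / n
    s2 = sum((x - m) ** 2 for x in sample) / (n - 1)
    return 2 * s2 ** 2 / (n - 1)

# Heavily right-skewed amplitudes (exponential), as at synapses with few
# release sites: the normal-theory value understates Var(s^2).
random.seed(1)
data = [random.expovariate(1.0) for _ in range(200)]
vov_general = var_of_variance(data)
vov_normal = normal_theory_vov(data)
```

For exponential data the kurtosis is 9 rather than 3, so the general formula gives a value several times larger than the normal-theory one, which is why weighting fits with the normal approximation misstates the goodness-of-fit.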
Inference of bioequivalence for log-normal distributed data with unspecified variances.
Xu, Siyan; Hua, Steven Y; Menton, Ronald; Barker, Kerry; Menon, Sandeep; D'Agostino, Ralph B
2014-07-30
Two drugs are bioequivalent if the ratio of a pharmacokinetic (PK) parameter of two products falls within equivalence margins. The distribution of PK parameters is often assumed to be log-normal, therefore bioequivalence (BE) is usually assessed on the difference of logarithmically transformed PK parameters (δ). In the presence of unspecified variances, test procedures such as two one-sided tests (TOST) use sample estimates for those variances; Bayesian models integrate them out in the posterior distribution. These methods limit our knowledge on the extent that inference about BE is affected by the variability of PK parameters. In this paper, we propose a likelihood approach that retains the unspecified variances in the model and partitions the entire likelihood function into two components: F-statistic function for variances and t-statistic function for δ. Demonstrated with published real-life data, the proposed method not only produces results that are same as TOST and comparable with Bayesian method but also helps identify ranges of variances, which could make the determination of BE more achievable. Our findings manifest the advantages of the proposed method in making inference about the extent that BE is affected by the unspecified variances, which cannot be accomplished either by TOST or Bayesian method.
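For context, a minimal sketch of the classical TOST procedure that the paper compares against, applied to log-transformed data with the conventional 80-125% margins. The data below are invented, and a normal approximation replaces the exact t critical value for brevity.

```python
import math
from statistics import NormalDist, mean, stdev

def tost_log(test, ref, alpha=0.05):
    """Two one-sided tests (TOST) for average bioequivalence on
    log-transformed PK parameters (two independent groups, pooled
    variance). Conventional 80-125% margins; a normal approximation
    to the t critical value is used for brevity."""
    x = [math.log(v) for v in test]
    y = [math.log(v) for v in ref]
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    se = math.sqrt(sp2 * (1 / nx + 1 / ny))
    delta = mean(x) - mean(y)
    lower, upper = math.log(0.8), math.log(1.25)
    z = NormalDist().inv_cdf(1 - alpha)
    # Conclude BE only if BOTH one-sided null hypotheses are rejected.
    return (delta - lower) / se > z and (delta - upper) / se < -z

# Invented AUC-like data: near-identical products vs. a doubled dose.
ref = [10.0, 9.9, 10.1, 10.2, 9.8, 10.0, 10.1, 9.9]
test_eq = [9.8, 10.2, 10.0, 10.1, 9.9, 10.3, 9.7, 10.0]
be_similar = tost_log(test_eq, ref)
be_doubled = tost_log([2 * v for v in test_eq], ref)
```

Note how the sample estimate sp2 replaces the unspecified variances here; the paper's likelihood partition instead keeps the variance explicit, which is exactly what this plug-in step hides.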
Kuo, Tsung-Rong; Wang, Di-Yan; Chiu, Yu-Chen; Yeh, Yun-Chieh; Chen, Wei-Ting; Chen, Ching-Hui; Chen, Chun-Wei; Chang, Huan-Cheng; Hu, Cho-Chun; Chen, Chia-Chun
2014-01-27
This work demonstrated a simple platform for rapid and effective surface-assisted laser desorption/ionization time-of-flight mass spectrometry (SALDI-TOF MS) measurements based on the layer structure of reduced graphene oxide (rGO) and gold nanoparticles. A multi-layer thin film was fabricated by alternate layer-by-layer depositions of rGO and gold nanoparticles (LBL rGO/AuNP). The flat and clean two-dimensional film served as the sample plate and also functioned as the matrix in SALDI-TOF MS. With a simple one-step deposition of analytes onto the LBL rGO/AuNP sample plate, MS measurements of various homogeneous samples could be performed directly. The MS signal was optimized by varying the number of rGO and gold nanoparticle layers. Small molecules including amino acids, carbohydrates and peptides were successfully analyzed by SALDI-TOF MS using the LBL rGO/AuNP sample plate. The results showed that the signal intensity, S/N ratio and reproducibility of SALDI-TOF spectra were significantly improved in comparison to the use of gold nanoparticles or α-cyano-4-hydroxy-cinnamic acid (CHCA) as the assisting matrix. Taking advantage of the unique properties of rGO and gold nanoparticles, the ready-to-use MS sample plate, which absorbs and dissipates laser energy to analytes efficiently and homogeneously, shows great commercial potential for MS applications.
Hamsawahini, Kunashegaran; Sathishkumar, Palanivel; Ahamad, Rahmalan; Yusoff, Abdull Rahim Mohd
2015-11-01
In this study, a sensitive and cost-effective electrochemically reduced graphene oxide (ErGO) film on graphite reinforced carbon (GRC) was developed for the detection of lead (Pb(II)) ions in real-life samples. A film of graphene oxide (GO) was drop-casted on GRC, and its electrochemical properties were investigated using cyclic voltammetry (CV), amperometry and square wave voltammetry (SWV). Factors influencing the detection of Pb(II) ions, such as the grade of GRC, the constant applied cathodic potential (CACP), the concentration of hydrochloric acid and the drop-casting drying time, were optimised. GO is irreversibly reduced in the range of -0.7 V to -1.6 V vs Ag/AgCl (3 M) under acidic conditions. The results showed that the reduction behaviour of GO contributed to the high sensitivity of Pb(II) ion detection, even at the nanomolar level. The ErGO-GRC showed a detection limit of 0.5 nM and a linear range of 3-15 nM in HCl (1 M). The developed electrode has the potential to be a good candidate for the determination of Pb(II) ions in different aqueous systems. The proposed method gives a good recovery rate of Pb(II) ions in real-life water samples such as tap water and river water.
Gene set analysis using variance component tests
2013-01-01
Background: Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint effects of alterations in numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. Results: We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). Conclusion: We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset. PMID:23806107
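A generic permutation version of a gene-set test illustrates why permuting sample labels preserves the gene-gene correlation under the null; the simple sum-of-squared-mean-differences statistic below is a stand-in for illustration, not the exact TEGS variance-component statistic, and the toy data are invented.

```python
import random

random.seed(0)

def gene_set_stat(expr, labels):
    # Sum of squared between-group mean differences across genes in the set.
    g1 = [i for i, l in enumerate(labels) if l == 1]
    g0 = [i for i, l in enumerate(labels) if l == 0]
    stat = 0.0
    for gene in expr:
        d = (sum(gene[i] for i in g1) / len(g1)
             - sum(gene[i] for i in g0) / len(g0))
        stat += d * d
    return stat

def permutation_pvalue(expr, labels, n_perm=500):
    # Permuting sample labels keeps the gene-gene correlation intact, so
    # the null distribution respects the dependence TEGS models explicitly.
    obs = gene_set_stat(expr, labels)
    perm = list(labels)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(perm)
        hits += gene_set_stat(expr, perm) >= obs
    return (hits + 1) / (n_perm + 1)

# Toy data: 3 correlated genes, shifted by 3 units in the exposed group.
labels = [1] * 15 + [0] * 15
shared = [random.gauss(0, 1) for _ in labels]       # induces correlation
expr = [[s + random.gauss(0, 0.3) + 3.0 * l for s, l in zip(shared, labels)]
        for _ in range(3)]
p = permutation_pvalue(expr, labels)
```

A per-gene test that ignored the shared component would overstate the effective number of independent signals; the permutation null avoids that, which is the same concern the working covariance addresses analytically.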
Clarke, Peter; Varghese, Philip; Goldstein, David [ASE-EM Department, UT Austin, 210 East 24th St, C0600, Austin, TX 78712 (United States)
2014-12-09
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
Uzunkaya, Fatih; Özden, Ahmet
2017-02-27
Fine-needle aspiration biopsy is an established method for the evaluation of thyroid nodules, but it has not been standardized worldwide yet. Adequacy of the aspirations is affected by several factors. The aim of this study is to determine the main factors affecting the adequacy and to suggest a procedural technique expected to reduce repeated procedures. A total of 393 aspiration procedures performed using either 22-gauge or 27-gauge needles were included in the study. The samplings were classified as inadequate or adequate according to the cytopathological reports, and results were compared. The rate of adequate samplings was higher in the 27-gauge group and the difference was statistically significant. Neither the size of nodules nor the number of slides used for smearing affected the adequacy. There was not a statistically significant relation between the needle size and the nodule size or the number of slides in terms of adequacy. Needle size is an important factor that affects the adequacy of samplings. The nodule size and the number of slides do not affect the adequacy. However, bloody and thicker smears are difficult for pathologists to evaluate and result in inadequacy.
Cheemalapati, Srikanth; Devadas, Balamurugan; Chen, Shen-Ming
2014-03-15
In this study, an electrochemically active film containing poly-L-methionine (PMET) and electrochemically reduced graphene oxide (ERGO) on a glassy carbon electrode (GCE) was used for pyrazinamide (PZM) detection. The electrocatalytic response of the analyte at the PMET/ERGO/GCE film was measured using both cyclic voltammetry (CV) and differential pulse voltammetry (DPV). In addition, electrochemical impedance studies revealed a smaller R(ct) value at the PMET/ERGO film-modified GCE, confirming its good conductivity and fast electron transfer rate. The prepared PMET/ERGO/GCE film exhibits an excellent DPV response towards PZM, and the reduction peak current increased linearly with PZM concentration over the linear range of 0.4 μM to 1129 μM with a sensitivity of 0.266 μA μM(-1) cm(-2). Real sample studies were carried out in human blood plasma and urine samples, which offered good recovery and demonstrated the promising practicality of the sensor for PZM detection. The proposed sensor displayed good selectivity, repeatability and sensitivity, with appreciable consistency and good reproducibility. In addition, the proposed electrochemical sensor showed good results for commercial pharmaceutical PZM samples.
Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu
2016-01-01
The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD often keeps working underground for several days; analyzing the gyro data collected aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance (the improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used respectively to characterize two sets of simulation data. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with the FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The experimental results confirm that the improved fast DAVAR not only shortens computation time, but can also analyze discontinuous time series. PMID:27941600
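The underlying quantity is the ordinary (overlapped) Allan variance; DAVAR evaluates the same quantity inside a window sliding along the series. A minimal sketch with simulated white gyro noise, for which the Allan variance falls as 1/τ:

```python
import random

def allan_variance(omega, m):
    """Overlapped Allan variance of a rate series for cluster size m
    (averaging time tau = m * tau0). DAVAR evaluates this same quantity
    inside a window sliding along the series."""
    n = len(omega)
    # Overlapping cluster averages, then second differences at lag m.
    avgs = [sum(omega[i:i + m]) / m for i in range(n - m + 1)]
    diffs = [avgs[i + m] - avgs[i] for i in range(len(avgs) - m)]
    return sum(d * d for d in diffs) / (2 * len(diffs))

# Simulated white gyro noise (angle random walk), unit variance.
random.seed(42)
white = [random.gauss(0.0, 1.0) for _ in range(4000)]
av1 = allan_variance(white, 1)     # tau = tau0
av16 = allan_variance(white, 16)   # tau = 16 * tau0
```

For white noise av1 is close to the sample variance and av16 is roughly sixteen times smaller; the slope of the Allan variance versus τ on a log-log plot is what identifies the noise type in gyro characterization.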
Lu Wang
2016-12-01
Full Text Available The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD often keeps working underground for several days; analyzing the gyro data collected aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance (the improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used respectively to characterize two sets of simulation data. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with the FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The experimental results confirm that the improved fast DAVAR not only shortens computation time, but can also analyze discontinuous time series.
Gogate, Vibhav; Dechter, Rina
2012-01-01
The paper introduces AND/OR importance sampling for probabilistic graphical models. In contrast to importance sampling, AND/OR importance sampling caches samples in the AND/OR space and then extracts a new sample mean from the stored samples. We prove that AND/OR importance sampling may have lower variance than importance sampling; thereby providing a theoretical justification for preferring it over importance sampling. Our empirical evaluation demonstrates that AND/OR importance sampling is ...
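The variance-reduction effect of importance sampling itself (outside the AND/OR setting) can be seen in a small rare-event example: estimating P(X > 3) for X ~ N(0,1) from a proposal centered on the rare region yields per-sample weights with far lower variance than plain Monte Carlo indicators. All values below are for this toy target, not from the paper.

```python
import math
import random

random.seed(0)

def phi(x, mu=0.0, sigma=1.0):
    # Normal density; used to form the importance weight (target/proposal).
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

N = 20000
# Plain Monte Carlo for p = P(X > 3), X ~ N(0,1): almost every draw is 0.
plain = [1.0 if random.gauss(0, 1) > 3 else 0.0 for _ in range(N)]

# Importance sampling: draw from N(3,1), centered on the rare region,
# and reweight each draw by the density ratio.
weighted = []
for _ in range(N):
    y = random.gauss(3, 1)
    weighted.append((y > 3) * phi(y) / phi(y, 3, 1))

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)
```

Both estimators are unbiased for p ≈ 0.00135, but the importance-sampling weights are bounded by exp(-4.5) ≈ 0.011, so their variance is orders of magnitude below that of the 0/1 indicators; caching and reusing samples, as AND/OR importance sampling does, pushes the same idea further.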
Heo, Jun-Haeng; Boes, D. C.; Salas, J. D.
2001-02-01
Parameter estimation in a regional flood frequency setting, based on a Weibull model, is revisited. A two parameter Weibull distribution at each site, with common shape parameter over sites that is rationalized by a flood index assumption, and with independence in space and time, is assumed. The estimation techniques of method of moments and method of probability weighted moments are studied by proposing a family of estimators for each technique and deriving the asymptotic variance of each estimator. Then a single estimator and its asymptotic variance for each technique, suggested by trying to minimize the asymptotic variance over the family of estimators, is obtained. These asymptotic variances are compared to the Cramer-Rao Lower Bound, which is known to be the asymptotic variance of the maximum likelihood estimator. A companion paper considers the application of this model and these estimation techniques to a real data set. It includes a simulation study designed to indicate the sample size required for compatibility of the asymptotic results to fixed sample sizes.
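A minimal sketch of one method-of-moments estimator of the kind studied here, for a single site: solve for the Weibull shape from the sample coefficient of variation (which is monotone decreasing in the shape), then recover the scale from the mean. This is a generic textbook construction under invented data, not the specific estimator families whose asymptotic variances the paper derives.

```python
import math
import random

def weibull_mom(sample):
    """Method-of-moments fit of a two-parameter Weibull: solve for the
    shape k from the sample coefficient of variation (CV is monotone
    decreasing in k), then recover the scale from the mean."""
    n = len(sample)
    m = sum(sample) / n
    s2 = sum((x - m) ** 2 for x in sample) / (n - 1)
    cv = math.sqrt(s2) / m

    def cv_of(k):
        g1 = math.gamma(1 + 1 / k)
        g2 = math.gamma(1 + 2 / k)
        return math.sqrt(g2 - g1 * g1) / g1

    lo, hi = 0.1, 50.0
    for _ in range(80):            # bisection on the monotone CV relation
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cv_of(mid) > cv else (lo, mid)
    k = 0.5 * (lo + hi)
    return k, m / math.gamma(1 + 1 / k)

# Check on synthetic data: Weibull(shape=2, scale=10) via the inverse CDF.
random.seed(0)
data = [10.0 * (-math.log(random.random())) ** 0.5 for _ in range(5000)]
shape, scale = weibull_mom(data)
```

Comparing the sampling variance of such an estimator against the Cramér-Rao bound, as the paper does asymptotically, indicates how much efficiency is given up relative to maximum likelihood.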
Genetic factors explain half of all variance in serum eosinophil cationic protein.
Elmose, C; Sverrild, A; van der Sluis, S; Kyvik, K O; Backer, V; Thomsen, S F
2014-12-01
Eosinophil cationic protein (ECP) is one of four basic proteins of the secretory granules of eosinophils. It has a variety of functions associated with inflammatory responses. Little is known about the causes for variation in serum ECP levels. To identify factors associated with variation in serum ECP and to determine the relative proportion of the variation in ECP due to genetic and non-genetic factors, in an adult twin sample. A sample of 575 twins, selected through a proband with self-reported asthma, had serum ECP, lung function, airway responsiveness to methacholine, exhaled nitric oxide, and skin test reactivity measured. Linear regression analysis and variance component models were used to study factors associated with variation in ECP and the relative genetic influence on ECP levels. Sex (regression coefficient = -0.107), BMI, and airway responsiveness were significantly associated with serum ECP. In a variance component model, genetic factors accounted for 57% (CI: 42-72%) of the variance in ECP levels, whereas the remainder (43%) was ascribable to non-shared environmental factors. The genetic correlation between ECP and airway responsiveness to methacholine was statistically non-significant (r = -0.11, P = 0.50). Around half of all variance in serum ECP is explained by genetic factors. Serum ECP is influenced by sex, BMI, and airway responsiveness. Serum ECP and airway responsiveness seem not to share genetic variance. © 2014 John Wiley & Sons Ltd.
Least squares with non-normal data: estimating experimental variance functions.
Tellinghuisen, Joel
2008-02-01
Contrary to popular belief, the method of least squares (LS) does not require that the data have normally distributed (Gaussian) error for its validity. One practically important application of LS fitting that does not involve normal data is the estimation of data variance functions (VFE) from replicate statistics. If the raw data are normal, sampling estimates s² of the variance σ² are χ²-distributed. For small degrees of freedom, the χ² distribution is strongly asymmetrical -- exponential in the case of three replicates, for example. Monte Carlo computations for linear variance functions demonstrate that with proper weighting, the LS variance-function parameters remain unbiased, minimum-variance estimates of the true quantities. However, the parameters are strongly non-normal -- almost exponential for some parameters estimated from s² values derived from three replicates, for example. Similar LS estimates of standard deviation functions from estimated s values have a predictable and correctable bias stemming from the bias inherent in s as an estimator of σ. Because s² and s have uncertainties proportional to their magnitudes, the VFE and SDFE fits require weighting as s⁻⁴ and s⁻², respectively. However, these weights must be evaluated on the calculated functions rather than directly from the sampling estimates. The computation is thus iterative but usually converges in a few cycles, with remaining 'weighting' bias sufficiently small as to be of no practical consequence.
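The iterative weighting scheme described above can be sketched for a linear variance function, with weights s⁻⁴ evaluated on the fitted function rather than on the raw sampling estimates. The replicate variance estimates below are invented for illustration.

```python
# Invented replicate variance estimates s2 at signal levels xs; the
# underlying variance function is roughly var(x) = x.
xs = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
s2 = [0.9, 2.2, 3.8, 9.5, 16.0, 34.0]

def wls_line(x, y, w):
    # Weighted least-squares fit of y = c0 + c1 * x.
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    c1 = (sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
          / sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x)))
    return ym - c1 * xm, c1

# Iterate: weights s^-4, i.e. 1/var_fit^2, are evaluated on the *fitted*
# variance function, not on the raw s2 values, as the text prescribes.
c0, c1 = 0.0, 1.0                   # starting guess
for _ in range(10):                 # usually converges in a few cycles
    w = [1.0 / (c0 + c1 * xi) ** 2 for xi in xs]
    c0, c1 = wls_line(xs, s2, w)
```

Weighting by the raw s⁻⁴ instead would let the randomly small variance estimates dominate the fit; evaluating the weights on the calculated function removes that feedback, at the cost of the short iteration shown.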
Zoubair, M.; El Bardouni, T.; El Gonnouni, L.; Boulaich, Y.; El Bakkari, B.; El Younoussi, C.
2012-01-01
Computation time is an important and problematic parameter in Monte Carlo simulations: statistical errors shrink only as computation time grows, which motivates the use of variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed; the best known are transport cutoffs, interaction forcing, bremsstrahlung splitting, and Russian roulette. The use of a phase space is also appropriate for greatly reducing the computing time. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code, which offers a rich palette of variance reduction techniques. We investigated the various cards related to the variance reduction techniques provided by MCNPX; the parameters found in this study can be used directly for efficient MCNPX calculations. Final calculations are performed in two steps that are linked by a phase space. Results show that, compared with direct simulations (with neither variance reduction nor a phase space), the adopted method improves the simulation efficiency by a factor greater than 700.
Anatomic variance of the iliopsoas tendon.
Philippon, Marc J; Devitt, Brian M; Campbell, Kevin J; Michalski, Max P; Espinoza, Chris; Wijdicks, Coen A; Laprade, Robert F
2014-04-01
The iliopsoas tendon has been implicated as a generator of hip pain and a cause of labral injury due to impingement. Arthroscopic release of the iliopsoas tendon has become a preferred treatment for internal snapping hips. Traditionally, the iliopsoas tendon has been considered the conjoint tendon of the psoas major and iliacus muscles, although anatomic variance has been reported. The iliopsoas tendon consists of 2 discrete tendons in the majority of cases, arising from both the psoas major and iliacus muscles. Descriptive laboratory study. Fifty-three nonmatched, fresh-frozen, cadaveric hemipelvis specimens (average age, 62 years; range, 47-70 years; 29 male and 24 female) were used in this study. The iliopsoas muscle was exposed via a Smith-Petersen approach. A transverse incision across the entire iliopsoas musculotendinous unit was made at the level of the hip joint. Each distinctly identifiable tendon was recorded, and the distance from the lesser trochanter was recorded. The prevalence of a single-, double-, and triple-banded iliopsoas tendon was 28.3%, 64.2%, and 7.5%, respectively. The psoas major tendon was consistently the most medial tendinous structure, and the primary iliacus tendon was found immediately lateral to the psoas major tendon within the belly of the iliacus muscle. When present, an accessory iliacus tendon was located adjacent to the primary iliacus tendon, lateral to the primary iliacus tendon. Once considered a rare anatomic variant, the finding of ≥2 distinct tendinous components to the iliacus and psoas major muscle groups is an important discovery. It is essential to be cognizant of the possibility that more than 1 tendon may exist to ensure complete release during endoscopy. Arthroscopic release of the iliopsoas tendon is a well-accepted surgical treatment for iliopsoas impingement. The most widely used site for tendon release is at the level of the anterior hip joint. The findings of this novel cadaveric anatomy study suggest that
Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization
Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W-K. Wong (Wing-Keung)
2016-01-01
This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main
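For reference, the plug-in construction being critiqued starts from the sample covariance matrix. A minimal global-minimum-variance sketch on simulated returns (asset parameters invented) shows the standard pipeline whose spectral distortion matters when the dimension grows with the sample size.

```python
import random

def solve(a, rhs):
    # Gaussian elimination with partial pivoting for a small linear system.
    n = len(rhs)
    m = [row[:] + [r] for row, r in zip(a, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# Simulated i.i.d. returns for 3 hypothetical assets with increasing
# volatility; the sample covariance is the plug-in estimate whose
# spectrum the paper corrects in high dimensions.
random.seed(3)
T, k = 500, 3
rets = [[random.gauss(0.01, 0.02 * (j + 1)) for j in range(k)] for _ in range(T)]
mu = [sum(r[j] for r in rets) / T for j in range(k)]
cov = [[sum((r[i] - mu[i]) * (r[j] - mu[j]) for r in rets) / (T - 1)
        for j in range(k)] for i in range(k)]

# Global minimum-variance portfolio: w proportional to cov^{-1} * 1.
raw = solve(cov, [1.0] * k)
total = sum(raw)
w = [v / total for v in raw]
```

With k fixed and T large this plug-in behaves well (the least volatile asset receives the largest weight); the high-dimensional regime, where k and T grow together, is where the sample spectrum deviates and spectral correction is needed.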
Analysis of Variance with Summary Statistics in Microsoft® Excel®
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
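The computation itself needs only the summary statistics. A short sketch of the one-way ANOVA F statistic from group sizes, means, and standard deviations, with a small worked check using made-up groups:

```python
def anova_from_summary(ns, means, sds):
    # One-way ANOVA F statistic from per-group n, mean, and SD alone.
    k = len(ns)
    N = sum(ns)
    grand = sum(n * m for n, m in zip(ns, means)) / N
    ss_between = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    df_b, df_w = k - 1, N - k
    f_stat = (ss_between / df_b) / (ss_within / df_w)
    return f_stat, df_b, df_w

# Worked check: three groups of 5 with means 10, 12, 14 and SD 2.
# SS_between = 40, SS_within = 48, so F = (40/2) / (48/12) = 5.0.
F, df_b, df_w = anova_from_summary([5, 5, 5], [10.0, 12.0, 14.0], [2.0, 2.0, 2.0])
```

This is exactly the arithmetic a spreadsheet template would reproduce: the raw observations are never needed, only the category counts, means, and standard deviations the abstract lists.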
Empirical data and the variance-covariance matrix for the 1969 Smithsonian Standard Earth (2)
Gaposchkin, E. M.
1972-01-01
The empirical data used in the 1969 Smithsonian Standard Earth (2) are presented. The variance-covariance matrix, or the normal equations, used for correlation analysis is considered. The format and contents of the matrix, available on magnetic tape, are described, and a sample printout is given.
Miller, Geoffrey F.; Penke, Lars
2007-01-01
Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…
2011-04-06
... pertains to the treatment of a hazardous waste generated by the Owens-Brockway Glass Container Company in... Variances for Hazardous Selenium Bearing Waste AGENCY: Environmental Protection Agency (EPA). ACTION... provides the best demonstrated treatment available for this waste by reducing the amount of selenium...
May, J. C.; Bourassa, M. A.
2010-12-01
Satellite measured winds, such as those reported by the SeaWinds scatterometer onboard the QuikSCAT satellite, can be validated with in situ data. The in situ data used for comparison with SeaWinds should be collocated in both time and space; however, due to the sparseness of data and time sampling intervals of the in situ data, ideally collocated observations are rare. Therefore, in situ data within a certain time and space range to the satellite overpass are used. This approach results in a total variance from three primary sources: variance in SeaWinds, variance in the comparison data, and variance associated with the temporal and spatial difference. The purpose of this study is to determine the amount of variance due to the temporal and spatial difference between two observations, in particular the equivalent neutral wind speed reported by SeaWinds and in situ data. Initially, this natural variability is examined in an idealized scenario where only in situ data is considered: the one-minute observations collected through the Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative from 2005 through 2009. The satellite is assumed to pass over the ship on the hour every hour. Shifts in time are used to examine the error associated with a mismatch in time. Taylor’s hypothesis can be used to translate a temporal shift to a spatial shift. The results show that the variance associated with the temporal difference increases as the mismatch in time increases. The temporal variance can also be separated into wind speed groups, which shows that there is a larger amount of variance associated with higher wind speeds. Confirmation of the idealized case method and results is done by using collocated SeaWinds and SAMOS observations. The comparison uses the closest collocation in both time and space to the satellite overpass. The total variance associated with a time shift from 0 to 60 minutes is estimated as the root mean square sum of the temporal
Stroes-Gascoyne, S.; Tait, J.C.; Porth, R.J.; McConnell, J.L.; Duclos, A.M. [Whiteshell Labs., Manitoba (Canada)
1994-12-31
The separate effects of alpha- and gamma-radiolysis on UO{sub 2} dissolution can be studied with unirradiated UO{sub 2}, whereas studies with used nuclear fuel necessarily always include both alpha- and gamma-radiolysis effects. This paper attempts to separate these effects by comparing the leaching behaviour in saline solution of a number of UO{sub 2} samples (each with a particular radiation characteristic or chemical property inherent to used fuel) with the leaching behaviour of used fuel. Data from leaching experiments with low- and high-burnup CANDU (CANada Deuterium Uranium) fuels are also compared. The results indicate that the presence of an alpha field at 100{degrees}C under reducing conditions does not increase UO{sub 2} dissolution but suggest that the combined effects of the beta and gamma fields in used CANDU fuel may enhance UO{sub 2} dissolution.
Rhinow, Daniel; Turchanin, Andrey; Gölzhäuser, Armin; Kühlbrandt, Werner; 10.1063/1.3645010
2011-01-01
For single particle electron cryo-microscopy (cryoEM), contrast loss due to beam-induced charging and specimen movement is a serious problem, as the thin films of vitreous ice spanning the holes of a holey carbon film are particularly susceptible to beam-induced movement. We demonstrate that the problem is at least partially solved by carbon nanotechnology. Doping ice-embedded samples with single-walled carbon nanotubes (SWNT) in aqueous suspension or adding nanocrystalline graphene supports, obtained by thermal conversion of cross-linked self-assembled biphenyl precursors, significantly reduces contrast loss in high-resolution cryoEM due to the excellent electrical and mechanical properties of SWNTs and graphene.
40 CFR 190.11 - Variances for unusual operations.
2010-07-01
... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.
RR-Interval variance of electrocardiogram for atrial fibrillation detection
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, the RR interval for short. This irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using such variances. Using clinical data from patients with atrial fibrillation attacks, we show that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique based on the variance of RR intervals, we obtain good atrial fibrillation detection performance.
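A minimal sketch of the idea: compute the variance of RR intervals over short windows and flag windows whose variance exceeds a threshold. The window length, threshold, and synthetic RR data below are illustrative assumptions, not the paper's clinical values.

```python
import numpy as np

def rr_variance_af_flags(rr_intervals, window=20, threshold=0.01):
    """Flag each non-overlapping window whose RR-interval variance (s^2)
    exceeds `threshold`; window and threshold are illustrative choices."""
    rr = np.asarray(rr_intervals, dtype=float)
    return [bool(np.var(rr[i:i + window]) > threshold)
            for i in range(0, len(rr) - window + 1, window)]

rng = np.random.default_rng(1)
normal_rr = rng.normal(0.80, 0.02, 40)   # regular rhythm: tight spread around 0.8 s
af_rr = rng.uniform(0.30, 1.30, 40)      # irregular rhythm: widely spread intervals
flags = rr_variance_af_flags(np.concatenate([normal_rr, af_rr]))
print(flags)
```

The regular-rhythm windows fall well under the threshold while the irregular windows exceed it, mirroring the higher RR variance the authors observe during atrial fibrillation.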
Tokalıoğlu, Şerife; Yavuz, Emre; Aslantaş, Ayşe; Şahan, Halil; Taşkın, Ferhat; Patat, Şaban
2015-01-01
In this study, a fast and simple vortex-assisted solid phase extraction method was developed for the separation/preconcentration of basic fuchsin in various water samples. The determination of basic fuchsin was carried out at a wavelength of 554 nm by spectrophotometry. Reduced graphene oxide, used as the solid phase extractor, was synthesized and characterized by X-ray diffraction, scanning electron microscopy and the Brunauer-Emmett-Teller (BET) method. The optimum conditions are as follows: pH 2, contact times for adsorption and elution of 30 s and 90 s, respectively, 10 mg adsorbent, and an eluent (ethanol) volume of 1 mL. The effects of some interfering ions and dyes were investigated. The method was linear in the concentration range of 50-250 μg L(-1). The adsorption capacity was 34.1 mg g(-1). The preconcentration factor, limit of detection and precision (RSD, %) of the method were found to be 400, 0.07 μg L(-1) and 1.2%, respectively. The described method was validated by analyzing basic fuchsin-spiked certified reference material (SPS-WW1 Batch 114-Wastewater) and spiked real water samples.
Andre, F.; Cariou, R.; Antignac, J.P.; Le Bizec, B. [Ecole Nationale Veterinaire de Nantes (FR). Laboratoire d' Etudes des Residus et Contaminants dans les Aliments (LABERCA); Debrauwer, L.; Zalko, D. [Institut National de Recherches Agronomiques (INRA), 31-Toulouse (France). UMR 1089 Xenobiotiques
2004-09-15
The impact of brominated flame retardants on the environment and their potential risk for animal and human health is a present time concern for the scientific community. Numerous studies related to the detection of tetrabromobisphenol A (TBBP-A) and polybrominated diphenylethers (PBDEs) have been developed over the last few years; they were mainly based on GC-ECD, GC-NCI-MS or GC-EI-HRMS, and recently GC-EI-MS/MS. The sample treatment is usually derived from the analytical methods used for dioxins, but recently some authors proposed the utilisation of solid phase extraction (SPE) cartridges. In this study, a new analytical strategy is presented for the multi-residue analysis of TBBP-A and PBDEs from a unique reduced size sample. The main objective of this analytical development is to be applied for background exposure assessment of French population groups to brominated flame retardants, for which, to our knowledge, no data exist. A second objective is to provide an efficient analytical tool to study the transfer of these contaminants through the environment to living organisms, including degradation reactions and metabolic biotransformations.
Bayes linear adjustment for variance matrices
Wilkinson, Darren J
2008-01-01
We examine the problem of covariance belief revision using a geometric approach. We exhibit an inner-product space where covariance matrices live naturally --- a space of random real symmetric matrices. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability specifications.
Mean-Variance-Skewness-Entropy Measures: A Multi-Objective Approach for Portfolio Selection
Yeliz Mert Kantar
2011-01-01
In this study, we present a multi-objective approach based on a mean-variance-skewness-entropy portfolio selection model (MVSEM). In this approach, an entropy measure is added to the mean-variance-skewness model (MVSM) to generate a well-diversified portfolio. Through a variety of empirical data sets, we evaluate the performance of the MVSEM in terms of several portfolio performance measures. The obtained results show that the MVSEM performs well out-of-sample relative to traditional portfolio selection models.
Petersen, Lars; Esbensen, Kim Harry
2005-01-01
the necessary SUO’s (dependent on the practical situation) is the only prerequisite needed for eliminating all sampling bias and simultaneously minimizing sampling variance, and this is in addition a sure guarantee for making the final analytical results trustworthy. No reliable conclusions can be made unless...
Forecasting the variance and return of Mexican financial series with symmetric GARCH models
Fátima Irina VILLALBA PADILLA
2013-03-01
The present research shows the application of generalized autoregressive conditional heteroskedasticity (GARCH) models to forecasting the variance and return of the IPC, the EMBI, the weighted-average government funding rate, the fix exchange rate and the Mexican oil reference, as important tools for investment decisions. In-sample and out-of-sample forecasts are performed. The period covered runs from 2005 to 2011.
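The variance forecasts of a symmetric GARCH(1,1) follow a simple recursion that converges to the unconditional variance; a sketch with illustrative parameters (assumptions for the example, not estimates fitted to the Mexican series):

```python
def garch11_forecast(sigma2_t, eps_t, omega, alpha, beta, horizon):
    """Multi-step-ahead conditional-variance forecasts from a symmetric GARCH(1,1):
    sigma^2_{t+1} = omega + alpha*eps_t^2 + beta*sigma^2_t, then iterate with
    E[eps^2_{t+k}] replaced by the variance forecast sigma^2_{t+k}."""
    forecasts = []
    s2 = omega + alpha * eps_t ** 2 + beta * sigma2_t   # one-step-ahead forecast
    for _ in range(horizon):
        forecasts.append(s2)
        s2 = omega + (alpha + beta) * s2                # k-step-ahead recursion
    return forecasts

# illustrative parameters; alpha + beta < 1 ensures a finite unconditional variance
f = garch11_forecast(sigma2_t=1.5, eps_t=0.9, omega=0.1,
                     alpha=0.08, beta=0.90, horizon=50)
long_run = 0.1 / (1 - 0.08 - 0.90)   # unconditional variance = omega/(1-alpha-beta)
print(f[0], f[-1], long_run)
```

Starting below the long-run level, the forecasts rise geometrically toward it at rate alpha + beta, which is the mean-reversion behavior symmetric GARCH models impose on volatility forecasts.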
Hall, T; Hall, Tim; Jewson, Stephen
2005-01-01
We describe results from the second stage of a project to build a statistical model for hurricane tracks. In the first stage we modelled the unconditional mean track. We now attempt to model the unconditional variance of fluctuations around the mean. The variance models we describe use a semi-parametric nearest neighbours approach in which the optimal averaging length-scale is estimated using a jack-knife out-of-sample fitting procedure. We test three different models. These models consider the variance structure of the deviations from the unconditional mean track to be isotropic, anisotropic but uncorrelated, and anisotropic and correlated, respectively. The results show that, of these models, the anisotropic correlated model gives the best predictions of the distribution of future positions of hurricanes.
Rui Zhang
2017-03-01
mRNA variance has been proposed to play key roles in normal development, population fitness, adaptability, and disease. While variance in gene expression levels may be beneficial for certain cellular processes, for example in a cell's ability to respond to external stimuli, variance may be detrimental for the development of some organs. In the bilaterally symmetric vertebrate limb buds, the amount of Sonic Hedgehog (SHH) protein present at specific stages of development is essential to ensure proper patterning of this structure. To our surprise, we found that SHH protein variance is present during the first 10 hr of limb development; the variance is virtually eliminated thereafter. By examining mutant animals, we determined that the ability of the limb bud apical ectodermal ridge (AER) to respond to SHH protein was required for reducing SHH variance during limb formation. One consequence of the failure to eliminate variance in SHH protein was the presence of polydactyly and an increase in digit length. These data suggest a potential novel mechanism in which alterations in SHH variance during evolution may have driven changes in limb patterning and digit length.
Gender Variance and Educational Psychology: Implications for Practice
Yavuz, Carrie
2016-01-01
The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…
Error Variance of Rasch Measurement with Logistic Ability Distributions.
Dimitrov, Dimiter M.
Exact formulas for classical error variance are provided for Rasch measurement with logistic distributions. An approximation formula with the normal ability distribution is also provided. With the proposed formulas, the additive contribution of individual items to the population error variance can be determined without knowledge of the other test…
On the Endogeneity of the Mean-Variance Efficient Frontier.
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
Delivery Time Variance Reduction in the Military Supply Chain
2010-03-01
DELIVERY TIME VARIANCE REDUCTION IN THE MILITARY SUPPLY CHAIN. Thesis, presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering, March 2010. AFIT-OR-MS-ENS-10-02. Approved for public release; distribution unlimited. Preston
The asymptotic variance of departures in critically loaded queues
A. Al Hanbali; M.R.H. Mandjes (Michel); Y. Nazarathy (Yoni); W. Whitt
2010-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue, where D(t) denotes the number of departures up to time t. We focus on the case in which the system load rho equals 1, and prove that the asymptotic variance rate satisfies lim_t Var D(t)/t = lambda
76 FR 78698 - Proposed Revocation of Permanent Variances
2011-12-19
... Occupational Safety and Health Administration Proposed Revocation of Permanent Variances AGENCY: Occupational... short and plain statement detailing (1) how the proposed revocation would affect the requesting party..., subpart L. The following table provides information about the variances proposed for revocation by...
Adjustment for heterogeneous variances due to days in milk and ...
ARC-IRENE
Adjustment of heterogeneous variances and a calving year effect in test-day ... Regression Test-Day Model (FRTDM), which assumes equal variances of the response variable at different ... random residual error ... records were included in the selection, while in the unadjusted data set, lactations consisting of six and more.
Characterizing the evolution of genetic variance using genetic covariance tensors.
Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W
2009-06-12
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits, using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.
Productive Failure in Learning the Concept of Variance
Kapur, Manu
2012-01-01
In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…
Time variance effects and measurement error indications for MLS measurements
Liu, Jiyuan
1999-01-01
Mathematical characteristics of maximum-length sequences are discussed, and the effects of measuring slightly time-varying systems with the MLS method are examined with computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.
Confidence Intervals of Variance Functions in Generalized Linear Model
Yong Zhou; Dao-ji Li
2006-01-01
In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs), when designs are fixed points and random variables, respectively. Bias-corrected confidence bands are proposed for the (conditional) variance by local linear smoothers, and nonparametric techniques are developed in deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and we show that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation is performed in which the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.
Variance estimation for complex indicators of poverty and inequality using linearization techniques
Guillaume Osier
2009-12-01
The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic under the actual sample design to the variance of that statistic under a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tillé, 2000) is a well-established method for obtaining variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all nonlinear statistics that can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators, since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of influence functions (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
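A minimal sketch of Taylor linearization for the simplest nonlinear statistic, a ratio R = Σy/Σx under simple random sampling (synthetic data; the "Laeken" indicators require the more general influence-function approach the paper describes):

```python
import numpy as np

def ratio_linearization_variance(y, x):
    """Approximate sampling variance of R = sum(y)/sum(x) under simple random
    sampling: linearize as z_i = (y_i - R*x_i)/mean(x), then estimate the
    variance of mean(z).  Ignores the finite-population correction."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    R = y.sum() / x.sum()
    z = (y - R * x) / x.mean()          # linearized variable
    return np.var(z, ddof=1) / n

rng = np.random.default_rng(2)
x = rng.uniform(1.0, 3.0, 500)
y = 2.0 * x + rng.normal(0.0, 0.1, 500)  # ratio close to 2 with small noise
v = ratio_linearization_variance(y, x)
print(v)
```

The same recipe, compute a linearized variable and take the variance of its mean, is what the influence-function generalization extends to indicators too complex to write as smooth functions of totals.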
Mendenhall, Marcus H
2011-01-01
In Monte-Carlo codes such as Geant4, it is often important to adjust reaction cross sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analogous Monte-Carlo. We present the theory and sample code for a Geant4 process which allows the cross section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross section change. This allows us to increase the cross section of nuclear reactions by factors exceeding 10^{4} (in appropriate cases), without distorting the results of energy deposition calculations or coincidence rates. The procedure is also valid for bias factors less than unity, which is useful, for example, in problems which involve computation of particle penetration deep into a target, such as occurs in atmospheric showers or in shielding.
Mendenhall, Marcus H., E-mail: marcus.h.mendenhall@vanderbilt.edu [Vanderbilt University, Department of Electrical Engineering, P.O. Box 351824B, Nashville, TN 37235 (United States); Weller, Robert A., E-mail: robert.a.weller@vanderbilt.edu [Vanderbilt University, Department of Electrical Engineering, P.O. Box 351824B, Nashville, TN 37235 (United States)
2012-03-01
In Monte Carlo particle transport codes, it is often important to adjust reaction cross-sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analog Monte Carlo. We present the theory and sample code for a Geant4 process which allows the cross-section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross-section change. This makes it possible to increase the cross-section of nuclear reactions by factors exceeding 10{sup 4} (in appropriate cases), without distorting the results of energy deposition calculations or coincidence rates. The procedure is also valid for bias factors less than unity, which is useful in problems that involve the computation of particle penetration deep into a target (e.g. atmospheric showers or shielding studies).
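The flavor of the technique can be sketched in one dimension: to estimate a deep-penetration probability, scale the cross-section by a bias factor b < 1 (stretching the sampled free paths) and carry the analog-to-biased density ratio as a track weight, so the estimator stays unbiased. This is a toy exponential-free-path model, not the Geant4 process the authors describe.

```python
import math
import random

def penetration_prob(sigma, depth, n, bias=1.0, rng=None):
    """Importance-sampled estimate of P(free path > depth) for an exponential
    free-path distribution with macroscopic cross-section `sigma`.
    bias < 1 stretches the sampled free paths (non-analog MC); each sample's
    weight is the ratio of the analog to the biased path-length density."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(bias * sigma)                      # biased free path
        w = (1.0 / bias) * math.exp(-sigma * (1.0 - bias) * x)  # density ratio
        if x > depth:
            total += w
    return total / n

exact = math.exp(-2.0 * 10.0)   # P(X > 10) with sigma = 2: about 2.1e-9
est = penetration_prob(sigma=2.0, depth=10.0, n=100_000, bias=0.1)
print(est, exact)
```

With b = 0.1 the biased run scores thousands of weighted deep paths, whereas an analog run (b = 1) of the same size would almost never score this roughly 2 x 10^-9 event; this is the deep-penetration use of bias factors below unity that the abstract mentions.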
Utility functions predict variance and skewness risk preferences in monkeys.
Genest, Wilfried; Stauffer, William R; Schultz, Wolfram
2016-07-26
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.
Brad S. Coates
2003-09-01
PCR-based O. nubilalis population and pedigree analysis indicated female specificity of a (GAAAAT)n microsatellite and male specificity of a CAYCARCGTCACTAA repeat unit marker. These loci were respectively named Ostrinia nubilalis W-chromosome 1 (ONW1) and O. nubilalis Z-chromosome 1 (ONZ1). Intact repeats of three, four, or five GAAAAT units are present among ONW1 alleles, and biallelic variation exists at the ONZ1 locus. Screening of 493 males at the ONZ1 locus and 448 heterogametic females at the ONZ1 and ONW1 loci from eleven North American sample sites was used to construct genotypic data. Analysis of molecular variance (AMOVA) and F-statistics indicated no female haplotype or male ONZ1 allele frequency differentiation between voltinism ecotypes. Four subpopulations from northern latitudes, Minnesota and South Dakota, showed the absence of a single female haplotype, a significant deviation of ONZ1 data from Hardy-Weinberg expectation, and low-level geographic divergence from other subpopulations. Low ONZ1 and ONW1 allele diversity could be attributed to large repeat unit sizes, low repeat number, reduced effective population size (Ne) of sex chromosomes, or recent O. nubilalis introduction and population expansion, but likely not to inbreeding. These sequences have been deposited in GenBank under accessions AF442958 and AY102618 to AY102620.
Cosmic variance and the measurement of the local Hubble parameter.
Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel
2013-06-14
There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations, and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer, this tension (strengthened by the recent Planck results) is partially relieved and the concordance of the Standard Model of cosmology increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than that taken into account already. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary.
On Eliminating The Scrambling Variance In Scrambled Response Models
Zawar Hussain
2012-06-01
To circumvent response bias in sensitive surveys, randomized response models are used. Building on these, we propose an improved response model utilizing both additive and multiplicative scrambling. The proposed model provides greater flexibility in terms of fixing the constant K depending upon the guessed distribution of the sensitive variable and the nature of the population. The proposed model yields an unbiased estimator and is anticipated to be more protective of the privacy of the respondents. The relative efficiency of the proposed estimator is evaluated against the Hussain and Shabbir (2007) RRM. Furthermore, the proposed model itself is improved by taking two responses from each respondent and suggesting a weighted estimator, yielding an unbiased estimator with the minimum possible sampling variance. The suggested weighted estimator is unconditionally more efficient than all of the estimators suggested until now. Future research may focus on the privacy protection provided by scrambling models; more scrambling models may be identified and improved by taking two responses from each respondent in such a way that the scrambling effect is balanced out.
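As a generic illustration of combined additive-plus-multiplicative scrambling (the distributions, constants, and estimator below are assumptions for the sketch, not the model proposed in the paper), an interviewer who only sees Z = T*Y + S can still estimate the sensitive mean without ever observing Y:

```python
import random
import statistics

MU_T, MU_S = 2.0, 5.0   # known means of the scrambling variables (assumed here)

def scrambled_responses(y, rng):
    """Each respondent privately draws a multiplicative scrambler T ~ U(1, 3)
    and an additive scrambler S ~ U(0, 10), then reports Z = T*Y + S."""
    return [rng.uniform(1.0, 3.0) * yi + rng.uniform(0.0, 10.0) for yi in y]

def estimate_mean(z):
    # E[Z] = MU_T * mean(Y) + MU_S, so this estimator is unbiased for mean(Y)
    return (statistics.mean(z) - MU_S) / MU_T

rng = random.Random(5)
y_true = [rng.gauss(20.0, 4.0) for _ in range(50_000)]   # hidden sensitive values
z = scrambled_responses(y_true, rng)
est = estimate_mean(z)
print(est, statistics.mean(y_true))
```

The scrambling protects privacy at the cost of extra sampling variance; the paper's weighted two-response estimator targets exactly that cost by querying each respondent twice.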
Analysis of variance in neuroreceptor ligand imaging studies.
Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P
2011-01-01
Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far for cases with more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also revisit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is more sensitive than the conventional f-test while still controlling for type 1 error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in exploratory PET studies.
An efficiency comparison of control chart for monitoring process variance: Non-normality case
Sangkawanit, R.
2005-11-01
The purposes of this research are to investigate the relation between the upper control limit and the parameters of the weighted moving variance linear weight control chart (WMVL), the weighted moving variance exponential weight control chart (WMVE), the successive difference cumulative sum control chart (Cusum-SD) and the current sample mean cumulative sum control chart (Cusum-UM), and to compare the efficiencies of these control charts for monitoring increases in process variance, with exponentially distributed data with unit variance and Student's t distributed data with variance 1.071429 (30 degrees of freedom) as the in-control process. In-control average run lengths (ARL0) of 200, 400 and 800 are considered. Out-of-control average run lengths (ARL1), obtained via 10,000 simulation runs, are used as the criterion. The main results are as follows: the upper control limit of the WMVL has a negative relation with the moving span, while the upper control limit of the WMVE has a negative relation with the moving span and a positive relation with the exponential weight. Both the upper control limits of the Cusum-SD and Cusum-UM have a negative relation with the reference value, a relation that resembles an exponential curve. The results of the efficiency comparisons in the case of exponentially distributed data for ARL0 of 200, 400 and 800 turned out to be quite similar. When the standard deviation changes by less than 50%, the Cusum-SD and Cusum-UM control charts have lower ARL1 than the WMVL and WMVE control charts. However, when the standard deviation changes by more than 50%, the WMVL and WMVE control charts have lower ARL1 than the Cusum-SD and Cusum-UM control charts. These results differ from the normally distributed data case studied by Sparks in 2003. In the case of Student's t distributed data for ARL0 of 200 and 400, when the process variance shifts by a small amount (less than 50%), the Cusum-UM control chart has the lowest ARL1, but when the process variance shifts by a large amount
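The successive-difference idea behind a Cusum-SD-style chart can be sketched as follows; the reference value, target variance, and data are illustrative assumptions, whereas the study tuned its chart constants to fixed ARL0 values.

```python
import numpy as np

def cusum_sd(x, target_var, k=0.5):
    """One-sided CUSUM on squared successive differences, an illustrative analog
    of a successive-difference variance chart: (x_i - x_{i-1})^2 / 2 is an
    unbiased local estimate of the process variance, and k is the reference
    value that absorbs in-control noise."""
    s, stats = 0.0, []
    for i in range(1, len(x)):
        v_hat = (x[i] - x[i - 1]) ** 2 / 2.0
        s = max(0.0, s + (v_hat - target_var) - k)
        stats.append(s)
    return np.array(stats)

rng = np.random.default_rng(3)
in_control = rng.normal(0.0, 1.0, 200)   # unit-variance process
shifted = rng.normal(0.0, 2.0, 200)      # variance quadruples
stats = cusum_sd(np.concatenate([in_control, shifted]), target_var=1.0)
print(stats[198], stats[-1])             # statistic before vs after the shift
```

While the process is in control the statistic drifts back toward zero; after the variance increase it climbs steadily, which is what an out-of-control signal (crossing a limit chosen for the desired ARL0) detects.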
Athènes, Manuel; Terrier, Pierre
2017-05-01
Markov chain Monte Carlo methods are primarily used for sampling from a given probability distribution and estimating multi-dimensional integrals based on the information contained in the generated samples. Whenever it is possible, more accurate estimates are obtained by combining Monte Carlo integration and integration by numerical quadrature along particular coordinates. We show that this variance reduction technique, referred to as conditioning in probability theory, can be advantageously implemented in expanded ensemble simulations. These simulations aim at estimating thermodynamic expectations as a function of an external parameter that is sampled like an additional coordinate. Conditioning therein entails integrating along the external coordinate by numerical quadrature. We prove variance reduction with respect to alternative standard estimators and demonstrate the practical efficiency of the technique by estimating free energies and characterizing a structural phase transition between two solid phases.
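The conditioning idea can be sketched outside the expanded-ensemble setting: replace f(X, Y) by its quadrature average over one coordinate before Monte Carlo averaging over the other, which can only lower the variance (law of total variance). The integrand, quadrature rule, and sample sizes here are illustrative.

```python
import math
import random
import statistics

def f(x, y):
    return math.exp(-(x * x + y * y))

def naive_samples(n, rng):
    # plain Monte Carlo: average f over both coordinates at once
    return [f(rng.random(), rng.random()) for _ in range(n)]

def conditioned_samples(n, rng, quad=32):
    # conditioning: integrate out y by the midpoint rule, sample only x
    ys = [(j + 0.5) / quad for j in range(quad)]
    def g(x):
        return sum(f(x, y) for y in ys) / quad
    return [g(rng.random()) for _ in range(n)]

rng = random.Random(4)
a = naive_samples(10_000, rng)
b = conditioned_samples(10_000, rng)
print(statistics.mean(a), statistics.mean(b))          # both estimate the same integral
print(statistics.variance(a), statistics.variance(b))  # conditioned variance is smaller
```

In the expanded-ensemble application, the quadrature coordinate is the external parameter, and the same Rao-Blackwell argument guarantees the variance reduction the authors prove.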
Liu, Linjie; Gou, Yuqiang; Gao, Xia; Zhang, Pei; Chen, Wenxia; Feng, Shilan; Hu, Fangdi; Li, Yingdong
2014-09-01
An electrochemically reduced graphene oxide (ERGO) modified glassy carbon electrode (GCE) was used as a new voltammetric sensor for the determination of ferulic acid (FA). The morphology and microstructure of the modified electrodes were characterized by scanning electron microscopy (SEM) and Raman spectroscopy, and the electrochemically effective surface areas of the modified electrodes were calculated by chronocoulometry. The sensing properties of the electrochemical sensor were investigated by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). ERGO was electrodeposited on the surface of the GCE by a potentiostatic method. The proposed electrode exhibited electrocatalytic activity toward the redox of FA because of the excellent electrochemical properties of ERGO. The electron transfer number (n), electrode reaction rate constant (ks) and electron-transfer coefficient (α) were calculated as 1.12, 1.24 s⁻¹ and 0.40, respectively. Under the optimized conditions, the oxidation peak current was proportional to FA concentration from 8.49 × 10⁻⁸ mol L⁻¹ to 3.89 × 10⁻⁵ mol L⁻¹, with a detection limit of 2.06 × 10⁻⁸ mol L⁻¹. The fabricated sensor also displayed acceptable reproducibility, long-term stability, and high selectivity with negligible interference from common interfering species. The voltammetric sensor was successfully applied to detect FA in A. sinensis and biological samples, with recovery values in the range of 99.91%-101.91%.
Brandfass, Christoph; Karlovsky, Petr
2008-11-01
Fusarium graminearum Schwabe (Gibberella zeae Schwein. Petch.) and F. culmorum W.G. Smith are major mycotoxin producers in small-grain cereals afflicted with Fusarium head blight (FHB). Real-time PCR (qPCR) is the method of choice for species-specific, quantitative estimation of fungal biomass in plant tissue. We demonstrated that increasing the amount of plant material used for DNA extraction to 0.5-1.0 g considerably reduced sampling error and improved the reproducibility of DNA yield. The costs of DNA extraction at different scales and with different methods (commercial kits versus cetyltrimethylammonium bromide-based protocol) and qPCR systems (doubly labeled hybridization probes versus SYBR Green) were compared. A cost-effective protocol for the quantification of F. graminearum and F. culmorum DNA in wheat grain and maize stalk debris based on DNA extraction from 0.5-1.0 g material and real-time PCR with SYBR Green fluorescence detection was developed.
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.
Filtered kriging for spatial data with heterogeneous measurement error variances.
Christensen, William F
2011-09-01
When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ≈ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges, and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
Variance of Fluctuating Radar Echoes from Thermal Noise and Randomly Distributed Scatterers
Marco Gabella
2014-02-01
In several cases (e.g., thermal noise, weather echoes), the incoming signal to a radar receiver can be assumed to be Rayleigh distributed. When estimating the mean power from the inherently fluctuating Rayleigh signals, it is necessary to average either the echo power intensities or the echo logarithmic levels. Until now, it has been accepted that averaging the echo intensities provides smaller variance values for the same number of independent samples. This has been known for decades as an implicit consequence of two works presented in the open literature. The present note derives analytical expressions for the variance of the two typical estimators of mean echo power, based on echo intensities and on echo logarithmic levels. The derived expressions explicitly show that the variance associated with an average of the echo intensities is lower than that associated with an average of logarithmic levels. Consequently, it is better to average echo intensities rather than logarithms. With the availability of digital IF receivers, which facilitate the averaging of echo power, this result has practical value. As a practical example, the variance obtained from two sets of noise samples is compared with that predicted by the analytical expression derived in this note (Section 3); the measurements and theory show good agreement.
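The note's conclusion, that averaging intensities yields lower variance than averaging logarithmic levels, can be checked with a short simulation (a minimal sketch; the sample counts and the exp(gamma) bias correction applied to the log-average are illustrative assumptions, not taken from the note):

```python
import numpy as np

rng = np.random.default_rng(1)
true_power = 1.0
k, trials = 25, 4000      # 25 independent echo samples per power estimate

# Rayleigh-amplitude echoes have exponentially distributed power intensities.
intens = rng.exponential(true_power, size=(trials, k))

# Estimator A: average the intensities directly (linear averaging).
est_lin = intens.mean(axis=1)

# Estimator B: average the logarithmic (dB) levels, convert back, and correct
# the multiplicative bias exp(-gamma) of the geometric mean of an exponential.
db_mean = (10 * np.log10(intens)).mean(axis=1)
est_log = 10 ** (db_mean / 10) * np.exp(np.euler_gamma)

# Over many trials, the spread of est_lin is smaller than that of est_log,
# matching the analytical comparison derived in the note.
```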
CAIXA: a catalogue of AGN in the XMM-Newton archive III. Excess Variance Analysis
Ponti, Gabriele; Bianchi, Stefano; Guainazzi, Matteo; Matt, Giorgio; Uttley, Phil; Fonseca Bonilla, Nuria
2011-01-01
We report on the results of the first XMM systematic "excess variance" study of all radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray variability on time scales of less than a day. We compute the excess variance for all AGN on different time-scales (10, 20, 40 and 80 ks) and in different energy bands (0.3-0.7, 0.7-2 and 2-10 keV). We observe a highly significant and tight (~0.7 dex) correlation between excess variance and MBH. The subsample of reverberation-mapped AGN shows an even smaller scatter (~0.45 dex), comparable to the one induced by the MBH uncertainties. This implies that X-ray variability can be used as an accurate tool to measure MBH, and this method is more accurate than the ones based on single-epoch optical spectra. The excess variance vs. accretion rate dependence is weaker than expected based on the PSD break frequency scaling, suggesting that both...
Effect of variance ratio on ANOVA robustness: Might 1.5 be the limit?
Blanca, María J; Alarcón, Rafael; Arnau, Jaume; Bono, Roser; Bendayan, Rebecca
2017-06-22
Inconsistencies in the research findings on F-test robustness to variance heterogeneity could be related to the lack of a standard criterion to assess robustness or to the different measures used to quantify heterogeneity. In the present paper we use Monte Carlo simulation to systematically examine the Type I error rate of F-test under heterogeneity. One-way, balanced, and unbalanced designs with monotonic patterns of variance were considered. Variance ratio (VR) was used as a measure of heterogeneity (1.5, 1.6, 1.7, 1.8, 2, 3, 5, and 9), the coefficient of sample size variation as a measure of inequality between group sizes (0.16, 0.33, and 0.50), and the correlation between variance and group size as an indicator of the pairing between them (1, .50, 0, -.50, and -1). Overall, the results suggest that in terms of Type I error a VR above 1.5 may be established as a rule of thumb for considering a potential threat to F-test robustness under heterogeneity with unequal sample sizes.
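A minimal version of the kind of Monte Carlo robustness check described above can be sketched as follows (a hypothetical two-group design; for two groups the one-way ANOVA F-test is equivalent to the pooled-variance t-test, and 2.0106 is the standard two-sided 5% Student's t critical value for 48 degrees of freedom):

```python
import numpy as np

rng = np.random.default_rng(2)
reps = 4000
T_CRIT = 2.0106  # two-sided 5% critical value of Student's t, df = 48

def type1_rate(sd_a, sd_b, n_a=10, n_b=40):
    """Null rejection rate of the pooled-variance test, which for two
    groups is equivalent to the one-way ANOVA F-test (F = t**2)."""
    a = rng.normal(0.0, sd_a, size=(reps, n_a))
    b = rng.normal(0.0, sd_b, size=(reps, n_b))
    sp2 = ((n_a - 1) * a.var(axis=1, ddof=1)
           + (n_b - 1) * b.var(axis=1, ddof=1)) / (n_a + n_b - 2)
    t = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(sp2 * (1 / n_a + 1 / n_b))
    return float(np.mean(np.abs(t) > T_CRIT))

rate_equal = type1_rate(1.0, 1.0)  # VR = 1: close to the nominal 0.05
rate_neg = type1_rate(3.0, 1.0)    # VR = 9 in the smaller group: liberal test
```

Pairing the larger variance with the smaller group (negative pairing) makes the pooled variance underestimate the standard error, which is exactly the liberal behaviour the paper's simulations quantify.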
Variance decomposition of apolipoproteins and lipids in Danish twins
Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A
2007-01-01
OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent, twin studies have been used in bivariate or multivariate analysis to elucidate genetic factors common to two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect...
Variance computations for functionals of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model
Leunglung Chan; Eckhard Platen
2015-01-01
This paper studies volatility derivatives such as variance swaps, volatility swaps and options on variance in the modified constant elasticity of variance model using the benchmark approach. Analytical pricing formulas for variance swaps are presented. In addition, numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.
Campbell, Ruairidh D; Nouvellet, Pierre; Newman, Chris; Macdonald, David W; Rosell, Frank
2012-09-01
Ecologists are increasingly aware of the importance of environmental variability in natural systems. Climate change is affecting both the mean and the variability in weather and, in particular, the effect of changes in variability is poorly understood. Organisms are subject to selection imposed by both the mean and the range of environmental variation experienced by their ancestors. Changes in the variability in a critical environmental factor may therefore have consequences for vital rates and population dynamics. Here, we examine ≥90-year trends in different components of climate (precipitation mean and coefficient of variation (CV); temperature mean, seasonal amplitude and residual variance) and consider the effects of these components on survival and recruitment in a population of Eurasian beavers (n = 242) over 13 recent years. Within climatic data, no trends in precipitation were detected, but trends in all components of temperature were observed, with mean and residual variance increasing and seasonal amplitude decreasing over time. A higher survival rate was linked (in order of influence based on Akaike weights) to lower precipitation CV (kits, juveniles and dominant adults), lower residual variance of temperature (dominant adults) and lower mean precipitation (kits and juveniles). No significant effects were found on the survival of nondominant adults, although the sample size for this category was low. Greater recruitment was linked (in order of influence) to higher seasonal amplitude of temperature, lower mean precipitation, lower residual variance in temperature and higher precipitation CV. Both climate means and variance, thus proved significant to population dynamics; although, overall, components describing variance were more influential than those describing mean values. That environmental variation proves significant to a generalist, wide-ranging species, at the slow end of the slow-fast continuum of life histories, has broad implications for
Burkness, Eric C; Hutchison, W D
2009-10-01
Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
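Wald's sequential probability ratio test for presence/absence data, using the boundary and error-rate parameters reported above (lower boundary p0 = 0.05, upper boundary p1 = 0.15, alpha = beta = 0.1), can be sketched as follows (the `sprt` helper is illustrative, not the authors' software):

```python
import math

# Parameters from the abstract: lower boundary p0, upper boundary p1,
# sequential error rates alpha and beta.
p0, p1, alpha, beta = 0.05, 0.15, 0.10, 0.10
LOG_A = math.log((1 - beta) / alpha)  # "treat" stopping bound
LOG_B = math.log(beta / (1 - alpha))  # "no treat" stopping bound

def sprt(plants):
    """Run Wald's SPRT over presence/absence observations (1 = infested).
    Returns 'treat', 'no_treat', or 'continue' if the data run out first."""
    llr = 0.0  # cumulative log-likelihood ratio
    for infested in plants:
        llr += math.log(p1 / p0) if infested else math.log((1 - p1) / (1 - p0))
        if llr >= LOG_A:
            return "treat"
        if llr <= LOG_B:
            return "no_treat"
    return "continue"
```

With these parameters, twenty consecutive clean plants are enough to cross the lower boundary, while a short run of infested plants quickly triggers the treatment decision; this early stopping is what keeps the average sample number low.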
Minimum Variance Estimation of Yield Parameters of Rubber Tree with ...
2013-03-01
Mar 1, 2013 ... STAMP, an OxMetrics modular software system for time series analysis, was used to estimate the yield ... underlying regression techniques ... Kalman filter minimum variance estimation of rubber tree yield parameters.
Detecting Pulsars with Interstellar Scintillation in Variance Images
Dai, S; Bell, M E; Coles, W A; Hobbs, G; Ekers, R D; Lenc, E
2016-01-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show th...
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
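The abstract does not name the specific techniques; as a toy illustration of one standard variance reduction device in this setting, the sketch below applies antithetic variates to a scalar stand-in for a corrector-problem output (the function and all names are illustrative; the actual corrector problems are random PDEs, not scalar maps):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

def effective_coeff(u):
    """Scalar stand-in for the effective coefficient computed from one
    random configuration; its exact mean over U ~ U(0, 1) is log(2)."""
    return 1.0 / (1.0 + u)

u = rng.random(n)

# Plain Monte Carlo average over independent configurations.
plain = effective_coeff(u)

# Antithetic variates: pair each configuration with its reflection 1 - u;
# for a monotone integrand the pair is negatively correlated, so the
# paired average has smaller variance at the same cost per pair.
anti = 0.5 * (effective_coeff(u) + effective_coeff(1.0 - u))

var_plain = plain.var(ddof=1)
var_anti = anti.var(ddof=1)
```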
40 CFR 141.4 - Variances and exemptions.
2010-07-01
... Section 141.4, Protection of Environment, Environmental Protection Agency (continued), Water Programs (continued), National Primary Drinking Water Regulations, General. § 141.4 Variances and exemptions. (a) ... maintenance of the distribution system. ...
Fundamental Indexes As Proxies For Mean-Variance Efficient Portfolios
Kathleen Hodnett; Gearé Botes; Khumbudzo Daswa; Kimberly Davids; Emmanuel Che Fongwa; Candice Fortuin
2014-01-01
Mean-variance efficiency was first explained by Markowitz (1952), who derived an efficient frontier composed of portfolios with the highest expected returns for a given level of risk borne by the investor...
Estimating the generalized concordance correlation coefficient through variance components.
Carrasco, Josep L; Jover, Lluís
2003-12-01
The intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC) are two of the most popular measures of agreement for variables measured on a continuous scale. Here, we demonstrate that ICC and CCC are the same measure of agreement estimated in two ways: by the variance components procedure and by the moment method. We propose estimating the CCC using variance components of a mixed effects model, instead of the common method of moments. With the variance components approach, the CCC can easily be extended to more than two observers, and adjusted using confounding covariates, by incorporating them in the mixed model. A simulation study is carried out to compare the variance components approach with the moment method. The importance of adjusting by confounding covariates is illustrated with a case example.
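The method-of-moments estimate of the CCC that the authors compare against can be sketched directly from Lin's formula on simulated two-rater data (all names and numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
truth = rng.normal(10.0, 2.0, n)          # latent subject values
rater1 = truth + rng.normal(0.0, 0.5, n)  # rater 1: unbiased, noisy
rater2 = truth + rng.normal(0.2, 0.5, n)  # rater 2: slight systematic bias

# Lin's CCC by the method of moments:
#   CCC = 2 * s_xy / (s_x^2 + s_y^2 + (xbar - ybar)^2)
s_xy = np.cov(rater1, rater2)[0, 1]       # unbiased sample covariance
ccc = (2 * s_xy
       / (rater1.var(ddof=1) + rater2.var(ddof=1)
          + (rater1.mean() - rater2.mean()) ** 2))
```

Unlike the Pearson correlation, the denominator penalizes both scale differences and the mean shift between raters, so CCC never exceeds the correlation; the variance-components formulation in the paper generalizes this to more than two observers and to covariate adjustment.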
Lauer, H. V. Jr.; Ming, D. W.; Sutter, B.; Mahaffy, P. R.
2010-01-01
The Mars Science Laboratory (MSL) is scheduled for launch in 2011. The science objectives for MSL are to assess the past or present biological potential, to characterize the geology, and to investigate other planetary processes that influence habitability at the landing site. The Sample Analysis at Mars (SAM) is a key instrument on the MSL payload that will explore the potential habitability at the landing site [1]. In addition to searching for organic compounds, SAM will have the capability to characterize evolved gases as a function of increasing temperature and provide information on the mineralogy of volatile-bearing phases such as carbonates, sulfates, phyllosilicates, and Fe-oxyhydroxides. The operating conditions in SAM ovens will be maintained at 30 mb pressure with a He carrier gas flowing at 1 sccm. We have previously characterized the thermal and evolved gas behaviors of volatile-bearing species under reduced pressure conditions that simulated operating conditions of the Thermal and Evolved Gas Analyzer (TEGA) that was onboard the 2007 Mars Phoenix Scout Mission [e.g., 2-8]. TEGA ovens operated at 12 mb pressure with a N2 carrier gas flowing at 0.04 sccm. Another key difference between SAM and TEGA is that TEGA was able to perform differential scanning calorimetry whereas SAM only has a pyrolysis oven. The operating conditions for TEGA and SAM have several key parameter differences including operating pressure (12 vs 30 mb), carrier gas (N2 vs. He), and carrier gas flow rate (0.04 vs 1 sccm). The objectives of this study are to characterize the thermal and evolved gas analysis of calcite under SAM operating conditions and then compare it to calcite thermal and evolved gas analysis under TEGA operating conditions.
Dimension free and infinite variance tail estimates on Poisson space
Breton, J. C.; Houdré, C.; Privault, N.
2004-01-01
Concentration inequalities are obtained on Poisson space, for random functionals with finite or infinite variance. In particular, dimension free tail estimates and exponential integrability results are given for the Euclidean norm of vectors of independent functionals. In the finite variance case these results are applied to infinitely divisible random variables such as quadratic Wiener functionals, including Lévy's stochastic area and the square norm of Brownian paths. In the infinite vari...
The asymptotic variance of departures in critically loaded queues
Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.
2011-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim t→∞ Var D(t)/t = λ(1 − 2/π)(c_a² +
Wavelet Variance Analysis of EEG Based on Window Function
ZHENG Yuan-zhuang; YOU Rong-yi
2014-01-01
A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and that the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.
Global Variance Risk Premium and Forex Return Predictability
Aloosh, Arash
2014-01-01
In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
Multiperiod mean-variance efficient portfolios with endogenous liabilities
Markus LEIPPOLD; Trojani, Fabio; Vanini, Paolo
2011-01-01
We study the optimal policies and mean-variance frontiers (MVF) of a multiperiod mean-variance optimization of assets and liabilities (AL). This makes the analysis more challenging than for a setting based on purely exogenous liabilities, in which the optimization is only performed on the assets while keeping liabilities fixed. We show that, under general conditions for the joint AL dynamics, the optimal policies and the MVF can be decomposed into an orthogonal set of basis returns using exte...
Testing for Causality in Variance Using Multivariate GARCH Models
Christian M. Hafner; Herwartz, Helmut
2008-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causality in var...
Testing for causality in variance using multivariate GARCH models
Hafner, Christian; Herwartz, H.
2004-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual-based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causa...
Travers, L M; Simmons, L W; Garcia-Gonzalez, F
2016-05-01
Polyandry is widespread despite its costs. The sexually selected sperm hypotheses ('sexy' and 'good' sperm) posit that sperm competition plays a role in the evolution of polyandry. Two poorly studied assumptions of these hypotheses are the presence of additive genetic variance in polyandry and in sperm competitiveness. Using a quantitative genetic breeding design in a natural population of Drosophila melanogaster, we first established the potential for polyandry to respond to selection. We then investigated whether polyandry can evolve through sexually selected sperm processes. We measured lifetime polyandry and offensive sperm competitiveness (P2) while controlling for sampling variance due to male × male × female interactions. We also measured additive genetic variance in egg-to-adult viability and controlled for its effect on P2 estimates. Female lifetime polyandry showed significant and substantial additive genetic variance and evolvability. In contrast, we found little genetic variance or evolvability in P2 or egg-to-adult viability. Additive genetic variance in polyandry highlights its potential to respond to selection. However, the low levels of genetic variance in sperm competitiveness suggest that the evolution of polyandry may not be driven by sexy sperm or good sperm processes. © 2016 European Society For Evolutionary Biology.
The phenome-wide distribution of genetic variance.
Blows, Mark W; Allen, Scott L; Collet, Julie M; Chenoweth, Stephen F; McGuigan, Katrina
2015-07-01
A general observation emerging from estimates of additive genetic variance in sets of functionally or developmentally related traits is that much of the genetic variance is restricted to few trait combinations as a consequence of genetic covariance among traits. While this biased distribution of genetic variance among functionally related traits is now well documented, how it translates to the broader phenome and therefore any trait combination under selection in a given environment is unknown. We show that 8,750 gene expression traits measured in adult male Drosophila serrata exhibit widespread genetic covariance among random sets of five traits, implying that pleiotropy is common. Ultimately, to understand the phenome-wide distribution of genetic variance, very large additive genetic variance-covariance matrices (G) are required to be estimated. We draw upon recent advances in matrix theory for completing high-dimensional matrices to estimate the 8,750-trait G and show that large numbers of gene expression traits genetically covary as a consequence of a single genetic factor. Using gene ontology term enrichment analysis, we show that the major axis of genetic variance among expression traits successfully identified genetic covariance among genes involved in multiple modes of transcriptional regulation. Our approach provides a practical empirical framework for the genetic analysis of high-dimensional phenome-wide trait sets and for the investigation of the extent of high-dimensional genetic constraint.
Analytic variance estimates of Swank and Fano factors.
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank
2014-07-01
Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
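The two metrics themselves are simple functions of the moments of the detector output distribution; their sample estimates can be sketched on simulated outputs (a Poisson stand-in for the pulse-height distribution, not the authors' detector model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated detector outputs: optical quanta per absorbed x ray.
lam = 50.0
counts = rng.poisson(lam, size=100_000).astype(float)

# Fano factor: variance-to-mean ratio (exactly 1 for a Poisson output).
fano = counts.var(ddof=1) / counts.mean()

# Swank factor: I = m1**2 / (m0 * m2), with m_k the k-th raw moment and
# m0 = 1 for a normalized distribution; equals lam/(lam + 1) for a Poisson.
m1 = counts.mean()
m2 = np.mean(counts ** 2)
swank = m1 ** 2 / m2
```

Because both metrics accumulate from low-order raw moments, tracking the moments during a Monte Carlo run, as the paper does for the variance estimators, adds essentially no computational overhead.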
Why risk is not variance: an expository note.
Cox, Louis Anthony Tony
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
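The note's core point can be checked numerically: for a prospect paying a fixed gain with probability p (and zero otherwise, so there is no possibility of loss), a mean-variance score can rank a lower win probability above a higher one. A small sketch, where the quadratic score and the penalty weight lam are illustrative choices, not the note's exact construction:

```python
def mv_score(p, gain=1.0, lam=3.0):
    """Mean-variance score of a binary prospect: `gain` w.p. p, else 0.

    mean = p*gain, variance = gain^2 * p * (1 - p); the score penalizes
    variance with weight lam. Because p*(1 - p) increases on (0, 0.5),
    raising the win probability can be penalized more than it is rewarded.
    """
    mean = p * gain
    var = gain ** 2 * p * (1.0 - p)
    return mean - lam * var

# Both prospects offer the same gain and zero probability of loss, yet the
# mean-variance score prefers the LOWER probability of winning:
u_low, u_high = mv_score(0.1), mv_score(0.2)
```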
Analytic variance estimates of Swank and Fano factors
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov [US Food and Drug Administration, Silver Spring, Maryland 20993 (United States)
2014-07-15
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen;
2016-01-01
… relation to Bland-Altman plots. Here, we present this approach for the assessment of intra- and inter-observer variation with PET/CT, exemplified with data from two clinical studies. METHODS: In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed. … The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components. …

Daniel Bartz; Kerr Hatrick; Christian W. Hesse; Klaus-Robert Müller; Steven Lemm
2011-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on Factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, w...
A Study on the Chain Ratio-Type Estimator of Finite Population Variance
Yunusa Olufadi
2014-01-01
We suggest an estimator using two auxiliary variables for the estimation of the unknown population variance. The bias and the mean square error of the proposed estimator are obtained to the first order of approximation. In addition, the problem is extended to a two-phase sampling scheme. After theoretical comparisons, as an illustration, a numerical comparison is carried out to examine the performance of the suggested estimator against several existing estimators.
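The single-auxiliary ratio-type variance estimator conveys the basic idea; the paper's chain estimator extends it to two auxiliaries and two-phase sampling, so the form below is the classic starting point, not the proposed estimator:

```python
import numpy as np

def ratio_variance_estimator(y, x, Sx2):
    """Ratio-type estimator of the population variance of y.

    s_y^2 * (S_x^2 / s_x^2), where S_x^2 is the KNOWN population variance
    of the auxiliary variable x: if the sample happens to understate the
    spread of x, the estimate of the y-variance is scaled up accordingly.
    """
    sy2 = np.var(y, ddof=1)
    sx2 = np.var(x, ddof=1)
    return sy2 * (Sx2 / sx2)

# Toy correlated data; when the sample x-variance equals the population
# value, the estimator reduces to the ordinary sample variance of y.
y = np.array([1.0, 2.0, 3.0, 4.0])
x = np.array([2.0, 4.0, 6.0, 8.0])
est = ratio_variance_estimator(y, x, Sx2=np.var(x, ddof=1))
```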
Georgy Shevlyakov; Kiseon Kim
2005-01-01
A brief survey of former and recent results on Huber's minimax approach in robust statistics is given. The least informative distributions minimizing Fisher information for location over several distribution classes with upper-bounded variances and subranges are written down. These least informative distributions are qualitatively different from the classical Huber solution and share the following structure: (i) with relatively small variances they are short-tailed, in particular normal; (ii) with relatively large variances they are heavy-tailed, in particular Laplace; (iii) with relatively moderate variances they are a compromise between the two. These results allow one to raise the efficiency of minimax robust procedures while retaining high stability compared to the classical Huber procedure for contaminated normal populations. In application to signal detection problems, the proposed minimax detection rule has proved to be robust and close to Huber's for heavy-tailed distributions and more efficient than Huber's for short-tailed ones, both asymptotically and in finite samples.
Cox, M
2007-05-01
Small populations are dominated by unique patterns of variance, largely characterized by rapid drift of allele frequencies. Although the variance components of genetic datasets have long been recognized, most population genetic studies still treat all sampling locations equally despite differences in sampling and effective population sizes. Because excluding the effects of variance can lead to significant biases in historical reconstruction, variance components should be incorporated explicitly into population genetic analyses. The possible magnitude of variance effects in small populations is illustrated here via a case study of Y-chromosome haplogroup diversity in the Vanuatu Archipelago. Deme-based modelling is used to simulate allele frequencies through time, and conservative confidence bounds are placed on the accumulation of stochastic variance effects, including diachronic genetic drift and contemporary sampling error. When the information content of the dataset has been ascertained, demographic models with parameters falling outside the confidence bounds of the variance components can then be accepted with some statistical confidence. Here I emphasize how aspects of the demographic history of a population can be disentangled from stochastic variance effects, and I illustrate the extreme roles of genetic drift and sampling error for many small human population datasets.
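The deme-based modelling described here is, at its core, repeated binomial resampling of allele frequencies across generations; a minimal Wright-Fisher-style sketch (the parameter values are illustrative, not those of the Vanuatu case study):

```python
import numpy as np

def wright_fisher(p0, n_eff, generations, reps, seed=0):
    """Simulate allele-frequency drift in `reps` independent demes.

    Each generation, 2 * n_eff gene copies are binomially resampled from
    the current frequency; the spread of frequencies across replicates
    illustrates how quickly drift variance accumulates in small demes.
    """
    rng = np.random.default_rng(seed)
    p = np.full(reps, p0)
    for _ in range(generations):
        p = rng.binomial(2 * n_eff, p) / (2 * n_eff)
    return p

freqs = wright_fisher(p0=0.5, n_eff=50, generations=100, reps=2000)
```

Confidence bounds of the kind the abstract mentions could then be read off the empirical quantiles of `freqs`, with sampling error layered on top.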
Testing for homogeneity of variance in time series: Long memory, wavelets, and the Nile River
Whitcher, B.; Byers, S. D.; Guttorp, P.; Percival, D. B.
2002-05-01
We consider the problem of testing for homogeneity of variance in a time series with long memory structure. We demonstrate that a test whose null hypothesis is designed to be white noise can, in fact, be applied, on a scale by scale basis, to the discrete wavelet transform of long memory processes. In particular, we show that evaluating a normalized cumulative sum of squares test statistic using critical levels for the null hypothesis of white noise yields approximately the same null hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process. The point at which the test statistic, using a nondecimated version of the discrete wavelet transform, achieves its maximum value can be used to estimate the time of the unknown variance change. We apply our proposed test statistic on five time series derived from the historical record of Nile River yearly minimum water levels covering 622-1922 A.D., each series exhibiting various degrees of serial correlation including long memory. In the longest subseries, spanning 622-1284 A.D., the test confirms an inhomogeneity of variance at short time scales and identifies the change point around 720 A.D., which coincides closely with the construction of a new device around 715 A.D. for measuring the Nile River. The test also detects a change in variance for a record of only 36 years.
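The test statistic itself is easy to sketch: a normalized cumulative sum of squares is compared against the uniform line, and its maximizer estimates the change point. In the paper this is applied scale-by-scale to discrete wavelet transform coefficients; the raw-series version below is an illustrative simplification:

```python
import numpy as np

def cusum_sq_stat(x):
    """Normalized cumulative sum of squares statistic D.

    D = max_k |P_k - k/N|, where P_k is the running fraction of the total
    sum of squares. Large D (against white-noise critical values) suggests
    a variance change; the maximizing index estimates its location.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    p = np.cumsum(x ** 2) / np.sum(x ** 2)
    k = np.arange(1, n + 1) / n
    d = np.abs(p - k)
    return d.max(), int(d.argmax())

# Toy series: variance jumps from 1 to 9 halfway through.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(0.0, 3.0, 200)])
stat, change_at = cusum_sq_stat(x)
```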
Monika eFleischhauer
2013-09-01
Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined as method-specific variance, which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense, defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive of method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to …
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
CMB-S4 and the hemispherical variance anomaly
O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.
2017-09-01
Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
Fertilization success and the estimation of genetic variance in sperm competitiveness.
Garcia-Gonzalez, Francisco; Evans, Jonathan P
2011-03-01
A key question in sexual selection is whether the ability of males to fertilize eggs under sperm competition exhibits heritable genetic variation. Addressing this question poses a significant problem, however, because a male's ability to win fertilizations ultimately depends on the competitive ability of rival males. Attempts to partition genetic variance in sperm competitiveness, as estimated from measures of fertilization success, must therefore account for stochastic effects due to the random sampling of rival sperm competitors. In this contribution, we suggest a practical solution to this problem. We advocate the use of simple cross-classified breeding designs for partitioning sources of genetic variance in sperm competitiveness and fertilization success and show how these designs can be used to avoid stochastic effects due to the random sampling of rival sperm competitors. We illustrate the utility of these approaches by simulating various scenarios for estimating genetic parameters in sperm competitiveness, and show that the probability of detecting additive genetic variance in this trait is restored when stochastic effects due to the random sampling of rival sperm competitors are controlled. Our findings have important implications for the study of the evolutionary maintenance of polyandry.
Kang, Je-Won; Ryu, Soo-Kyung
2017-02-01
In this paper, a sample-adaptive prediction technique is proposed to yield efficient intra-coding performance for screen content video coding. The sample-based prediction reduces spatial redundancies between neighboring samples. To this end, the proposed technique uses a weighted linear combination of neighboring samples and applies a robust optimization technique, namely ridge estimation, to derive the weights on the decoder side. The ridge estimation uses an L2-norm regularization term, and thus the solution is more robust to high-variance samples, such as the sharp edges and high color contrasts exhibited in screen content videos. Experimental results demonstrate that the proposed technique provides an improved coding gain compared to the HEVC screen content coding reference software.
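The decoder-side weight derivation described above is ordinary ridge regression; a generic sketch (the neighbor layout and the regularization strength are assumptions, not the codec's exact configuration):

```python
import numpy as np

def ridge_weights(A, b, lam=1.0):
    """Ridge estimate of linear prediction weights: (A^T A + lam*I)^{-1} A^T b.

    Rows of A hold neighboring-sample contexts and b the samples to be
    predicted; the L2 penalty keeps the weights stable when the neighbors
    have high variance (sharp edges, strong color contrast).
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy system with exact solution w = (1, 2); a tiny lam recovers it.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
w = ridge_weights(A, b, lam=1e-9)
```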
Manel Puig-Vidal
2012-01-01
The time required to image large samples is an important limiting factor in SPM-based systems. In multiprobe setups, especially when working with biological samples, this drawback can make it impossible to conduct certain experiments. In this work, we present a feedforward controller based on bang-bang and adaptive controls. The controls are based on the difference between the maximum speeds that can be used for imaging depending on the flatness of the sample zone. Topographic images of Escherichia coli bacteria samples were acquired using the implemented controllers. Results show that going faster in the flat zones, rather than using a constant scanning speed for the whole image, speeds up the imaging of large samples by up to a factor of 4.
Variance-based fingerprint distance adjustment algorithm for indoor localization
Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang
2015-01-01
The multipath effect and the movement of people in indoor environments lead to inaccurate localization. Through tests, calculation, and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that variance decreases as the RSSI mean increases, VFDA calculates the RSSI variance from the mean value of the received RSSIs to obtain a correction weight, and then adjusts the fingerprint distances with this weight. In addition, a threshold value is applied in VFDA to improve its performance further. VFDA, with and without the threshold value, was evaluated in two typical real indoor environments deployed with several Wi-Fi access points: a quadrate lab room, and a long, narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both variants of VFDA achieve better positioning accuracy and environmental adaptability than typical positioning methods based on the k-nearest neighbor and weighted k-nearest neighbor algorithms, at similar computational cost.
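The general idea of variance-weighted fingerprint matching can be sketched as follows; the specific weighting formula and threshold behavior below are illustrative assumptions, not the paper's exact VFDA equations:

```python
import numpy as np

def variance_weighted_distance(rssi_obs, fingerprint, var_obs, threshold=None):
    """Variance-adjusted fingerprint distance (VFDA-style sketch).

    Each access point's squared RSSI difference is down-weighted by that
    AP's observed RSSI variance, so noisy (high-variance) readings
    influence the match less. The 1/(1+var) weighting is an assumption.
    """
    v = np.asarray(var_obs, dtype=float)
    w = 1.0 / (1.0 + v)
    if threshold is not None:
        w = np.where(v > threshold, 0.0, w)  # drop very noisy APs entirely
    diff2 = (np.asarray(rssi_obs, float) - np.asarray(fingerprint, float)) ** 2
    return float(np.sqrt(np.sum(w * diff2)))

# Two APs: the second reading is much noisier and contributes less.
d = variance_weighted_distance([-50.0, -70.0], [-52.0, -60.0], [1.0, 9.0])
d_thr = variance_weighted_distance([-50.0, -70.0], [-52.0, -60.0],
                                   [1.0, 9.0], threshold=5.0)
```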
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
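The scalar/vector distinction the abstract draws can be made concrete with a short sketch: compute both mean speeds from speed and direction samples, then take the along-wind and cross-wind variances about the vector-mean direction (the rotation convention below is a common one, assumed rather than taken from the paper):

```python
import numpy as np

def wind_stats(speed, direction_deg):
    """Scalar vs vector mean wind speed and along-/cross-wind variances.

    Builds u, v components from speed and direction, then rotates into
    the vector-mean wind frame; the longitudinal (along-wind) and lateral
    (cross-wind) variances are taken about that mean direction.
    """
    th = np.deg2rad(direction_deg)
    u, v = speed * np.sin(th), speed * np.cos(th)
    scalar_mean = speed.mean()
    ub, vb = u.mean(), v.mean()
    vector_mean = np.hypot(ub, vb)          # <= scalar_mean always
    phi = np.arctan2(ub, vb)                # mean wind direction
    along = u * np.sin(phi) + v * np.cos(phi)
    cross = u * np.cos(phi) - v * np.sin(phi)
    return scalar_mean, vector_mean, along.var(), cross.var()

# Meandering wind: constant speed, direction flipping between 0 and 90 deg.
speed = np.ones(4)
wd = np.array([0.0, 90.0, 0.0, 90.0])
sm, vm, var_along, var_cross = wind_stats(speed, wd)
```

In this meandering example the scalar mean speed stays 1 while the vector mean drops to about 0.71, and all the variability shows up in the cross-wind component, which is exactly the low-wind regime the paper targets.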
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System leveling data to obtain orthometric heights has been well studied. A simple formulation of the weighted least squares problem was presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models in the adjustment, which in turn leaves room for improving the stochastic models of measurement noise. The determination of the stochastic models of the observables in a combined adjustment with heterogeneous height types is therefore the main focus of this paper. First, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric, and gravimetric geoid heights. Specifically, iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each of the heterogeneous observations. Second, two different statistical models are presented to illustrate the theory: the first directly uses the errors-in-variables as a priori covariance matrices, while the second analyzes the biases of the variance components and proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in calibrating the geoid error model within the combined adjustment.
Sensitivity to Estimation Errors in Mean-variance Models
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances, and covariances, the joint effect of estimation errors in means, variances, and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The rate of change of the efficient portfolio's weights with respect to variations in the risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out the extreme cases that might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of the theoretical results.
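The kind of sensitivity being bounded here can be probed numerically: compute unconstrained mean-variance weights, perturb the expected returns slightly, and measure how far the weights move. A minimal sketch with illustrative two-asset inputs (not from the paper):

```python
import numpy as np

def tangency_weights(mu, sigma):
    """Unconstrained mean-variance weights: solve sigma*w = mu, then
    normalize so the weights sum to 1."""
    w = np.linalg.solve(sigma, mu)
    return w / w.sum()

mu = np.array([0.05, 0.06])
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w0 = tangency_weights(mu, sigma)

# Perturb expected returns by 10 basis points each way and measure the
# resulting shift in weights -- a finite-difference stand-in for the
# Lipschitz-constant analysis in the paper.
w1 = tangency_weights(mu + np.array([0.001, -0.001]), sigma)
shift = np.abs(w1 - w0).max()
```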
Expectation Values and Variance Based on Lp-Norms
George Livadiotis
2012-11-01
This analysis introduces a generalization of the basic statistical concepts of expectation values and variance for non-Euclidean metrics induced by Lp-norms. The non-Euclidean Lp means are defined by exploiting their fundamental property of minimizing the Lp deviations that compose the Lp variance. These Lp expectation values embody a generic formal scheme for the characterization of means. With the p-norm as a free parameter, both the Lp-normed expectation values and their variance are flexible enough to analyze new phenomena that cannot be described under the notions of classical statistics based on Euclidean norms. The new statistical approach provides insights into regression theory and statistical physics. Several illuminating examples are examined.
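The defining minimization is easy to sketch numerically; here a dense grid search stands in for a proper one-dimensional optimizer (an illustrative implementation, not the paper's formalism):

```python
import numpy as np

def lp_mean(x, p, grid=100_001):
    """Lp expectation value: the location mu minimizing sum_i |x_i - mu|^p.

    p = 2 recovers the arithmetic mean; p = 1 yields a median. The
    minimizer is found by brute-force search over a dense grid.
    """
    x = np.asarray(x, dtype=float)
    ms = np.linspace(x.min(), x.max(), grid)
    costs = np.sum(np.abs(x[:, None] - ms[None, :]) ** p, axis=0)
    return ms[costs.argmin()]

def lp_variance(x, p):
    """Lp variance: mean p-th power deviation about the Lp mean."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x - lp_mean(x, p)) ** p)
```

On a skewed sample such as `[0, 0, 10]`, the L2 mean sits at the arithmetic mean (10/3) while the L1 mean sits at the median (0), showing how the free parameter p moves the expectation value.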
CMB-S4 and the Hemispherical Variance Anomaly
O'Dwyer, Marcio; Knox, Lloyd; Starkman, Glenn D
2016-01-01
Cosmic Microwave Background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the northern and southern Ecliptic hemispheres. In this context, the northern hemisphere displays an anomalously low variance while the southern hemisphere appears unremarkable (consistent with expectations from the best-fitting theory, $\\Lambda$CDM). While this is a well established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground ba...
Variance inflation in high dimensional Support Vector Machines
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high-dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors is not the full input space. Hence, when applying the model to future data the model is effectively blind to the missed orthogonal subspace. This can lead to an inflated variance of hidden variables estimated in the training set, and when the model is applied to test data we may find that the hidden variables follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including …
Variance swap payoffs, risk premia and extreme market conditions
Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco
This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic constraint. Our approach, only requiring option-implied volatilities and daily returns for the underlying, provides measurement-error-free estimates of the part of the VRP related to normal market conditions, and allows constructing variables indicating agents' expectations under extreme market conditions. The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP, which turns out to be priced when considering Fama and French portfolios.
Saturation of number variance in embedded random-matrix ensembles.
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For an ensemble of two-noninteracting-particle systems, we find that, unlike the spectra of classical random matrices, the correlation functions are nonstationary. In the locally stationary region of the spectra, we study the number variance and the spacing distributions. The spacing distributions follow Poisson statistics, a key signature of uncorrelated spectra. The number variance varies linearly, as in the Poisson case, for short correlation lengths, but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are demonstrated here for the first time in random matrix theory. We conjecture that the interacting-particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
Lemme, Francesca; van Breukelen, Gerard J P; Candel, Math J J M; Berger, Martijn P F
2015-10-01
Sample size calculation for cluster randomized trials (CRTs) with a 2 x 2 factorial design is complicated due to the combination of nesting (of individuals within clusters) with crossing (of two treatments). Typically, clusters and individuals are allocated across treatment conditions in a balanced fashion, which is optimal under homogeneity of variance. However, the variance is likely to be heterogeneous if there is a treatment effect. An unbalanced allocation is then more efficient, but impractical because the optimal allocation depends on the unknown variances. Focusing on CRTs with a 2 x 2 design, this paper addresses two questions: How much efficiency is lost by having a balanced design when the outcome variance is heterogeneous? How large must the sample size be for a balanced allocation to have sufficient power under heterogeneity of variance? We consider different scenarios of heterogeneous variance. Within each scenario, we determine the relative efficiency of a balanced design as a function of the level (cluster, individual, both) and amount of heterogeneity of the variance. We then provide a simple correction of the sample size for the loss of power due to heterogeneity of variance when a balanced allocation is used. The theory is illustrated with an example of a published 2 x 2 CRT.
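The efficiency-loss question has a simple closed form in the elementary two-arm, independent-observations case, which conveys the intuition; this sketch is not the paper's cluster-level 2 x 2 formula:

```python
import math

def balanced_relative_efficiency(v1, v2):
    """Efficiency of a balanced two-arm allocation vs Neyman allocation.

    With arm variances v1, v2 and total sample 2n, the balanced design
    gives var(diff) ~ (v1 + v2)/n, while allocating proportionally to the
    standard deviations gives (sqrt(v1) + sqrt(v2))^2 / (2n). The ratio
    (optimal / balanced) lies in (0, 1]; its reciprocal is the factor by
    which the balanced sample size must be inflated to match the optimum.
    """
    var_balanced = v1 + v2                               # up to a common 1/n
    var_optimal = (math.sqrt(v1) + math.sqrt(v2)) ** 2 / 2.0
    return var_optimal / var_balanced

# Equal variances: nothing is lost. A 1:4 variance ratio costs only ~10%.
re_equal = balanced_relative_efficiency(1.0, 1.0)
re_hetero = balanced_relative_efficiency(1.0, 4.0)
```

Note how flat the loss is: even fairly strong heterogeneity leaves the balanced design close to optimal, which is why a simple sample-size correction, as in the paper, is often all that is needed.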
Smedslund Geir; Zangi Heidi Andersen; Mowinckel Petter; Hagen Kåre Birger
2013-01-01
Background: Patient-reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings: In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71)...
Variance squeezing and entanglement of the XX central spin model
El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)
2011-01-21
In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider the central atom and its surroundings to be initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are observed in the behavior of all components of the system. The atomic variance can exhibit a revival-collapse phenomenon depending on the value of the detuning parameter.
Recursive identification for multidimensional ARMA processes with increasing variances
CHEN Hanfu
2005-01-01
In time series analysis, almost all existing results are derived for the case where the driving noise {wn} in the MA part has bounded variance (or conditional variance). In contrast, this paper discusses how to identify coefficients in a multidimensional ARMA process with fixed orders, where in its MA part the conditional moment E(‖wn‖^β | F_{n-1}), β > 2, may grow at a rate of a power of log n. The well-known stochastic gradient (SG) algorithm is applied to estimating the matrix coefficients of the ARMA process, and reasonable conditions are given to guarantee that the estimate is strongly consistent.
Levine's guide to SPSS for analysis of variance
Braver, Sanford L; Page, Melanie
2003-01-01
A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).
Asymptotic variance of grey-scale surface area estimators
Svane, Anne Marie
Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.
Precise Asymptotics of Error Variance Estimator in Partially Linear Models
Shao-jun Guo; Min Chen; Feng Liu
2008-01-01
In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and the precise rate in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.
Least-squares variance component estimation: theory and GPS applications
Amiri-Simkooei, A.
2007-01-01
In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known principle of least-squares. With this method the estimation of the (co)variance components is based on a linear model of observation equations. The method is flexible since it works with a user-defined weight matrix...
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
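The DAVAR described in the abstract above is the classical Allan variance evaluated on a sliding analysis window. A minimal numerical sketch (function names and window parameters are our illustrative choices, not the authors' implementation):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m: half the mean squared difference of
    successive block averages."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

def dynamic_allan_variance(y, m, window, step):
    """DAVAR sketch: the Allan variance evaluated on a sliding window,
    revealing when the clock's stability changes."""
    return [allan_variance(y[t : t + window], m)
            for t in range(0, len(y) - window + 1, step)]
```

A sudden change in the clock noise variance, one of the anomalies listed above, then appears as a step in the DAVAR track over time.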
On Variance and Covariance for Bounded Linear Operators
Chia Shiang LIN
2001-01-01
In this paper we initiate a study of covariance and variance for two operators on a Hilbert space, proving that the c-v (covariance-variance) inequality holds, which is equivalent to the Cauchy-Schwarz inequality. As applications of the c-v inequality we prove uniformly the Bernstein-type inequalities and equalities, and show the generalized Heinz-Kato-Furuta-type inequalities and equalities, from which a generalization and sharpening of Reid's inequality is obtained. We show that every operator can be expressed as a p-hyponormal-type, and a hyponormal-type operator. Finally, some new characterizations of the Furuta inequality are given.
An assessment of fine-needle sampling techniques.
Titoria, Puneet; Siva, Thiru M; Malik, Tass
2010-07-01
Fine-needle cytology sampling, when adequate, is highly sensitive and specific for tissue-type diagnosis, with figures of 94% and 88%, respectively. This study explores the technique of sampling to reduce interoperator variability and ensure maximal tissue yield. Apple cortical tissue was sampled as a proxy for human lymph node. A total of 200 samples, by four methods, with 50 by each sampling method, were taken using blue venepuncture needles and weighed to assess tissue yield. Results were analysed using one-way analysis of variance and Tukey's HSD test. Comparable yields, by mass, were achieved by both straight lance and coring techniques (P > 0.05). Significantly greater yield was achieved with a multiplanar technique (P < 0.05). Multiplanar sampling increases mass yield of tissue in fine-needle sampling. Coring appears to have little bearing on yield.
THE VARIANCE AND TREND OF INTEREST RATE – CASE OF COMMERCIAL BANKS IN KOSOVO
Fidane Spahija
2015-09-01
Today’s debate on the interest rate is characterized by three key issues: the interest rate as a phenomenon, the interest rate as a product of factors (dependent variable), and the interest rate as a policy instrument (independent variable). In this article, the variation in interest rates, as the dependent variable, is captured in two statistical measures: the variance and the trend. The interest rates include the price of loans and deposits. The analysis of interest rates on deposits and loans is conducted for non-financial corporations and family economies. This study uses statistical analysis to highlight the variance and trends of interest rates on deposits and loans in commercial banks in Kosovo for the period 2004-2013. The interest rate is observed at various levels: is it high, medium, or low? Is it trending upward, holding constant, or declining? The trend shows whether commercial banks maintain, reduce, or increase the interest rate in response to the policy followed by the Central Bank of Kosovo. The data obtained will help to determine the impact of the interest rate on the service sector, investment, consumption, and unemployment.
Influence of monte carlo variance with fluence smoothing in VMAT treatment planning with Monaco TPS
B Sarkar
2016-01-01
Introduction: The study aimed to investigate the interplay between Monte Carlo variance (MCV) and fluence smoothing factor (FSF) in volumetric modulated arc therapy treatment planning, using a sample set of complex treatment planning cases and an X-ray Voxel Monte Carlo-based treatment planning system equipped with tools to tune fluence smoothness as well as MCV. Materials and Methods: The dosimetric (dose to tumor volume and organs at risk) and physical (treatment time, number of segments, and so on) characteristics of a set of 45 treatment plans, covering all combinations of 1%, 3%, and 5% MCV and 1, 3, and 5 FSF, were evaluated for five carcinoma esophagus cases. Result: Increasing FSF reduces the treatment time. Variation of MCV and FSF gives highest planning target volume (PTV), heart, and lung dose variations of 3.6%, 12.8%, and 4.3%, respectively. The heart dose variation was highest among all organs at risk. The highest variation of spinal cord dose was 0.6 Gy. Conclusion: Variation of MCV and FSF influences the organ at risk (OAR) doses significantly but not PTV coverage and dose homogeneity. Variation in FSF causes differences in the dosimetric and physical parameters of the treatment plans, but variation of MCV does not. MCV of 3% or less does not improve the plan quality significantly (physical or clinical) compared with MCV greater than 3%. The use of MCV between 3% and 5% gives similar results to 1% with less calculation time. The minimally detected differences in plan quality suggest that the optimum FSF can be set between 3 and 5.
Multilevel variance estimators in MLMC and application for random obstacle problems
Chernov, Alexey
2014-01-06
The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.
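As a generic illustration of the multilevel idea (a toy sketch for a geometric Brownian motion mean, not the authors' obstacle-problem estimator; the model, parameters, and function name are our assumptions), the level-l correction couples coarse and fine discretisations through the same Brownian increments, so its variance decays with level:

```python
import numpy as np

def mlmc_gbm_mean(L, N, r=0.05, sigma=0.2, T=1.0, S0=1.0, seed=0):
    """MLMC estimate of E[S_T] for geometric Brownian motion,
    Euler-discretised with 2**l steps on level l.  Returns the
    estimate and the per-level sample variances of the corrections."""
    rng = np.random.default_rng(seed)
    est, level_vars = 0.0, []
    for l in range(L + 1):
        nf = 2 ** l
        hf = T / nf
        dW = rng.normal(0.0, np.sqrt(hf), size=(N, nf))
        Sf = np.full(N, S0)
        for k in range(nf):           # fine Euler path
            Sf = Sf * (1 + r * hf + sigma * dW[:, k])
        if l == 0:
            Y = Sf                    # base level: plain MC sample
        else:
            nc, hc = nf // 2, 2 * hf
            dWc = dW[:, 0::2] + dW[:, 1::2]   # same Brownian increments
            Sc = np.full(N, S0)
            for k in range(nc):       # coupled coarse Euler path
                Sc = Sc * (1 + r * hc + sigma * dWc[:, k])
            Y = Sf - Sc               # level correction
        est += Y.mean()
        level_vars.append(Y.var())
    return est, level_vars
```

The decay of `level_vars` across levels is exactly what lets MLMC estimate the quantity at essentially the cost of the coarsest level.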
Variance and covariance of actual relationships between relatives at one locus.
Garcia-Cortes, Luis Alberto; Legarra, Andres; Chevalet, Claude; Toro, Miguel Angel
2013-01-01
The relationship between pairs of individuals is an important topic in many areas of population and quantitative genetics. It is usually measured as the proportion of the genome identical by descent shared by the pair, and it can be inferred from pedigree information. But there is a variance in actual relationships as a consequence of mendelian sampling, whose general formula has not been developed. The goal of this work is to develop this general formula for the one-locus situation. We provide simple expressions for the variances and covariances of all actual relationships in an arbitrarily complex pedigree. The proposed method relies on the use of the nine identity coefficients and the generalized relationship coefficients; the formulas have been checked by computer simulation. Finally, two examples, for a short pedigree of dogs and a long pedigree of sheep, are given.
Weber, Elke U; Shafir, Sharoni; Blais, Ann-Renee
2004-04-01
This article examines the statistical determinants of risk preference. In a meta-analysis of animal risk preference (foraging birds and insects), the coefficient of variation (CV), a measure of risk per unit of return, predicts choices far better than outcome variance, the risk measure of normative models. In a meta-analysis of human risk preference, the superiority of the CV over variance in predicting risk taking is not as strong. Two experiments show that people's risk sensitivity becomes strongly proportional to the CV when they learn about choice alternatives like other animals, by experiential sampling over time. Experience-based choices differ from choices when outcomes and probabilities are numerically described. Zipf's law as an ecological regularity and Weber's law as a psychological regularity may give rise to the CV as a measure of risk.
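The contrast between the CV and outcome variance as risk measures can be sketched in a few lines (a minimal illustration; the function name is ours):

```python
import numpy as np

def coefficient_of_variation(outcomes, probs):
    """Risk per unit of return: CV = standard deviation / mean."""
    outcomes = np.asarray(outcomes, dtype=float)
    mean = np.dot(probs, outcomes)
    sd = np.sqrt(np.dot(probs, (outcomes - mean) ** 2))
    return sd / mean

# Two gambles with identical outcome variance but different means:
low = coefficient_of_variation([0, 100], [0.5, 0.5])     # mean 50,  sd 50
high = coefficient_of_variation([100, 200], [0.5, 0.5])  # mean 150, sd 50
# Variance ranks the two gambles as equally risky; the CV ranks the
# low-mean gamble as riskier, which is the pattern the meta-analysis
# of animal choice data supports.
```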
Variance Estimation Using Refitted Cross-validation in Ultrahigh Dimensional Regression
Fan, Jianqing; Hao, Ning
2010-01-01
Variance estimation is a fundamental problem in statistical modeling. In ultrahigh dimensional linear regressions where the dimensionality is much larger than the sample size, traditional variance estimation techniques are not applicable. Recent advances on variable selection in ultrahigh dimensional linear regressions make this problem accessible. One of the major problems in ultrahigh dimensional regression is the high spurious correlation between the unobserved realized noise and some of the predictors. As a result, the realized noises are actually predicted when extra irrelevant variables are selected, leading to a serious underestimate of the noise level. In this paper, we propose a two-stage refitted procedure via a data splitting technique, called refitted cross-validation (RCV), to attenuate the influence of irrelevant variables with high spurious correlations. Our asymptotic results show that the resulting procedure performs as well as the oracle estimator, which knows in advance the mean regression functi...
Image fractal coding algorithm based on complex exponent moments and minimum variance
Yang, Feixia; Ping, Ziliang; Zhou, Suhua
2017-02-01
Image fractal coding possesses a very high compression ratio; the main problem is the low speed of coding. An algorithm based on Complex Exponent Moments (CEM) and minimum variance is proposed to speed up fractal coding compression. The definition of CEM and its FFT algorithm are presented, and the multi-distortion invariance of CEM is discussed; this invariance is well suited to the fractal property of an image. The optimal matching pair of range blocks and domain blocks in an image is determined by minimizing the variance of their CEM. Theoretical analysis and experimental results have proved that the algorithm can dramatically reduce the iteration time and speed up the image encoding and decoding process.
Aoki, Yasunori; Nordgren, Rikard; Hooker, Andrew C
2016-03-01
As the importance of pharmacometric analysis increases, more and more complex mathematical models are introduced and computational error resulting from computational instability starts to become a bottleneck in the analysis. We propose a preconditioning method for non-linear mixed effects models used in pharmacometric analyses to stabilise the computation of the variance-covariance matrix. Roughly speaking, the method reparameterises the model with a linear combination of the original model parameters so that the Hessian matrix of the likelihood of the reparameterised model becomes close to an identity matrix. This approach will reduce the influence of computational error, for example rounding error, to the final computational result. We present numerical experiments demonstrating that the stabilisation of the computation using the proposed method can recover failed variance-covariance matrix computations, and reveal non-identifiability of the model parameters.
Variance reduction technique in a beta radiation beam using an extrapolation chamber.
Polo, Ivón Oramas; Souza Santos, William; de Lara Antonio, Patrícia; Caldas, Linda V E
2017-10-01
This paper aims to show how the variance reduction technique "Geometry splitting/Russian roulette" improves the statistical error and reduces uncertainties in the determination of the absorbed dose rate in tissue using an extrapolation chamber for beta radiation. The results show that the use of this technique can increase the number of events in the chamber cavity, bringing the simulation results into closer agreement with the physical problem. There was good agreement among the experimental measurements, the certificate of manufacture, and the simulated absorbed dose rate values and uncertainties. The absorbed dose rate variation coefficient using the "Geometry splitting/Russian roulette" variance reduction technique was 2.85%.
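The two halves of the technique trade particle number against statistical weight while keeping the estimator unbiased. A schematic sketch of the weight bookkeeping (illustrative only, not the transport code used in the paper):

```python
import random

def split(particle, n):
    """Geometry splitting: on entering an important region, replace a
    particle of weight w by n copies of weight w/n.  The total weight
    is unchanged, so the tally stays unbiased."""
    w = particle["w"] / n
    return [dict(particle, w=w) for _ in range(n)]

def russian_roulette(particle, p, rng):
    """Russian roulette: on leaving an important region, kill the
    particle with probability 1 - p; a survivor's weight is divided
    by p, again preserving the expected weight."""
    if rng.random() < p:
        return dict(particle, w=particle["w"] / p)
    return None
```

Splitting raises the number of events scored in the region of interest (here, the chamber cavity), while roulette stops tracking unimportant histories cheaply.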
Sabadini, Edvaldo; Silva, Marcelo Alves da [Universidade Estadual de Campinas (UNICAMP), SP (Brazil); Ziglio, Claudio Marcos; Carvalho, Carlos Henrique Monteiro de; Rocha, Nelson de Oliveira [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas (CENPES)
2008-07-01
In this work the efficiency of five commercial additives that produce drag reduction in petroleum was determined and compared. The studies were carried out in a rheometer using samples of petroleum from the Bacia de Campos diluted in 50% toluene. For this purpose the rheometer acts as a 'torquemeter', in which the magnitude of the drag reduction promoted by the additive is directly proportional to the difference in torque required to maintain the sample at a specific flow rate. The results showed an excellent capability of the additives to promote drag reduction (up to 20%), and only a small difference in efficiency among the additives was detectable.
Pearcy, Benjamin T D; McEvoy, Peter M; Roberts, Lynne D
2017-02-01
This study extends knowledge about the relationship of Internet Gaming Disorder (IGD) to other established mental disorders by exploring comorbidities with anxiety, depression, Attention Deficit Hyperactivity Disorder (ADHD), and obsessive compulsive disorder (OCD), and assessing whether IGD accounts for unique variance in distress and disability. An online survey was completed by a convenience sample that engages in Internet gaming (N = 404). Participants meeting criteria for IGD based on the Personal Internet Gaming Disorder Evaluation-9 (PIE-9) reported higher comorbidity with depression, OCD, ADHD, and anxiety compared with those who did not meet the IGD criteria. IGD explained a small proportion of unique variance in distress (1%) and disability (3%). IGD accounted for a larger proportion of unique variance in disability than anxiety and ADHD, and a similar proportion to depression. Replications with clinical samples using longitudinal designs and structured diagnostic interviews are required.
Unobserved heterogeneity and risk in wage variance: Does more schooling reduce earnings risk?
J. Mazza; H. van Ophem; J. Hartog
2013-01-01
We apply a recently proposed method to disentangle unobserved heterogeneity from risk in returns to education to data for the USA, the UK, and Germany. We find that in residual wage variation, uncertainty by far dominates unobserved heterogeneity. The relation between uncertainty and level of education...
Petersen, K.; Leah, R.; Knudsen, S.
2002-01-01
Nuclear matrix attachment regions (MARs) are defined as genomic DNA sequences, located at the physical boundaries of chromatin loops. They are suggested to play a role in the cis unfolding and folding of the chromatin fibre associated with the regulation of gene transcription. Inclusion of MARs i...
Meuter, Matthew L.; Chapman, Kenneth J.; Toy, Daniel; Wright, Lauren K.; McGowan, William
2009-01-01
This article describes a standardization process for an introductory marketing course with multiple sections. The authors first outline the process used to develop a standardized set of marketing concepts to be used in all introductory marketing classes. They then discuss the benefits to both students and faculty that occur as a result of…
Shared genetic variance between obesity and white matter integrity in Mexican Americans
Spieker, Elena A.; Kochunov, Peter; Rowland, Laura M.; Sprooten, Emma; Winkler, Anderson M.; Olvera, Rene L.; Almasy, Laura; Duggirala, Ravi; Fox, Peter T.; Blangero, John; Glahn, David C.; Curran, Joanne E.
2015-01-01
Obesity is a chronic metabolic disorder that may also lead to reduced white matter integrity, potentially due to shared genetic risk factors. Genetic correlation analyses were conducted in a large cohort of Mexican American families in San Antonio (N = 761, 58% females, ages 18–81 years; 41.3 ± 14.5) from the Genetics of Brain Structure and Function Study. Shared genetic variance was calculated between measures of adiposity [(body mass index (BMI; kg/m2) and waist circumference (WC; in)] and whole-brain and regional measurements of cerebral white matter integrity (fractional anisotropy). Whole-brain average and regional fractional anisotropy values for 10 major white matter tracts were calculated from high angular resolution diffusion tensor imaging data (DTI; 1.7 × 1.7 × 3 mm; 55 directions). Additive genetic factors explained intersubject variance in BMI (heritability, h2 = 0.58), WC (h2 = 0.57), and FA (h2 = 0.49). FA shared significant portions of genetic variance with BMI in the genu (ρG = −0.25), body (ρG = −0.30), and splenium (ρG = −0.26) of the corpus callosum, internal capsule (ρG = −0.29), and thalamic radiation (ρG = −0.31) (all p's = 0.043). The strongest evidence of shared variance was between BMI/WC and FA in the superior fronto-occipital fasciculus (ρG = −0.39, p = 0.020; ρG = −0.39, p = 0.030), which highlights region-specific variation in neural correlates of obesity. This may suggest that increase in obesity and reduced white matter integrity share common genetic risk factors. PMID:25763009
An entropy approach to size and variance heterogeneity
Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.
2012-01-01
In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented using an information-theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity simultaneously.
Analysis of Variance: What Is Your Statistical Software Actually Doing?
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Gender variance in Asia: discursive contestations and legal implications
Wieringa, S.E.
2010-01-01
A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the
Permutation tests for multi-factorial analysis of variance
Anderson, M.J.; Braak, ter C.J.F.
2003-01-01
Several permutation strategies are often possible for tests of individual terms in analysis-of-variance (ANOVA) designs. These include restricted permutations, permutation of whole groups of units, permutation of some form of residuals or some combination of these. It is unclear, especially for
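For the simplest one-way case, unrestricted permutation of the raw observations is exact under the null of full exchangeability; a minimal sketch of that strategy (function names are ours):

```python
import numpy as np

def f_stat(groups):
    """One-way ANOVA F statistic for a list of 1-d sample arrays."""
    pooled = np.concatenate(groups)
    grand = pooled.mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    k, n = len(groups), len(pooled)
    return (ssb / (k - 1)) / (ssw / (n - k))

def perm_pvalue(groups, n_perm=999, seed=0):
    """Permutation p-value from unrestricted permutation of the raw
    observations across groups."""
    rng = np.random.default_rng(seed)
    observed = f_stat(groups)
    cuts = np.cumsum([len(g) for g in groups])[:-1]
    pooled = np.concatenate(groups)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # each shuffle is a fresh permutation
        exceed += f_stat(np.split(pooled, cuts)) >= observed
    return (exceed + 1) / (n_perm + 1)
```

The restricted and residual-based strategies discussed above differ only in which units are exchanged, not in this basic resampling loop.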
Infinite variance in fermion quantum Monte Carlo calculations.
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
A mean-variance frontier in discrete and continuous time
Bekker, Paul A.
2004-01-01
The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation
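In the static one-period setting that this dynamic frontier generalizes, the frontier solves a quadratic programme with two linear constraints and has a standard closed form (notation and function name are ours, not the paper's continuous-time construction):

```python
import numpy as np

def frontier_weights(mu, cov, target):
    """Minimum-variance weights achieving expected return `target`
    under full investment: w = lam * inv(Sigma) mu + gam * inv(Sigma) 1,
    with lam, gam fixed by the two constraints."""
    mu = np.asarray(mu, dtype=float)
    ones = np.ones(len(mu))
    inv_mu = np.linalg.solve(cov, mu)
    inv_one = np.linalg.solve(cov, ones)
    a, b, c = mu @ inv_mu, mu @ inv_one, ones @ inv_one
    d = a * c - b * b
    lam = (c * target - b) / d
    gam = (a - b * target) / d
    return lam * inv_mu + gam * inv_one
```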
20 CFR 901.40 - Proof; variance; amendment of pleadings.
2010-04-01
20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF...
Multivariate Variance Targeting in the BEKK-GARCH Model
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Vertical velocity variances and Reynold stresses at Brookhaven
Busch, Niels E.; Brown, R.M.; Frizzola, J.A.
1970-01-01
Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind velocity component.
Common Persistence and Error-Correction Model in Conditional Variance
LI Han-dong; ZHANG Shi-ying
2001-01-01
We first define the persistence and common persistence of the vector GARCH process from the point of view of integration, and then discuss the sufficient and necessary condition for co-persistence in variance. At the end of this paper, we give the properties and the error-correction model of the vector GARCH process under the condition of co-persistence.
Variance Ranklets : Orientation-selective rank features for contrast modulations
Azzopardi, George; Smeraldi, Fabrizio
2009-01-01
We introduce a novel type of orientation-selective rank features that are sensitive to contrast modulations (second-order stimuli). Variance Ranklets are designed in close analogy with the standard Ranklets, but use the Siegel-Tukey statistic for dispersion instead of the Wilcoxon statistic. Their
A note on minimum-variance theory and beyond
Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom)]; Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]; Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]
2004-04-30
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes; only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals, to model outputs, to its implications for modelling firing patterns of single neurons.
Average local values and local variances in quantum mechanics
Muga, J G; Sala, P R
1998-01-01
Several definitions for the average local value and local variance of a quantum observable are examined and compared with their classical counterparts. An explicit way to construct an infinite number of these quantities is provided. It is found that different classical conditions may be satisfied by different definitions, but none of the quantum definitions examined is entirely consistent with all classical requirements.
Hedging with stock index futures: downside risk versus the variance
Brouwer, F.; Nat, van der M.
1995-01-01
In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to Fishburn...
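For reference, the minimum variance hedge ratio that the paper departs from, together with a semivariance-style downside measure, can be sketched as (illustrative functions, not the authors' estimators):

```python
import numpy as np

def min_variance_hedge_ratio(spot, futures):
    """h* = Cov(dS, dF) / Var(dF): the futures position per unit of
    spot that minimises the variance of the hedged return dS - h*dF."""
    cov = np.cov(spot, futures)
    return cov[0, 1] / cov[1, 1]

def semivariance(returns, target=0.0):
    """Downside risk: mean squared shortfall below `target`
    (a lower partial moment of order 2)."""
    shortfall = np.minimum(np.asarray(returns) - target, 0.0)
    return np.mean(shortfall ** 2)
```

A downside-risk hedge replaces the variance objective with `semivariance` (or another lower partial moment) and minimises it numerically over h.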
Multivariate variance targeting in the BEKK-GARCH model
Pedersen, Rasmus S.; Rahbæk, Anders
2014-01-01
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
A comparison between temporal and subband minimum variance adaptive beamforming
Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.
2014-01-01
This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated...
Gender variance in Asia: discursive contestations and legal implications
Wieringa, S.E.
2010-01-01
A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the impli
Infinite variance in fermion quantum Monte Carlo calculations
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
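A toy illustration of the phenomenon the abstract describes, well outside the authors' QMC setting: if U is uniform on (0, 1), then X = U**(-0.6) has a finite mean (2.5) but an infinite second moment, so the sample variance, and hence the naive Monte Carlo error bar, never stabilizes as the sample grows:

```python
import random

random.seed(7)

def sample_stats(n):
    # X = U**-0.6 has E[X] = 1/0.4 = 2.5 but E[X**2] = integral of u**-1.2,
    # which diverges, so Var(X) is infinite
    xs = [random.random() ** -0.6 for _ in range(n)]
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, var

mean_small, var_small = sample_stats(1_000)
mean_large, var_large = sample_stats(100_000)
# The mean estimate settles near 2.5, but the sample variance keeps jumping
# with sample size instead of converging, so any error bar built from it
# is unreliable: exactly the failure mode discussed above.
```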
Testing for causality in variance using multivariate GARCH models
C.M. Hafner (Christian); H. Herwartz
2004-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual
Perspective projection for variance pose face recognition from camera calibration
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is a challenging problem. We provide a solution for this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be identified using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the rest of the poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P < 0.05) for carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance considered as a source of heterogeneity of variance. Genetic
Genetic Variance for Autism Screening Items in an Unselected Sample of Toddler-Age Twins
Stilp, Rebecca L. H.; Gernsbacher, Morton Ann; Schweigert, Emily K.; Arneson, Carrie L.; Goldsmith, H. Hill
2010-01-01
Objective: Twin and family studies of autistic traits and of cases diagnosed with autism suggest high heritability; however, the heritability of autistic traits in toddlers has not been investigated. Therefore, this study's goals were (1) to screen a statewide twin population using items similar to the six critical social and communication items…
Simultaneous optimal estimates of fixed effects and variance components in the mixed model
WU Mixia; WANG Songgui
2004-01-01
For a general linear mixed model with two variance components, a set of simple conditions is obtained, under which, (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking negative value is obtained.
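For the balanced one-way random-effects model y_ij = mu + a_i + e_ij, the ANOVA estimates referred to above are MSE for the error variance and (MSA - MSE)/n for the group variance, and the latter can indeed take negative values, as the abstract notes. A minimal sketch of this special case (not the paper's general two-component mixed model):

```python
def anova_variance_components(groups):
    """ANOVA estimators for the balanced one-way random-effects model.

    groups: list of equally sized lists of observations.
    Returns (sigma_e2, sigma_a2); sigma_a2 may come out negative.
    """
    k, n = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    ssa = n * sum((m - grand) ** 2 for m in means)          # between-group SS
    sse = sum((y - m) ** 2                                  # within-group SS
              for g, m in zip(groups, means) for y in g)
    msa = ssa / (k - 1)
    mse = sse / (k * (n - 1))
    return mse, (msa - mse) / n

# Three groups of two observations; group means 2, 3, 4, grand mean 3
sigma_e2, sigma_a2 = anova_variance_components([[1.0, 3.0], [2.0, 4.0], [3.0, 5.0]])
```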
Luthria, Devanand L; Lin, Long-Ze; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M
2008-11-12
Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of two cultivars of broccoli, Majestic and Legacy, the first grown with four different levels of Se and the second grown organically and conventionally with two rates of irrigation. Chemical composition differences in the two cultivars and seven treatments produced patterns that were visually and statistically distinguishable using ANOVA-PCA. PCA loadings allowed identification of the molecular and fragment ions that provided the most significant chemical differences. A standardized profiling method for phenolic compounds showed that important discriminating ions were not phenolic compounds. The elution times of the discriminating ions and previous results suggest that they were common sugars and organic acids. ANOVA calculations of the positive and negative ionization MS fingerprints showed that 33% of the variance came from the cultivar, 59% from the growing treatment, and 8% from analytical uncertainty. Although the positive and negative ionization fingerprints differed significantly, there was no difference in the distribution of variance. High variance of individual masses with cultivars or growing treatment was correlated with high PCA loadings. The ANOVA data suggest that only variables with high variance for analytical uncertainty should be deleted. All other variables represent discriminating masses that allow separation of the samples with respect to cultivar and treatment.
García-Alonso, S.; Pérez-Pastor, R. M.; Archilla-Prat, V.; Rodríguez-Maroto, J.; Izquierdo-Díaz, M.; Rojas, E.; Sanz, D.
2015-12-01
A simple analytical method using low volumes of solvent for determining selected PAHs and NPAHs in PM samples is presented. The proposed extraction method was compared with pressurized fluid (PFE) and microwave (MC) extraction techniques, and the intermediate precision associated with the analytical measurements was estimated. Extraction by agitation with 8 mL of dichloromethane yielded recoveries above 80% compared to those obtained from PFE extraction. Regarding intermediate precision results, values between 10-20% were reached, showing increased dispersion for compounds with high volatility and low concentration levels. Within the framework of the INTA/CIEMAT research agreement for PM characterization in gas turbine exhaust, the method was applied to the analysis of aluminum foil substrates and quartz filters with mass loadings ranging from 0.02 to 2 mg per sample.
Nielsen, Niles-Peter Vest; Smedsgaard, Jørn; Frisvad, Jens Christian
1999-01-01
A data analysis method is proposed for identification and for confirmation of classification schemes, based on single- or multiple-wavelength chromatographic profiles. The proposed method works directly on the chromatographic data without data reduction procedures such as peak area or retention index calculation. Chromatographic matrices from analysis of previously identified samples are used for generating a reference chromatogram for each class, and unidentified samples are compared with all reference chromatograms by calculating a resemblance measure for each reference. Once the method … yielded over 90% agreement with accepted classifications. The method is highly accurate and may be used on all sorts of chromatographic profiles. Characteristic component analysis yielded results in good agreement with existing knowledge of characteristic components, but also succeeded in identifying new…
Litzow, Michael A.; Piatt, J.F.
2003-01-01
We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.
Jang, Y D; Lindemann, M D; Agudelo-Trujillo, J H; Escobar, C S; Kerr, B J; Inocencio, N; Cromwell, G L
2014-10-01
The intent of this study was to establish a fecal sampling procedure for the indicator method (IM) to provide digestibility values similar to those obtained by the total collection (TC) method. A total of 24 pigs (52.6 ± 1.5 kg) were fed 1 of 4 diets with a 2 × 2 factorial arrangement of virginiamycin and phytase (PHY) added to a corn-soybean meal diet with no inorganic P supplement. Pigs were housed in metabolism crates for a 5-d TC period after 7 d of adaptation. Immediately after the TC, a fecal collection period followed, using the IM by including 0.25% of Cr2O3 in the feed for 10 d. Fecal collection for the IM started the day after diets containing Cr2O3 were first fed, and continued for 9 consecutive days with a single grab sample per day. Similar portions of feces from d 5 to 9 were also composited into 4 samples to evaluate multi-day pooling combinations. Highly variable means and CV among samples for apparent total tract digestibility (ATTD) were observed at d 1 and 2 using the IM. The mean ATTD for DM, GE, and nutrients appeared to be stabilized by d 5 or 6 in all dietary treatments. The TC data seemed to have lower CV than the IM data for many components. Based on the linear broken-line analysis, fecal Cr concentration plateaued at d 3.75 (P < 0.001) after the first feeding of Cr. Mean ATTD values by the IM were lower than those by the TC method for DM (P < 0.05), GE (P < 0.01), P (P < 0.01), and Ca (P < 0.001). The PHY supplementation improved ATTD of P (P < 0.001) and Ca (P < 0.001) in both collection methods, whereas the PHY effect on ATTD of DM was observed only for the IM (P < 0.05). Differences related to PHY effect on ATTD were detected from d 4 to 9 in a single grab sample for P and DM but the ATTD of DM had inconsistent P-values by day. Fecal sampling after 4 d of initial feeding of marker always allowed detection of treatment effects on ATTD of P but not on ATTD of DM. Results indicated that the IM results in lower digestibility values than
Panea, I.; Drijkoningen, G.G.
2008-01-01
Coherent noise generated by surface waves or ground roll within a heterogeneous near surface is a major problem in land seismic data. Array forming based on single-sensor recordings might reduce such noise more robustly than conventional hardwired arrays. We use the minimum-variance
Minnett, P. J.; Liu, Y.; Kilpatrick, K. A.
2016-12-01
Sea-surface temperature (SST) measurements by satellites in the northern-hemisphere high latitudes confront several difficulties. Year-round prevalent clouds, effects near ice edges, and the relatively small difference between SST and low-level cloud temperatures lead to a significant loss of infrared observations despite the more frequent polar satellite overpasses. Recent research (Liu and Minnett, 2016) identified sampling issues in the Level 3 NASA MODIS SST products when 4 km observations are aggregated into global grids at different time and space scales, particularly in the Arctic, where a binary-decision cloud mask designed for global data is often overly conservative at high latitudes and results in many gaps and missing data. This undersampling of some Arctic regions results in a warm bias in Level 3 products, likely because warmer surface temperatures, more distant from the ice edge, are identified more frequently as cloud free. Here we present an improved method for cloud detection in the Arctic using a majority vote from an ensemble of four classifiers trained with an Alternating Decision Tree (ADT) algorithm (Freund and Mason 1999; Pfahringer et al. 2001). This new cloud classifier increases sampling of clear pixels by 50% in several regions and generally produces cooler monthly average SST fields in the ice-free Arctic, while retaining the same error characteristics at 1 km resolution relative to in situ observations. SST time series of 12 years of MODIS (Aqua and Terra) and the more recent VIIRS sensor are compared, and the improvements in errors and uncertainties resulting from better cloud screening for Level 3 gridded products are assessed and summarized.
Variance Entropy: A Method for Characterizing Perceptual Awareness of Visual Stimulus
Meng Hu
2012-01-01
Entropy, as a complexity measure, is a fundamental concept for time series analysis. Among many methods, sample entropy (SampEn has emerged as a robust, powerful measure for quantifying complexity of time series due to its insensitivity to data length and its immunity to noise. Despite its popular use, SampEn is based on the standardized data where the variance is routinely discarded, which may nonetheless provide additional information for discriminant analysis. Here we designed a simple, yet efficient, complexity measure, namely variance entropy (VarEn, to integrate SampEn with variance to achieve effective discriminant analysis. We applied VarEn to analyze local field potential (LFP collected from visual cortex of macaque monkey while performing a generalized flash suppression task, in which a visual stimulus was dissociated from perceptual experience, to study neural complexity of perceptual awareness. We evaluated the performance of VarEn in comparison with SampEn on LFP, at both single and multiple scales, in discriminating different perceptual conditions. Our results showed that perceptual visibility could be differentiated by VarEn, with significantly better discriminative performance than SampEn. Our findings demonstrate that VarEn is a sensitive measure of perceptual visibility, and thus can be used to probe perceptual awareness of a stimulus.
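Sample entropy itself is straightforward to sketch. The paper's exact VarEn combination rule is not reproduced in the abstract, so below the series variance is simply computed alongside SampEn as a hedged illustration; the function names, the absolute tolerance r, and the periodic test signal are assumptions:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -log(A/B): B counts template pairs of length m that match
    within tolerance r (Chebyshev distance), A counts matches at length m+1.
    Self-matches are excluded by pairing i < j. No guard for B == 0 or A == 0."""
    def matches(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(
            1
            for i in range(len(t))
            for j in range(i + 1, len(t))
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
        )
    return -math.log(matches(m + 1) / matches(m))

def variance(x):
    # The information SampEn discards by standardizing, which VarEn retains
    mu = sum(x) / len(x)
    return sum((v - mu) ** 2 for v in x) / (len(x) - 1)

signal = [1.0, 2.0] * 20   # strictly periodic, so its complexity is low
s = sample_entropy(signal, m=2, r=0.5)
v = variance(signal)
```

For this periodic signal SampEn comes out close to zero, while the variance (about 0.256) carries the amplitude information that standardization would throw away.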
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
On the Design of Attitude-Heading Reference Systems Using the Allan Variance.
Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis
2016-04-01
The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
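The core Allan variance computation is compact. This non-overlapped version over rate samples is a sketch of the basic estimator only, not the paper's full end-to-end characterization pipeline:

```python
import random

def allan_variance(y, m):
    """Non-overlapped Allan variance of rate samples y at cluster size m:
    average the samples in clusters of m, then take half the mean squared
    difference between successive cluster means."""
    k = len(y) // m
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
    return sum((means[i + 1] - means[i]) ** 2 for i in range(k - 1)) / (2 * (k - 1))

# For white noise the Allan variance falls off like 1/m, which is the
# signature used to identify noise terms on a log-log Allan deviation plot.
random.seed(3)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
a1 = allan_variance(white, 1)
a16 = allan_variance(white, 16)
```

Evaluating the estimator over a sweep of cluster sizes m and reading off the slopes of the resulting log-log curve is how bias instability, random walk, and white noise coefficients are separated in practice.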
Luo, Gaoyong; Osypiw, David
2006-02-01
Transmitting digital images via mobile devices is often subject to bandwidth limitations that are incompatible with high data rates. Embedded coding for progressive image transmission has recently gained popularity in the image compression community. However, current progressive wavelet-based image coders tend to send information on the lowest-frequency wavelet coefficients first. At very low bit rates, compressed images are therefore dominated by low-frequency information, and high-frequency components belonging to edges are lost, blurring the signal features. This paper presents a new image coder employing edge preservation based on local variance analysis to improve the visual appearance and recognizability of compressed images. The analysis and compression are performed by dividing an image into blocks. A fast lifting wavelet transform is developed with the advantages of being computationally efficient and of minimizing boundary effects by changing the wavelet shape for filtering near the boundaries. A modified SPIHT algorithm, with more bits used to encode the wavelet coefficients and fewer bits transmitted in the sorting pass for performance improvement, is implemented to reduce the correlation of the coefficients at scalable bit rates. Local variance estimation and edge strength measurement can effectively determine the best bit allocation for each block to preserve the local features, by assigning more bits to blocks containing more edges with higher variance and edge strength. Experimental results demonstrate that the method performs well both visually and in terms of MSE and PSNR. The proposed image coder provides a potential solution with parallel computation and low memory requirements for mobile applications.
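The variance-driven bit-allocation idea can be sketched independently of the wavelet coder itself: compute each block's pixel variance and split the bit budget in proportion to it. The block size, the budget, and the purely proportional rule are illustrative assumptions, not the paper's allocation scheme:

```python
def block_variances(pixels, width, block=8):
    """Split a flat grayscale image into block x block tiles and return
    each tile's (population) pixel variance, row-major over tiles."""
    height = len(pixels) // width
    out = []
    for by in range(0, height, block):
        for bx in range(0, width, block):
            tile = [pixels[(by + y) * width + (bx + x)]
                    for y in range(block) for x in range(block)]
            mu = sum(tile) / len(tile)
            out.append(sum((p - mu) ** 2 for p in tile) / len(tile))
    return out

def allocate_bits(variances, total_bits):
    """Give high-variance (edge-rich) blocks proportionally more of the budget."""
    s = sum(variances) or 1.0
    return [total_bits * v / s for v in variances]

# A 16x8 test image: flat left half, alternating stripes on the right half
pixels = []
for y in range(8):
    pixels += [0] * 8 + [255 * ((x + y) % 2) for x in range(8)]

variances = block_variances(pixels, 16)
bits = allocate_bits(variances, 1000)
```

Here the flat block gets no bits and the striped block gets the whole budget, mirroring the coder's preference for edge-containing blocks.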
Łukasiewicz, Kinga; Sanak, Marek; Węgrzyn, Grzegorz
2016-05-01
Various insects contain maternally inherited endosymbiotic bacteria which can cause reproductive alterations, modulation of some physiological responses (like immunity, heat shock response, and oxidative stress response), and resistance to viral infections. In butterflies, Wolbachia sp. is the most frequent endosymbiont from this group, occurring in about 30 % of species tested to date. In this report, the presence of Wolbachia-specific DNA has been detected in apollo butterfly (Parnassius apollo). In the isolated population of this insect occurring in Pieniny National Park (Poland), malformed individuals with deformed or reduced wings appear with an exceptionally high frequency. Interestingly, while total DNA isolated from most (about 85 %) normal insects contained Wolbachia-specific sequences detected by PCR, such sequences were absent in a large fraction (70 %) of individuals with deformed wings and in all tested individuals with reduced wings. These results indicate for the first time the correlation between malformation of wings and the absence of Wolbachia sp. in insects. Although the lack of the endosymbiotic bacteria cannot be considered as the sole cause of the deformation or reduction of wings, one might suggest that Wolbachia sp. could play a protective role in the ontogenetic development of apollo butterfly.
Convergence of Recursive Identification for ARMAX Process with Increasing Variances
JIN Ya; LUO Guiming
2007-01-01
The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model is a finite mixture of the ARMA component and external inputs. In this paper we focus on parameter estimation for the ARMAX model. Classical modeling methods are usually based on the assumption that the driven noise in the moving average (MA) part has bounded variances, while in the model considered here the variances of the noise may increase by a power of log n. The plant parameters are identified by the recursive stochastic gradient algorithm. The diminishing excitation technique and some results of martingale difference theory are adopted in order to prove the convergence of the identification. Finally, some simulations are given to show the theoretical results.
Climate variance influence on the non-stationary plankton dynamics.
Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine
2013-08-01
We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.
Multivariate Variance Targeting in the BEKK-GARCH Model
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these two steps. Strong consistency is established under weak moment conditions, while sixth order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are indeed necessary.
Explaining the Prevalence, Scaling and Variance of Urban Phenomena
Gomez-Lievano, Andres; Hausmann, Ricardo
2016-01-01
The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
Automated Extraction of Archaeological Traces by a Modified Variance Analysis
Tiziana D'Orazio
2015-03-01
Full Text Available This paper considers the problem of detecting archaeological traces in digital aerial images by analyzing the pixel variance over regions around selected points. In order to decide if a point belongs to an archaeological trace or not, its surrounding regions are considered. The one-way ANalysis Of VAriance (ANOVA is applied several times to detect the differences among these regions; in particular the expected shape of the mark to be detected is used in each region. Furthermore, an effect size parameter is defined by comparing the statistics of these regions with the statistics of the entire population in order to measure how strongly the trace is appreciable. Experiments on synthetic and real images demonstrate the effectiveness of the proposed approach with respect to some state-of-the-art methodologies.
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd ] n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and such a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
The return of the variance: intraspecific variability in community ecology.
Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie
2012-04-01
Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.
Totir, Liviu R; Fernando, Rohan L; Dekkers, Jack C M; Fernández, Soledad A; Guldbrandtsen, Bernt
2004-01-01
Under additive inheritance, the Henderson mixed model equations (HMME) provide an efficient approach to obtaining genetic evaluations by marker assisted best linear unbiased prediction (MABLUP) given pedigree relationships, trait and marker data. For large pedigrees with many missing markers, however, it is not feasible to calculate the exact gametic variance covariance matrix required to construct HMME. The objective of this study was to investigate the consequences of using approximate gametic variance covariance matrices on response to selection by MABLUP. Two methods were used to generate approximate variance covariance matrices. The first method (Method A) completely discards the marker information for individuals with an unknown linkage phase between two flanking markers. The second method (Method B) makes use of the marker information at only the most polymorphic marker locus for individuals with an unknown linkage phase. Data sets were simulated with and without missing marker data for flanking markers with 2, 4, 6, 8 or 12 alleles. Several missing marker data patterns were considered. The genetic variability explained by marked quantitative trait loci (MQTL) was modeled with one or two MQTL of equal effect. Response to selection by MABLUP using Method A or Method B were compared with that obtained by MABLUP using the exact genetic variance covariance matrix, which was estimated using 15,000 samples from the conditional distribution of genotypic values given the observed marker data. For the simulated conditions, the superiority of MABLUP over BLUP based only on pedigree relationships and trait data varied between 0.1% and 13.5% for Method A, between 1.7% and 23.8% for Method B, and between 7.6% and 28.9% for the exact method. The relative performance of the methods under investigation was not affected by the number of MQTL in the model.
Lynn, Richard; Chen, Hsin-Yi; Chen, Yung-Hua
2011-07-01
Data for Raven's Progressive Matrices are reported for a sample of 6290 6- to 17-year-olds in Taiwan. The Taiwanese obtained a mean IQ of 109.5, in relation to a British mean of 100. There was no difference in mean scores of boys and girls at age 7 years. At age 10 years girls obtained significantly higher scores than boys, and at ages 13 and 16 years boys obtained significantly higher scores than girls. There was no sex difference in variance at age 7 years. At ages 10, 13 and 16 years variance was significantly greater in boys.
Analysis of Variance in the Modern Design of Experiments
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
Seasonal variance in P system models for metapopulations
Daniela Besozzi; Paolo Cazzaniga; Dario Pescini; Giancarlo Mauri
2007-01-01
Metapopulations are ecological models describing the interactions and the behavior of populations living in fragmented habitats. In this paper, metapopulations are modelled by means of dynamical probabilistic P systems, where additional structural features have been defined (e. g., a weighted graph associated with the membrane structure and the reduction of maximal parallelism). In particular, we investigate the influence of stochastic and periodic resource feeding processes, owing to seasonal variance, on emergent metapopulation dynamics.
Estimating High-Frequency Based (Co-) Variances: A Unified Approach
Voev, Valeri; Nolte, Ingmar
We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... frequency derived in Bandi & Russell (2005a) and Bandi & Russell (2005b). For a realistic trading scenario, the efficiency gains resulting from our approach are in the range of 35% to 50%....
VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM
RANJU KANWAR; SAMEKSHA BHASKAR
2013-01-01
In communication system, the noise process must be known, in order to compute the system performance. The nonlinear effects act as strong perturbation in long- haul system. This perturbation effects the signal, when interact with amplitude noise, and results in random motion of the phase of the signal. Based on the perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation, is derived analytically for phase-shift- keying system. Through th...
Recombining binomial tree for constant elasticity of variance process
Hi Jun Choe; Jeong Ho Chu; So Jeong Shin
2014-01-01
The theme in this paper is the recombining binomial tree to price American put option when the underlying stock follows constant elasticity of variance(CEV) process. Recombining nodes of binomial tree are decided from finite difference scheme to emulate CEV process and the tree has a linear complexity. Also it is derived from the differential equation the asymptotic envelope of the boundary of tree. Conducting numerical experiments, we confirm the convergence and accuracy of the pricing by ou...
PARAMETER-ESTIMATION FOR ARMA MODELS WITH INFINITE VARIANCE INNOVATIONS
MIKOSCH, T; GADRICH, T; KLUPPELBERG, C; ADLER, RJ
We consider a standard ARMA process of the form phi(B)X(t) = B(B)Z(t), where the innovations Z(t) belong to the domain of attraction of a stable law, so that neither the Z(t) nor the X(t) have a finite variance. Our aim is to estimate the coefficients of phi and theta. Since maximum likelihood
LaRochelle, Jeffrey S; Dong, Ting; Durning, Steven J
2016-08-01
Evidence suggests that pre-clerkship courses in clinical skills and clinical reasoning positively impact student performance on the clerkship. Given the increasing emphasis on reducing diagnostic reasoning errors, it is very important to develop this critical area of medical education. An integrated approach between clinical skills and clinical reasoning courses may better predict struggling learners, and better allocate scarce resources to remediate these learners before the clerkship. Pre-clerkship and clerkship outcome measures from 514 medical students graduating between 2009 and 2011were analyzed in a multiple linear regression model. Learners with poor performances on integrated pre-clerkship outcome measures had a relative risk of 6.96 and 5.85 for poor performance on National Board of Medical Examiners (NBME) subject exams and clerkship performance, respectively, and explained 22 % of the variance in clerkship NBME subject exam scores and 20.2 % of the variance in clerkship grades. Pre-clerkship outcome measures from clinical skills and clinical reasoning courses explained a significant amount of clerkship performance beyond baseline academic ability. These courses provide valuable information regarding student abilities, and may serve as an early indicator for students requiring remediation. Integrating pre-clerkship outcome measures may be an important aspect of ensuring the validity of this information as the pre-clerkship curriculum becomes compressed, and may serve as the basis for identifying students in need of clinical skills remediation.
Relationship between Allan variances and Kalman Filter parameters
Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.
1984-01-01
A relationship was constructed between the Allan variance parameters (H sub z, H sub 1, H sub 0, H sub -1 and H sub -2) and a Kalman Filter model that would be used to estimate and predict clock phase, frequency and frequency drift. To start with the meaning of those Allan Variance parameters and how they are arrived at for a given frequency source is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily that of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model which can then be used to derive the Kalman Filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two state Kalman Filter model is then derived and the significance of each state is explained.
Dynamic Programming Using Polar Variance for Image Segmentation.
Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J
2016-10-06
When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used. This is because if size is left unconstrained the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis consisted on comparing our technique with different active contour segmentation techniques on a series of tests. The tests consisted on robustness to additive Gaussian noise, segmentation accuracy with different grayscale images and finally robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, since the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
Measuring primordial non-gaussianity without cosmic variance
Seljak, Uros
2008-01-01
Non-gaussianity in the initial conditions of the universe is one of the most powerful mechanisms to discriminate among the competing theories of the early universe. Measurements using bispectrum of cosmic microwave background anisotropies are limited by the cosmic variance, i.e. available number of modes. Recent work has emphasized the possibility to probe non-gaussianity of local type using the scale dependence of large scale bias from highly biased tracers of large scale structure. However, this power spectrum method is also limited by cosmic variance, finite number of structures on the largest scales, and by the partial degeneracy with other cosmological parameters that can mimic the same effect. Here we propose an alternative method that solves both of these problems. It is based on the idea that on large scales halos are biased, but not stochastic, tracers of dark matter: by correlating a highly biased tracer of large scale structure against an unbiased tracer one eliminates the cosmic variance error, wh...
Kelly, David; Budd, Kenneth; Lefebvre, Daniel D
2006-01-01
The biotransformation of Hg(II) in pH-controlled and aerated algal cultures was investigated. Previous researchers have observed losses in Hg detection in vitro with the addition of cysteine under acid reduction conditions in the presence of SnCl2. They proposed that this was the effect of Hg-thiol complexing. The present study found that cysteine-Hg, protein and nonprotein thiol chelates, and nucleoside chelates of Hg were all fully detectable under acid reduction conditions without previous digestion. Furthermore, organic (R-Hg) mercury compounds could not be detected under either the acid or alkaline reduction conditions, and only beta-HgS was detected under alkaline and not under acid SnCl2 reduction conditions. The blue-green alga Limnothrix planctonica biotransformed the bulk of Hg(II) applied as HgCl2 into a form with the analytical properties of beta-HgS. Similar results were obtained for the eukaryotic alga Selenastrum minutum. No evidence for the synthesis of organomercurials such as CH3Hg+ was obtained from analysis of either airstream or biomass samples under the aerobic conditions of the study. An analytical procedure that involved both acid and alkaline reduction was developed. It provides the first selective method for the determination of beta-HgS in biological samples. Under aerobic conditions, Hg(II) is biotransformed mainly into beta-HgS (meta-cinnabar), and this occurs in both prokaryotic and eukaryotic algae. This has important implications with respect to identification of mercury species and cycling in aquatic habitats.