WorldWideScience

Sample records for variance maximum precision

  1. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.; Alkhalifah, Tariq Ali

    2017-01-01

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which makes the maximum variance imaging condition efficient as well as effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model, both of which show reasonably good results.
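
    A minimal NumPy sketch of the moving-window variance scan described above (the wavefield layout u(t, z, x), the window size, and the toy data are illustrative assumptions, not the authors' implementation):

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def max_variance_image(wavefield, win=(5, 5, 5)):
        """Scan a back-propagated wavefield u(t, z, x) with a moving
        space-time window; keep the maximum windowed amplitude
        variance per spatial cell (illustrative sketch)."""
        windows = sliding_window_view(wavefield, win)
        var = windows.var(axis=(-3, -2, -1))   # variance inside each window
        return var.max(axis=0)                 # collapse the time axis

    # toy wavefield: background noise plus one coherent focus
    rng = np.random.default_rng(0)
    u = 0.1 * rng.standard_normal((64, 32, 32))
    u[30:35, 15:20, 15:20] += 1.0              # focused passive event
    image = max_variance_image(u)
    print(np.unravel_index(image.argmax(), image.shape))
    ```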

  3. Improving precision in gel electrophoresis by stepwisely decreasing variance components.

    Science.gov (United States)

    Schröder, Simone; Brandmüller, Asita; Deng, Xi; Ahmed, Aftab; Wätzig, Hermann

    2009-10-15

    Many methods have been developed in order to increase selectivity and sensitivity in proteome research. However, gel electrophoresis (GE) which is one of the major techniques in this area, is still known for its often unsatisfactory precision. Percental relative standard deviations (RSD%) up to 60% have been reported. In this case the improvement of precision and sensitivity is absolutely essential, particularly for the quality control of biopharmaceuticals. Our work reflects the remarkable and completely irregular changes of the background signal from gel to gel. This irregularity was identified as one of the governing error sources. These background changes can be strongly reduced by using a signal detection in the near-infrared (NIR) range. This particular detection method provides the most sensitive approach for conventional CCB (Colloidal Coomassie Blue) stained gels, which is reflected in a total error of just 5% (RSD%). In order to further investigate variance components in GE, an experimental Plackett-Burman screening design was performed. The influence of seven potential factors on the precision was investigated using 10 proteins with different properties analyzed by NIR detection. The results emphasized the individuality of the proteins. Completely different factors were identified to be significant for each protein. However, out of seven investigated parameters, just four showed a significant effect on some proteins, namely the parameters of: destaining time, staining temperature, changes of detergent additives (SDS and LDS) in the sample buffer, and the age of the gels. As a result, precision can only be improved individually for each protein or protein classes. Further understanding of the unique properties of proteins should enable us to improve the precision in gel electrophoresis.

  4. Precise charge density studies by maximum entropy method

    CERN Document Server

    Takata, M

    2003-01-01

    For the production research and development of nanomaterials, their structural information is indispensable. Recently, a sophisticated analytical method, which is based on information theory, the Maximum Entropy Method (MEM) using synchrotron radiation powder data, has been successfully applied to determine precise charge densities of metallofullerenes and nanochannel microporous compounds. The results revealed various endohedral natures of metallofullerenes and one-dimensional array formation of adsorbed gas molecules in nanochannel microporous compounds. The concept of MEM analysis was also described briefly. (author)

  5. Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting

    Directory of Open Access Journals (Sweden)

    Stanislav Anatolyev

    2015-08-01

    Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while the distributional asymmetry has little or moderate impact; these phenomena tend to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
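
    A hedged sketch of the key ingredient being compared: Gaussian QML estimation of a GARCH(1,1) model in which variance targeting pins the intercept to omega = s2*(1 - alpha - beta), with s2 the sample variance, so only (alpha, beta) remain free (data and starting values below are placeholders):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def garch11_nll(params, r):
        """Gaussian QML negative log-likelihood for GARCH(1,1) with
        variance targeting: omega is implied, not estimated."""
        a, b = params
        if a < 0 or b < 0 or a + b >= 1:
            return np.inf
        omega = np.var(r) * (1.0 - a - b)      # variance targeting
        h = np.empty_like(r)
        h[0] = np.var(r)
        for t in range(1, len(r)):
            h[t] = omega + a * r[t-1]**2 + b * h[t-1]
        return 0.5 * np.sum(np.log(h) + r**2 / h)

    rng = np.random.default_rng(1)
    r = 0.01 * rng.standard_normal(2000)       # placeholder return series
    fit = minimize(garch11_nll, x0=[0.05, 0.90], args=(r,), method="Nelder-Mead")
    print(fit.x)                               # (alpha, beta)
    ```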

  6. Statistical precision of delayed-neutron nondestructive assay techniques

    International Nuclear Information System (INIS)

    Bayne, C.K.; McNeany, S.R.

    1979-02-01

    A theoretical analysis of the statistical precision of delayed-neutron nondestructive assay instruments is presented. Such instruments measure the fissile content of nuclear fuel samples by neutron irradiation and delayed-neutron detection. The precision of these techniques is limited by the statistical nature of the nuclear decay process, but the precision can be optimized by proper selection of system operating parameters. Our method is a three-part analysis. We first present differential-difference equations describing the fundamental physics of the measurements. We then derive and present complete analytical solutions to these equations. Final equations governing the expected number and variance of delayed-neutron counts were computer programmed to calculate the relative statistical precision of specific system operating parameters. Our results show that Poisson statistics do not govern the number of counts accumulated in multiple irradiation-count cycles and that, in general, maximum count precision does not correspond with maximum count as first expected. Covariance between the counts of individual cycles must be considered in determining the optimum number of irradiation-count cycles and the optimum irradiation-to-count time ratio. For the assay system in use at ORNL, covariance effects are small, but for systems with short irradiation-to-count transition times, covariance effects force the optimum number of irradiation-count cycles to be half those giving maximum count. We conclude that the equations governing the expected value and variance of delayed-neutron counts have been derived in closed form. These have been computerized and can be used to select optimum operating parameters for delayed-neutron assay devices.
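
    The covariance effect described here follows from the standard variance identity for a sum of cycle counts: with positive inter-cycle covariance the total count is over-dispersed relative to Poisson, so maximum count need not give maximum precision.

    ```latex
    \mathrm{Var}\!\Big(\sum_{i=1}^{k} N_i\Big)
      \;=\; \sum_{i=1}^{k} \mathrm{Var}(N_i)
      \;+\; 2\!\sum_{i<j} \mathrm{Cov}(N_i, N_j)
    ```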

  7. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Science.gov (United States)

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  8. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    Science.gov (United States)

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  9. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and is thus an attractive choice.
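
    For concreteness, the standard results for the Maxwell distribution with scale parameter a (a sketch of textbook formulas, not the paper's derivation): the maximum likelihood estimator and the minimum variance (Cramér-Rao) bound it attains asymptotically are

    ```latex
    f(x; a) \;=\; \sqrt{\tfrac{2}{\pi}}\,\frac{x^{2}}{a^{3}}\,
                  e^{-x^{2}/(2a^{2})}, \quad x > 0,
    \qquad
    \hat{a} \;=\; \sqrt{\frac{1}{3n}\sum_{i=1}^{n} x_i^{2}},
    \qquad
    \mathrm{Var}(\hat{a}) \;\ge\; \frac{a^{2}}{6n},
    ```

    the bound following from the per-observation Fisher information I(a) = 6/a².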

  10. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  11. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
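
    A minimal sketch of the closed-form minimum-variance portfolio that underlies all of these comparisons (the toy covariance is illustrative; the paper's GARCH-based covariance estimators are not reproduced):

    ```python
    import numpy as np

    def min_variance_weights(cov):
        """Fully invested minimum-variance portfolio:
        w = C^{-1} 1 / (1' C^{-1} 1)."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    # toy sample covariance from simulated returns (illustrative only)
    rng = np.random.default_rng(42)
    true_cov = 0.01 * (np.eye(4) + 0.3 * (np.ones((4, 4)) - np.eye(4)))
    returns = rng.multivariate_normal(np.zeros(4), true_cov, size=500)
    w = min_variance_weights(np.cov(returns, rowvar=False))
    print(w, w.sum())                      # weights sum to 1
    ```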

  12. A High-precision Motion Compensation Method for SAR Based on Image Intensity Optimization

    Directory of Open Access Journals (Sweden)

    Hu Ke-bin

    2015-02-01

    Owing to the platform instability and precision limitations of motion sensors, motion errors negatively affect the quality of synthetic aperture radar (SAR) images. The autofocus Back Projection (BP) algorithm based on the optimization of image sharpness compensates for motion errors through phase error estimation. This method can attain relatively good performance while assuming the same phase error for all pixels, i.e., it ignores the spatial variance of motion errors. To overcome this drawback, a high-precision motion error compensation method is presented in this study. In the proposed method, the Antenna Phase Centers (APC) are estimated via optimization using the criterion of maximum image intensity. Then, the estimated APCs are applied for BP imaging. Because the APC estimation equals the range history estimation for each pixel, high-precision phase compensation for every pixel can be achieved. Point-target simulations and processing of experimental data validate the effectiveness of the proposed method.

  13. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  14. Robust estimation of the noise variance from background MR data

    NARCIS (Netherlands)

    Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.

    2006-01-01

    In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum

  15. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    Science.gov (United States)

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 x 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators (v) that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
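
    A compact Monte Carlo sketch of the central comparison (clustered population, 100 transects, random versus one-start aligned systematic allocation; the intensity surface and cluster counts are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # clustered population: Poisson counts on a 100 x 100 grid of transects
    centers = rng.uniform(0, 100, size=(30, 2))
    xx, yy = np.meshgrid(np.arange(100), np.arange(100), indexing="ij")
    lam = np.zeros((100, 100))
    for cx, cy in centers:
        lam += 5.0 * np.exp(-((xx - cx)**2 + (yy - cy)**2) / 20.0)
    pop = rng.poisson(lam)

    def random_mean():                         # 100 random transects
        idx = rng.choice(pop.size, size=100, replace=False)
        return pop.ravel()[idx].mean()

    def systematic_mean():                     # aligned 10 x 10 grid
        ox, oy = rng.integers(0, 10, size=2)   # random start
        return pop[ox::10, oy::10].mean()

    reps = 5000
    var_rand = np.var([random_mean() for _ in range(reps)])
    var_syst = np.var([systematic_mean() for _ in range(reps)])
    print(var_rand / var_syst)   # >1: systematic sampling is more precise
    ```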

  16. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity.

  17. Covariate Imbalance and Precision in Measuring Treatment Effects

    Science.gov (United States)

    Liu, Xiaofeng Steven

    2011-01-01

    Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…

  18. Predicting the Maximum Dynamic Strength in Bench Press: The High Precision of the Bar Velocity Approach.

    Science.gov (United States)

    Loturco, Irineu; Kobal, Ronaldo; Moraes, José E; Kitamura, Katia; Cal Abad, César C; Pereira, Lucas A; Nakamura, Fábio Y

    2017-04-01

    Loturco, I, Kobal, R, Moraes, JE, Kitamura, K, Cal Abad, CC, Pereira, LA, and Nakamura, FY. Predicting the maximum dynamic strength in bench press: the high precision of the bar velocity approach. J Strength Cond Res 31(4): 1127-1131, 2017. The aim of this study was to determine the force-velocity relationship and test the possibility of determining the 1 repetition maximum (1RM) in "free weight" and Smith machine bench presses. Thirty-six male top-level athletes from 3 different sports were submitted to a standardized 1RM bench press assessment (free weight or Smith machine, in randomized order), following standard procedures encompassing lifts performed at 40-100% of 1RM. The mean propulsive velocity (MPV) was measured in all attempts. A linear regression was performed to establish the relationships between bar velocities and 1RM percentages. The actual and predicted 1RM for each exercise were compared using a paired t-test. Although the Smith machine 1RM was higher (10% difference) than the free weight 1RM, in both cases the actual and predicted values did not differ. In addition, the linear relationship between MPV and percentage of 1RM (coefficient of determination ≥95%) allows determination of training intensity based on the bar velocity. The linear relationships between the MPVs and the relative percentages of 1RM throughout the entire range of loads enable coaches to use the MPV to accurately monitor their athletes on a daily basis and accurately determine their actual 1RM without the need to perform standard maximum dynamic strength assessments.
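
    A minimal sketch of the load-velocity regression behind such predictions (the loads, velocities, and minimal-velocity threshold below are hypothetical numbers, not the study's data):

    ```python
    import numpy as np

    # submaximal bench-press attempts: load (kg) and mean propulsive velocity (m/s)
    load = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
    mpv  = np.array([1.10, 0.92, 0.75, 0.55, 0.38])

    # linear load-velocity relationship: load = b0 + b1 * MPV
    b1, b0 = np.polyfit(mpv, load, 1)

    # predicted 1RM: extrapolate to an assumed velocity at the 1RM attempt
    v_1rm = 0.17                 # hypothetical threshold (m/s)
    print(round(b0 + b1 * v_1rm, 1), "kg")
    ```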

  19. Combining within and between instrument information to estimate precision

    International Nuclear Information System (INIS)

    Jost, J.W.; Devary, J.L.; Ward, J.E.

    1980-01-01

    When two instruments, both having replicated measurements, are used to measure the same set of items, between instrument information may be used to augment the within instrument precision estimate. A method is presented which combines the within and between instrument information to obtain an unbiased and minimum variance estimate of instrument precision. The method does not assume the instruments have equal precision.
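
    A hedged sketch of the two information sources being pooled; the equal-weight combination below is one simple possibility for illustration, not the paper's minimum-variance estimator:

    ```python
    import numpy as np

    # y[j, i, k]: instrument j, item i, replicate k (simulated data)
    rng = np.random.default_rng(3)
    truth = rng.normal(10, 2, size=20)
    y = np.stack([
        truth[:, None] + rng.normal(0, 0.30, (20, 5)),   # instrument 1
        truth[:, None] + rng.normal(0, 0.50, (20, 5)),   # instrument 2
    ])
    r = y.shape[2]

    # within-instrument information: pooled replicate variances
    within = y.var(axis=2, ddof=1).mean(axis=1)          # (s1^2, s2^2)

    # between-instrument information: item-mean differences give an
    # independent unbiased estimate of s1^2 + s2^2
    d = y[0].mean(axis=1) - y[1].mean(axis=1)
    between_sum = d.var(ddof=1) * r

    # simple equal-weight pooling of the two estimates of s1^2 + s2^2
    pooled_sum = 0.5 * (within.sum() + between_sum)
    print(within, between_sum, pooled_sum)
    ```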

  20. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    Science.gov (United States)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations recorded over time; they are said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.

  1. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
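
    A heavily simplified NumPy sketch of the variance-guided step (the threshold rule, window size, and stand-in updates are assumptions; the full-waveform-inversion outer loop is not reproduced):

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    def variance_guided_fill(velocity, updates, tau=0.75, size=9):
        """Flag cells where monofrequency updates disagree (high pointwise
        variance, taken as a cycle-skipping indicator) and replace the
        velocity there by the local maximum velocity."""
        v = np.var(np.stack(updates), axis=0)   # variance across frequencies
        mask = v > np.quantile(v, tau)          # assumed threshold rule
        filled = velocity.copy()
        filled[mask] = maximum_filter(velocity, size=size)[mask]
        return filled

    # toy usage with random stand-ins for per-frequency updates
    rng = np.random.default_rng(5)
    vel = 2000.0 + 500.0 * rng.random((50, 50))
    ups = [vel + rng.normal(0, 100, vel.shape) for _ in range(6)]
    print(variance_guided_fill(vel, ups).shape)
    ```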

  2. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple

  3. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

  5. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  6. Variance components estimation for farrowing traits of three purebred pigs in Korea

    Directory of Open Access Journals (Sweden)

    Bryan Irvine Lopez

    2017-09-01

    Objective: This study was conducted to estimate breed-specific variance components for total number born (TNB), number born alive (NBA) and mortality rate from birth through weaning including stillbirths (MORT) of three main swine breeds in Korea. In addition, the importance of including maternal genetic and service sire effects in estimation models was evaluated. Methods: Records of farrowing traits from 6,412 Duroc, 18,020 Landrace, and 54,254 Yorkshire sows collected from January 2001 to September 2016 from different farms in Korea were used in the analysis. Animal models and the restricted maximum likelihood method were used to estimate the animal genetic, permanent environmental, maternal genetic, service sire and residual variances. Results: The heritability estimates ranged from 0.072 to 0.102, 0.090 to 0.099, and 0.109 to 0.121 for TNB; 0.087 to 0.110, 0.088 to 0.100, and 0.099 to 0.107 for NBA; and 0.027 to 0.031, 0.050 to 0.053, and 0.073 to 0.081 for MORT in the Duroc, Landrace and Yorkshire breeds, respectively. The proportion of the total variation due to permanent environmental effects, maternal genetic effects, and service sire effects ranged from 0.042 to 0.088, 0.001 to 0.031, and 0.001 to 0.021, respectively. Spearman rank correlations among models ranged from 0.98 to 0.99, demonstrating that the maternal genetic and service sire effects have little effect on the precision of the breeding value. Conclusion: Models that include additive genetic and permanent environmental effects are suitable for farrowing traits in Duroc, Landrace, and Yorkshire populations in Korea. These breed-specific variance component estimates for litter traits can be utilized for pig improvement programs in Korea.

  7. Spike timing precision of neuronal circuits.

    Science.gov (United States)

    Kilinc, Deniz; Demir, Alper

    2018-04-17

    Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.

  8. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
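
    A sketch of the modelling idea using simple moment matching in place of the paper's marginal-likelihood procedure (the signal and smoothing window are invented): an inverse gamma with shape α and scale β has mean β/(α−1) and variance β²/((α−1)²(α−2)), which can be inverted from the sample moments of local variance estimates.

    ```python
    import numpy as np

    def invgamma_moment_fit(local_var):
        """Moment-match an inverse gamma distribution to local EMG
        variance estimates (sketch; not the marginal-likelihood method)."""
        m, s2 = local_var.mean(), local_var.var()
        alpha = m**2 / s2 + 2.0      # shape
        beta = m * (alpha - 1.0)     # scale
        return alpha, beta

    # rectify-and-smooth a toy EMG trace to get running variance estimates
    rng = np.random.default_rng(11)
    emg = rng.standard_normal(20000) * (0.5 + 0.4 * rng.random(20000))
    window = np.ones(200) / 200.0
    local_var = np.convolve(emg**2, window, mode="valid")
    print(invgamma_moment_fit(local_var))
    ```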

  9. STUDY LINKS SOLVING THE MAXIMUM TASK OF LINEAR CONVOLUTION «EXPECTED RETURNS-VARIANCE» AND THE MINIMUM VARIANCE WITH RESTRICTIONS ON RETURNS

    Directory of Open Access Journals (Sweden)

    Maria S. Prokhorova

    2014-01-01

    The article deals with a study of the problems of finding the optimal portfolio of securities using convolutions of the expected portfolio return and the portfolio variance. The value of the coefficient of risk at which the problem of maximizing the variance-limited yield is equivalent to maximizing a linear convolution of the «expected returns-variance» criteria is obtained. An automated method for finding the optimal portfolio is proposed, on the basis of which the results of the study are demonstrated.

  10. Application of the FW-CADIS variance reduction method to calculate a precise N-flux distribution for the FRJ-2 research reactor

    International Nuclear Information System (INIS)

    Abbasi, F.; Nabbi, R.; Thomauske, B.; Ulrich, J.

    2014-01-01

    For the decommissioning of nuclear facilities, activity and dose rate atlases (ADAs) are required to create and manage a decommissioning plan and optimize the radiation protection measures. By the example of the research reactor FRJ-2, a detailed MCNP model for Monte-Carlo neutron and radiation transport calculations based on a full scale outer core CAD-model was generated. To cope with the inadequacies of the MCNP code for the simulation of a large and complex system like FRJ-2, the FW-CADIS method was embedded in the MCNP simulation runs to optimise particle sampling and weighting. The MAVRIC sequence of the SCALE6 program package, capable of generating importance maps, was applied for this purpose. The application resulted in a significant increase in efficiency and performance of the whole simulation method and in optimised utilization of the computer resources. As a result, the distribution of the neutron flux in the entire reactor structures - as a basis for the generation of the detailed activity atlas - was produced with a low level of variance and a high level of spatial, numerical and statistical precision.

  12. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then built on it in this paper. By partitioning the sample points of the output into different subsets according to each input, the proposed approach can evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme. The maximum length of the subintervals decreases as the number of sample points of the model inputs increases, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the partition idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application demonstrate the accuracy, efficiency and robustness of the proposed approach.
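
    A minimal sketch of the space-partition idea (the bin count and test function are arbitrary choices): sort the output by each input, split it into successive non-overlapping intervals, and apply the law of total variance to get every first-order index from a single sample set.

    ```python
    import numpy as np

    def first_order_indices(X, y, n_bins=20):
        """Estimate S_i = Var(E[y | X_i]) / Var(y) by partitioning each
        input's samples into successive, non-overlapping intervals."""
        total, S = y.var(), []
        for i in range(X.shape[1]):
            order = np.argsort(X[:, i])
            bins = np.array_split(y[order], n_bins)   # successive intervals
            means = np.array([b.mean() for b in bins])
            sizes = np.array([b.size for b in bins])
            S.append(np.average((means - y.mean())**2, weights=sizes) / total)
        return np.array(S)

    # toy model: y depends strongly on x1, weakly on x2, not at all on x3
    rng = np.random.default_rng(2)
    X = rng.uniform(-1, 1, size=(100_000, 3))
    y = 4.0 * X[:, 0]**2 + 0.5 * X[:, 1] + 0.0 * X[:, 2]
    print(first_order_indices(X, y).round(3))
    ```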

  13. High Efficiency Computation of the Variances of Structural Evolutionary Random Responses

    Directory of Open Access Journals (Sweden)

    J.H. Lin

    2000-01-01

    For structures subjected to stationary or evolutionary white/colored random noise, their various response variances satisfy algebraic or differential Lyapunov equations. The solution of these Lyapunov equations used to be very difficult. A precise integration method is proposed in the present paper, which solves such Lyapunov equations accurately and very efficiently.
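
    The algebraic case can be illustrated directly (the paper's precise integration scheme is not reproduced here): for a stable linear system dx = A x dt + B dW driven by white noise, the stationary response covariance P solves the Lyapunov equation A P + P Aᵀ + B Bᵀ = 0, which SciPy solves in one call. The matrices below are an invented oscillator.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0],
                  [-4.0, -0.4]])     # illustrative damped oscillator
    B = np.array([[0.0], [1.0]])

    # solve A P + P A' = -B B' for the stationary covariance P
    P = solve_continuous_lyapunov(A, -B @ B.T)
    print(np.diag(P))                # stationary response variances
    ```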

  14. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  15. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    Science.gov (United States)

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

    The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.

  16. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
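
    Under the multivariate normal model, the ML combination of several correlated estimates of one quantity reduces to a generalized-least-squares weighted mean; a sketch with invented numbers:

    ```python
    import numpy as np

    def combine_correlated(estimates, cov):
        """Minimum-variance unbiased combination of correlated estimates:
        weights w = C^{-1} 1 / (1' C^{-1} 1); variance 1 / (1' C^{-1} 1)."""
        ones = np.ones(len(estimates))
        u = np.linalg.solve(cov, ones)
        return (u @ estimates) / u.sum(), 1.0 / u.sum()

    # three correlated Monte Carlo eigenvalue estimates (illustrative)
    k = np.array([1.0012, 0.9987, 1.0005])
    C = 1e-6 * np.array([[4.0, 1.2, 0.8],
                         [1.2, 3.0, 0.9],
                         [0.8, 0.9, 5.0]])
    mean, var = combine_correlated(k, C)
    print(mean, np.sqrt(var))
    ```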

  17. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  18. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
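
    The realized counterpart of this decomposition simply splits squared returns by sign (the option-implied legs of the premia are not sketched here; the returns are placeholders):

    ```python
    import numpy as np

    def realized_semivariances(returns):
        """Split realized variance into upside and downside components:
        RV = RV_up + RV_down."""
        r = np.asarray(returns)
        return np.sum(r[r > 0] ** 2), np.sum(r[r < 0] ** 2)

    rng = np.random.default_rng(9)
    r = 0.01 * rng.standard_normal(252) - 0.0002   # placeholder daily returns
    rv_up, rv_down = realized_semivariances(r)
    print(rv_up, rv_down, rv_up + rv_down)
    ```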

  19. Investigation of a precise static leach test for the testing of simulated nuclear waste materials

    International Nuclear Information System (INIS)

    Kingston, H.M.; Cronin, D.J.; Epstein, M.S.

    1984-01-01

    The precision of the nuclear waste static leach test was evaluated using controlled experimental conditions and homogeneous glass materials. The majority of the leachate components were subjected to simultaneous multielement DCP analysis. The overall precision of the static leach test is determined by the summation of random effects caused by: variance in the experimental conditions of the leaching procedure; inhomogeneity of the material to be leached; and variance of the analytical techniques used to determine elemental concentrations in the leachate. In this study, strict control of key experimental parameters was employed to reduce the first source of variance. In addition, special attention to the preparation of glass samples to be tested assured a high degree of homogeneity. Described here are the details of the reduction of these two sources of variance to a point where the overall test precision is limited by that of the analysis step. Of the elements determined - B, Ba, Ca, Cs, Mo, Na, Si, Sr, and Zn - only Ca and Zn exhibited replicate imprecision significantly greater than that observed in the analysis of the leachate solutions. The imprecision in the Zn was partially attributed to the non-reproducible adsorption onto the leach vessel walls during the 28 day test period. None of the other elements exhibited this behavior.

  20. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).

  1. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  2. The variance of dispersion measure of high-redshift transient objects as a probe of ionized bubble size during reionization

    Science.gov (United States)

    Yoshiura, Shintaro; Takahashi, Keitaro

    2018-01-01

    The dispersion measure (DM) of high-redshift (z ≳ 6) transient objects such as fast radio bursts can be a powerful tool to probe the intergalactic medium during the Epoch of Reionization. In this paper, we study the variance of the DMs of objects with the same redshift as a potential probe of the size distribution of ionized bubbles. We calculate the DM variance with a simple model with randomly distributed spherical bubbles. It is found that the DM variance reflects the characteristics of the probability distribution of the bubble size. We find that the variance can be measured precisely enough to obtain the information on the typical size with a few hundred sources at a single redshift.

  3. Application of maximum entropy to neutron tunneling spectroscopy

    International Nuclear Information System (INIS)

    Mukhopadhyay, R.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs

  4. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.
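
    A minimal sketch of items (2) and (3) with simulated replicate data: the pooled within-item variance estimates the random error component, and a one-way analysis of variance separates out the between-item (systematic) component.

    ```python
    import numpy as np

    # y[i, k]: replicate k of item i (simulated replicate data)
    rng = np.random.default_rng(4)
    items, reps = 12, 4
    bias = rng.normal(0, 0.5, size=items)        # systematic component
    y = 100.0 + bias[:, None] + rng.normal(0, 0.2, size=(items, reps))

    # random error variance: pooled within-item variance of replicates
    s2_within = y.var(axis=1, ddof=1).mean()

    # one-way ANOVA: the between-item mean square estimates
    # s2_within + reps * s2_between under the random-effects model
    ms_between = reps * y.mean(axis=1).var(ddof=1)
    s2_between = max((ms_between - s2_within) / reps, 0.0)
    print(s2_within, s2_between)
    ```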

  5. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic…

  6. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.

  7. A proxy for variance in dense matching over homogeneous terrain

    Science.gov (United States)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research due to their mobility and simplicity of operation. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback remains: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added to the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation and resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimates of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

  8. Neuronal oscillations enhance stimulus discrimination by ensuring action potential precision

    DEFF Research Database (Denmark)

    Schaefer, Andreas T; Angelo, Kamilla; Spors, Hartwig

    2006-01-01

    Although oscillations in membrane potential are a prominent feature of sensory, motor, and cognitive function, their precise role in signal processing remains elusive. Here we show, using a combination of in vivo, in vitro, and theoretical approaches, that both synaptically and intrinsically generated membrane potential oscillations dramatically improve action potential (AP) precision by removing the membrane potential variance associated with jitter-accumulating trains of APs. This increased AP precision occurred irrespective of cell type and at oscillation frequencies ranging from 3 to 65 Hz. Thus, membrane potential oscillations dramatically enhance the discriminatory capabilities of individual neurons and networks of cells and provide one attractive explanation for their abundance in neurophysiological systems.

  9. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic

  10. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: (1) setting constant values empirically, (2) estimating them based on the goodness-of-fit of the measured data, and (3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  11. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a 'generalised Cochran between‐study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
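
    For orientation, the widely used (and, per the review above, long-challenged) DerSimonian and Laird moment estimator takes only a few lines; this is a sketch for study effect estimates y with within-study variances v (function name and numbers are illustrative):

        import numpy as np

        def dersimonian_laird_tau2(y, v):
            # tau^2 = max(0, (Q - (k-1)) / c), with Q Cochran's heterogeneity
            # statistic computed under fixed-effect weights w = 1/v.
            y, v = np.asarray(y, float), np.asarray(v, float)
            w = 1.0 / v
            mu = np.sum(w * y) / np.sum(w)          # fixed-effect pooled mean
            q = np.sum(w * (y - mu) ** 2)           # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            return max(0.0, (q - (len(y) - 1)) / c)

        print(dersimonian_laird_tau2([0.2, 0.5, 0.8, 0.1], [0.04, 0.09, 0.05, 0.02]))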

  12. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients (the ratio of the variance of a statistic under the actual sample design to the variance of that statistic under a simple random sample of the same size) for the "Laeken" indicators, a set of complex indicators of poverty and inequality set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tillé, 2000) is a well-established method for obtaining variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators, since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
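
    A toy example of the basic Taylor linearization step for one simple nonlinear statistic, a ratio of totals under simple random sampling (a sketch only; the Laeken indicators require the influence-function generalization described above):

        import numpy as np

        def ratio_and_linearized_variance(y, x):
            # R = sum(y)/sum(x); linearized variable z_i = (y_i - R*x_i)/xbar,
            # so Var(R) is approximated by the variance of the sample mean of z.
            y, x = np.asarray(y, float), np.asarray(x, float)
            r = y.sum() / x.sum()
            z = (y - r * x) / x.mean()
            return r, z.var(ddof=1) / len(z)

        r, var_r = ratio_and_linearized_variance([12.0, 8.0, 15.0, 9.0],
                                                 [30.0, 25.0, 40.0, 28.0])
        print(r, var_r ** 0.5)  # ratio and its approximate standard error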

  13. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    Energy Technology Data Exchange (ETDEWEB)

    Donnelly, Catherine, E-mail: C.Donnelly@hw.ac.uk [Heriot-Watt University, Department of Actuarial Mathematics and Statistics (United Kingdom)

    2011-10-15

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  14. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    International Nuclear Information System (INIS)

    Donnelly, Catherine

    2011-01-01

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  15. Reduction of variance in spectral estimates for correction of ultrasonic aberration.

    Science.gov (United States)

    Astheimer, Jeffrey P; Pilkington, Wayne C; Waag, Robert C

    2006-01-01

    A variance reduction factor is defined to describe the rate of convergence and accuracy of spectra estimated from overlapping ultrasonic scattering volumes when the scattering is from a spatially uncorrelated medium. Assuming that the individual volumes are localized by a spherically symmetric Gaussian window and that centers of the volumes are located on orbits of an icosahedral rotation group, the factor is minimized by adjusting the weight and radius of each orbit. Conditions necessary for the application of the variance reduction method, particularly for statistical estimation of aberration, are examined. The smallest possible value of the factor is found by allowing an unlimited number of centers constrained only to be within a ball rather than on icosahedral orbits. Computations using orbits formed by icosahedral vertices, face centers, and edge midpoints with a constraint radius limited to a small multiple of the Gaussian width show that a significant reduction of variance can be achieved from a small number of centers in the confined volume and that this reduction is nearly the maximum obtainable from an unlimited number of centers in the same volume.

  16. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. A leave-k-out cross-validation technique involves a considerable computational cost, yet it cannot reliably measure the fidelity of metamodels. Recently, the mean-0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean-0 validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, so it can be used to determine a stopping criterion for the sequential sampling of metamodels

  17. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  18. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
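
    For concreteness, the standard non-overlapping Allan variance of evenly sampled fractional-frequency data can be computed as below (a minimal sketch):

        import numpy as np

        def allan_variance(y, m):
            # Average the series in blocks of m samples, then take half the
            # mean squared difference of adjacent block averages.
            y = np.asarray(y, float)
            n = y.size // m
            blocks = y[:n * m].reshape(n, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(blocks) ** 2)

        y = np.random.default_rng(1).normal(size=10_000)  # white frequency noise
        print([allan_variance(y, m) for m in (1, 4, 16, 64)])  # falls off as 1/m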

  19. Change-Point and Trend Analysis on Annual Maximum Discharge in Continental United States

    Science.gov (United States)

    Serinaldi, F.; Villarini, G.; Smith, J. A.; Krajewski, W. F.

    2008-12-01

    Annual maximum discharge records from 36 stations representing different hydro-climatic regimes in the continental United States, each with at least 100 years of record, are used to investigate the presence of temporal trends and abrupt changes in mean and variance. Change-point analysis is performed by means of two non-parametric (Pettitt and CUSUM), one semi-parametric (Guan), and two parametric (Rodionov and Bayesian Change Point) tests. Two non-parametric (Mann-Kendall and Spearman) and one parametric (Pearson) tests are applied to detect the presence of temporal trends. Generalized Additive Models for Location, Scale and Shape (GAMLSS) are also used to model the streamflow data parametrically, exploiting their flexibility to account for changes and temporal trends in the parameters of distribution functions. Additionally, serial correlation is assessed in advance by computing the autocorrelation function (ACF), and the Hurst parameter is estimated using two estimators (the aggregated variance and differenced variance methods) to investigate the presence of long-range dependence. The results of this study indicate a lack of long-range dependence in the maximum streamflow series. At some stations the authors found a statistically significant change point in the mean and/or variance, while in general they detected no statistically significant temporal trends.
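
    One of the trend tests named above, the Mann-Kendall test, reduces to a few lines in its basic form (a sketch without the tie and serial-correlation corrections that a careful streamflow analysis would add):

        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x):
            x = np.asarray(x, float)
            n = x.size
            # S counts concordant minus discordant pairs
            s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0
            z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
            return s, 2.0 * (1.0 - norm.cdf(abs(z)))  # two-sided p-value

        rng = np.random.default_rng(2)
        print(mann_kendall(rng.normal(size=100) + 0.02 * np.arange(100)))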

  20. On estimation of the noise variance in high-dimensional linear models

    OpenAIRE

    Golubev, Yuri; Krymova, Ekaterina

    2017-01-01

    We consider the problem of recovering the unknown noise variance in the linear regression model. To estimate the nuisance (a vector of regression coefficients) we use a family of spectral regularisers of the maximum likelihood estimator. The noise estimation is based on the adaptive normalisation of the squared error. We derive the upper bound for the concentration of the proposed method around the ideal estimator (the case of zero nuisance).
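
    For contrast with the high-dimensional setting of this record, the classical low-dimensional (n > p) noise-variance estimate is the degrees-of-freedom-corrected residual variance (a baseline sketch, not the authors' spectral-regularization method):

        import numpy as np

        def noise_variance_ols(X, y):
            n, p = X.shape
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return resid @ resid / (n - p)  # unbiased when the model is correct

        rng = np.random.default_rng(3)
        X = rng.normal(size=(200, 5))
        y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=1.5, size=200)
        print(noise_variance_ols(X, y))  # close to 1.5**2 = 2.25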

  1. Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling

    Science.gov (United States)

    Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng

    2016-01-01

    Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath. PMID:27428974

  2. Effect of inversion time on the precision of myocardial late gadolinium enhancement quantification evaluated with synthetic inversion recovery MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Varga-Szemes, Akos; Schoepf, U.J.; De Cecco, Carlo N.; Fuller, Stephen R.; Suranyi, Pal [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Geest, Rob J. van der [Leiden University Medical Center, Department of Radiology, Leiden (Netherlands); Spottiswoode, Bruce S. [Siemens Medical Solutions, Chicago, IL (United States); Muscogiuri, Giuseppe [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Bambino Gesu Children's Hospital IRCCS, Department of Imaging, Rome (Italy); Wichmann, Julian L. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Mangold, Stefanie [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Maurovich-Horvat, Pal; Merkely, Bela [Semmelweis University, MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Litwin, Sheldon E. [Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States); Vliegenthart, Rozemarijn [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Groningen, University Medical Center Groningen, Center for Medical Imaging-North East Netherlands, Department of Radiology, Groningen (Netherlands)

    2017-08-15

    To evaluate the influence of inversion time (TI) on the precision of myocardial late gadolinium enhancement (LGE) quantification using synthetic inversion recovery (IR) imaging in patients with myocardial infarction (MI). Fifty-three patients with suspected prior MI underwent 1.5-T cardiac MRI with conventional magnitude (MagIR) and phase-sensitive IR (PSIR) LGE imaging and T1 mapping at 15 min post-contrast. T1-based synthetic MagIR and PSIR images were calculated with a TI ranging from -100 to +150 ms at 5-ms intervals relative to the optimal TI (TI_0). LGE was quantified using five-standard-deviation (5SD) and full-width-at-half-maximum (FWHM) thresholds. Measurements were compared using one-way analysis of variance. The synthetic MagIR (MagIR_sy) technique provided precise assessment of the LGE area at TIs ≥ TI_0, while precision decreased below TI_0. The LGE area showed significant differences at ≤ -25 ms compared to TI_0 using 5SD (P < 0.001) and at ≤ -65 ms using the FWHM approach (P < 0.001). LGE measurements did not show significant differences over the analysed TI range in the synthetic PSIR (PSIR_sy) images using either of the quantification methods. T1 map-based PSIR_sy images provide precise quantification of MI independent of TI at the investigated time point post-contrast. MagIR_sy-based MI quantification is precise at TI_0 and at longer TIs, while showing decreased precision at TI values below TI_0. (orig.)

  3. Ulnar variance: its relationship to ulnar foveal morphology and forearm kinematics.

    Science.gov (United States)

    Kataoka, Toshiyuki; Moritomo, Hisao; Omokawa, Shohei; Iida, Akio; Murase, Tsuyoshi; Sugamoto, Kazuomi

    2012-04-01

    It is unclear how individual differences in the anatomy of the distal ulna affect kinematics and pathology of the distal radioulnar joint. This study evaluated how ulnar variance relates to ulnar foveal morphology and the pronosupination axis of the forearm. We performed 3-dimensional computed tomography studies in vivo on 28 forearms in maximum supination and pronation to determine the anatomical center of the ulnar distal pole and the forearm pronosupination axis. We calculated the forearm pronosupination axis using a markerless bone registration technique, which determined the pronosupination center as the point where the axis emerges on the distal ulnar surface. We measured the depth of the anatomical center and classified it into 2 types: concave, with a depth of 0.8 mm or more, and flat, with a depth less than 0.8 mm. We examined whether ulnar variance correlated with foveal type and the distance between anatomical and pronosupination centers. A total of 18 cases had a concave-type fovea surrounded by the C-shaped articular facet of the distal pole, and 10 had a flat-type fovea with a flat surface without evident central depression. Ulnar variance of the flat type was 3.5 ± 1.2 mm, which was significantly greater than the 1.2 ± 1.1 mm of the concave type. Ulnar variance positively correlated with distance between the anatomical and pronosupination centers. Flat-type ulnar heads have a significantly greater ulnar variance than concave types. The pronosupination axis passes through the ulnar head more medially and farther from the anatomical center with increasing ulnar variance. This study suggests that ulnar variance is related in part to foveal morphology and pronosupination axis. This information provides a starting point for future studies investigating how foveal morphology relates to distal ulnar problems. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  4. Estimators of variance components in the augmented block design with new treatments from one or more populations

    Directory of Open Access Journals (Sweden)

    João Batista Duarte

    2001-09-01

    Full Text Available This work compares, by simulation, the estimates of variance components produced by the ANOVA (analysis of variance), ML (maximum likelihood), REML (restricted maximum likelihood) and MIVQUE(0) (minimum variance quadratic unbiased estimator) methods for the augmented block design with additional treatments (progenies) stemming from one or more origins (crosses). The results showed the relative superiority of the MIVQUE(0) estimation. The ANOVA method, although unbiased, produced the estimates with the lowest precision. The ML and REML methods produced downward-biased estimates of the error variance and upward-biased estimates of the genotypic variances, particularly the ML method and especially in the smaller experiments. Biases for the REML estimation became negligible when the progenies were derived from a single cross and the experiments were of larger size, with ratios above 0.5. This method, however, provided the worst estimates of genotypic variances when the progenies were derived from several crosses and the experiments were of small size (n < 120 observations).

  5. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed-sample-size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms are modified at a pre-planned interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  6. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from… Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional… The time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil.

  7. Precision deburring using NC and robot equipment. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Gillespie, L.K.

    1980-05-01

    Deburring precision miniature components is often time consuming and inconsistent. Although robots are available for deburring parts, they are not precise enough for precision miniature parts. Numerical control (NC) machining can provide edge-break consistencies that meet requirements such as a 76.2-μm maximum edge break (chamfer). Although NC machining has a number of technical limitations that prohibit its use on many geometries, it can be an effective approach for features that are particularly difficult to deburr.

  8. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

    Full Text Available The ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the

  9. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  10. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  11. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation, as it provides desirable asymptotic properties. In particular, it shows consistency as the sample size increases to infinity, which indicates that maximum likelihood estimation is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show that there is a negative relationship between rubber price and exchange rate for all selected countries.
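
    A minimal EM sketch of the maximum likelihood fit for a two-component Gaussian mixture (illustrative only; the paper's bivariate model of rubber prices and exchange rates is not reproduced here):

        import numpy as np
        from scipy.stats import norm

        def em_two_gaussians(x, n_iter=200):
            x = np.asarray(x, float)
            # crude initialisation from the quartiles
            w, mu1, mu2 = 0.5, np.percentile(x, 25), np.percentile(x, 75)
            s1 = s2 = x.std()
            for _ in range(n_iter):
                # E-step: responsibility of component 1 for each point
                p1 = w * norm.pdf(x, mu1, s1)
                p2 = (1.0 - w) * norm.pdf(x, mu2, s2)
                r = p1 / (p1 + p2)
                # M-step: reweighted moment updates
                w = r.mean()
                mu1, mu2 = np.sum(r * x) / r.sum(), np.sum((1 - r) * x) / (1 - r).sum()
                s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / r.sum())
                s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / (1 - r).sum())
            return w, (mu1, s1), (mu2, s2)

        rng = np.random.default_rng(4)
        x = np.concatenate([rng.normal(0, 1, 400), rng.normal(4, 0.5, 600)])
        print(em_two_gaussians(x))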

  12. How to become twice more precise in detecting neuropsychological impairments

    Directory of Open Access Journals (Sweden)

    Davide Crepaldi

    2014-04-01

    The data indicate that using a better statistical model to calculate expected scores is highly beneficial in terms of the amount of explained variance; that is, we can gauge the true effects of subject variables (e.g., age) and item variables (e.g., frequency) much more precisely, thus obtaining higher-quality corrected scores. Critically, this is reflected substantially in how (simulated and actual) patients are classified.

  13. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10^-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and dissolved inorganic carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were

  14. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, and (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, regular common-value shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  15. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each return earned compared with the mean-variance approach.

  16. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s

  17. On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.

    Science.gov (United States)

    Koyama, Shinsuke

    2015-07-01

    We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
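
    The core assumption above, a variance-to-mean power law, can be fitted directly in log-log space; a sketch, where phi and beta stand for the scale factor and exponent of the power function (names and numbers illustrative):

        import numpy as np

        def fit_power_relation(means, variances):
            # var = phi * mean**beta  <=>  log var = log phi + beta * log mean
            beta, log_phi = np.polyfit(np.log(means), np.log(variances), 1)
            return np.exp(log_phi), beta

        means = np.array([0.05, 0.1, 0.2, 0.4])      # mean ISIs (s)
        variances = 0.5 * means ** 1.8               # synthetic power-law data
        print(fit_power_relation(means, variances))  # recovers (0.5, 1.8)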

  18. Partitioning of the variance in the growth parameters of Erwinia carotovora on vegetable products.

    Science.gov (United States)

    Shorten, P R; Membré, J-M; Pleasants, A B; Kubaczka, M; Soboleva, T K

    2004-06-01

    The objective of this paper was to estimate and partition the variability in the microbial growth model parameters describing the growth of Erwinia carotovora on pasteurised and non-pasteurised vegetable juice from laboratory experiments performed under different temperature-varying conditions. We partitioned the model parameter variance and covariance components into effects due to temperature profile and replicate using a maximum likelihood technique. Temperature profile and replicate were treated as random effects and the food substrate was treated as a fixed effect. The replicate variance component was small indicating a high level of control in this experiment. Our analysis of the combined E. carotovora growth data sets used the Baranyi primary microbial growth model along with the Ratkowsky secondary growth model. The variability in the microbial growth parameters estimated from these microbial growth experiments is essential for predicting the mean and variance through time of the E. carotovora population size in a product supply chain and is the basis for microbiological risk assessment and food product shelf-life estimation. The variance partitioning made here also assists in the management of optimal product distribution networks by identifying elements of the supply chain contributing most to product variability. Copyright 2003 Elsevier B.V.

  19. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
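
    The paper does not name its two techniques here, but the generic first-order (delta method) approximation that underlies such propagation sums squared sensitivities times input variances; a minimal sketch for independent inputs (toy numbers, hypothetical top event):

        import numpy as np

        def first_order_variance(grad, variances):
            # Var[f(X)] ~ sum_i (df/dx_i)^2 * Var[X_i], with the gradients
            # evaluated at the input means; assumes independent inputs.
            grad = np.asarray(grad, float)
            return np.sum(grad ** 2 * np.asarray(variances, float))

        # Toy top event T = A + B*C, gradients evaluated at the input means
        mB, mC = 2e-3, 5e-2
        grad = [1.0, mC, mB]                 # dT/dA, dT/dB, dT/dC
        print(first_order_variance(grad, [1e-8, 4e-8, 1e-6]))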

  20. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)

  1. Dependence of US hurricane economic loss on maximum wind speed and storm size

    International Nuclear Information System (INIS)

    Zhai, Alice R; Jiang, Jonathan H

    2014-01-01

    Many empirical hurricane economic loss models consider only wind speed and neglect storm size. These models may be inadequate in accurately predicting the losses of super-sized storms, such as Hurricane Sandy in 2012. In this study, we examined the dependence of normalized US hurricane loss on both wind speed and storm size for 73 tropical cyclones that made landfall in the US from 1988 through 2012. A multi-variate least squares regression is used to construct a hurricane loss model using both wind speed and size as predictors. Using maximum wind speed and size together captures more variance of losses than using wind speed or size alone. It is found that normalized hurricane loss (L) approximately follows a power-law relation with maximum wind speed (V_max) and size (R), L = 10^c V_max^a R^b, with c determining an overall scaling factor and the exponents a and b generally ranging between 4-12 and 2-4, respectively. Both a and b tend to increase with stronger wind speed. Hurricane Sandy's size was about three times the average size of all hurricanes analyzed. Based on the bi-variate regression model that explains the most variance for hurricanes, Hurricane Sandy's loss would be approximately 20 times smaller if its size were average, with maximum wind speed unchanged. It is important to revise conventional empirical hurricane loss models that depend only on maximum wind speed to include both maximum wind speed and size as predictors. (letters)
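
    The quoted power law is linear in log space, so a fit of this form can be reproduced with ordinary least squares (a sketch with made-up numbers, not the study's data):

        import numpy as np

        def fit_loss_power_law(loss, vmax, size):
            # log10 L = c + a*log10(Vmax) + b*log10(R)
            A = np.column_stack([np.ones(len(loss)), np.log10(vmax), np.log10(size)])
            (c, a, b), *_ = np.linalg.lstsq(A, np.log10(loss), rcond=None)
            return c, a, b

        rng = np.random.default_rng(5)
        vmax = rng.uniform(30, 80, 73)               # wind speeds, illustrative units
        size = rng.uniform(50, 300, 73)              # storm sizes, illustrative units
        loss = 10 ** (-10 + 6 * np.log10(vmax) + 3 * np.log10(size)
                      + rng.normal(0, 0.3, 73))      # synthetic losses with scatter
        print(fit_loss_power_law(loss, vmax, size))  # recovers ~(-10, 6, 3)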

  2. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.

  3. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  4. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two components: the residual error variance and the treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  5. Microbiological Quality of Panicum maximum Grass Silage with Addition of Lactobacillus sp. as Starter

    Science.gov (United States)

    Sumarsih, S.; Sulistiyanto, B.; Utama, C. S.

    2018-02-01

    The aim of the research was to evaluate the microbiological quality of Panicum maximum grass silage with the addition of Lactobacillus sp. as a starter. A completely randomized design with 4 treatments and 3 replications was used in this research. The treatments were P0 (Panicum maximum grass silage without the addition of Lactobacillus sp.), P1 (Panicum maximum grass silage with 2% added Lactobacillus sp.), P2 (Panicum maximum grass silage with 4% added Lactobacillus sp.) and P3 (Panicum maximum grass silage with 6% added Lactobacillus sp.). The parameters were the microbial populations of the Panicum maximum grass silage (total lactic acid bacteria, total bacteria, total fungi, and Coliform bacteria). The data obtained were subjected to analysis of variance (ANOVA), and further comparisons were performed with Duncan's Multiple Range Test. The population of lactic acid bacteria was higher (P<0.05) in the silages with added Lactobacillus sp. The microbiological quality of Panicum maximum grass silage with the addition of Lactobacillus sp. was better than that without the addition of Lactobacillus sp.

  6. Effect of Box-Cox transformation on power of Haseman-Elston and maximum-likelihood variance components tests to detect quantitative trait Loci.

    Science.gov (United States)

    Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I

    2003-01-01

    Non-normality of the phenotypic distribution can affect the power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (HE) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error for the H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of the observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi-square distribution with 2 degrees of freedom), the rates of empirical type I error with respect to the set alpha level of 0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced the inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests
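
    The transformation itself is one call in common scientific libraries; a sketch of Box-Cox with the maximum likelihood choice of lambda, applied to a skewed toy phenotype (chi-square with 2 degrees of freedom, as in the simulations above; not the authors' pipeline):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        x = rng.chisquare(df=2, size=500) + 1e-9     # Box-Cox requires x > 0

        x_bc, lam = stats.boxcox(x)                  # lambda chosen by maximum likelihood
        print(lam, stats.skew(x), stats.skew(x_bc))  # skewness shrinks toward 0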

  7. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal weights differ across the component stocks, and that investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
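
    When short sales are allowed, the mean-variance problem with a target return has a closed-form Lagrangian solution; a sketch with toy inputs (not the FBMKLCI data, which may also need no-short-sale constraints and a numerical solver):

        import numpy as np

        def min_variance_weights(mean, cov, target):
            # Minimise w' cov w  subject to  w'mean = target and w'1 = 1.
            ones = np.ones(len(mean))
            inv = np.linalg.inv(cov)
            a, b, c = ones @ inv @ ones, ones @ inv @ mean, mean @ inv @ mean
            d = a * c - b ** 2
            lam, gam = (c - b * target) / d, (a * target - b) / d
            return inv @ (lam * ones + gam * mean)

        mean = np.array([0.002, 0.004, 0.003])       # weekly expected returns
        cov = np.array([[4e-4, 1e-4, 5e-5],
                        [1e-4, 9e-4, 2e-4],
                        [5e-5, 2e-4, 6e-4]])
        w = min_variance_weights(mean, cov, target=0.003)
        print(w, w @ mean, w @ cov @ w)              # weights, return, variance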

  8. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    Science.gov (United States)

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.

  9. The Maximum Entropy Principle and the Modern Portfolio Theory

    Directory of Open Access Journals (Sweden)

    Ailton Cassetari

    2003-12-01

    Full Text Available In this work, a capital allocation methodology based on the Principle of Maximum Entropy was developed, with Shannon's entropy used as the measure; its connections to Modern Portfolio Theory are also discussed. In particular, the methodology is tested by systematic comparison to: (1) the mean-variance (Markowitz) approach and (2) the mean-VaR approach (capital allocations based on the Value at Risk concept). In principle, such confrontations show the plausibility and effectiveness of the developed method.
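
    A hedged sketch of the principle only, not the paper's exact methodology: maximize the Shannon entropy of the allocation weights subject to full investment and a target mean return. The expected returns and target below are invented.

        # Maximum-entropy capital allocation via constrained optimization.
        import numpy as np
        from scipy.optimize import minimize

        mu = np.array([0.05, 0.08, 0.12, 0.07])   # illustrative expected returns
        target = 0.08                              # illustrative target return

        def neg_entropy(w):
            w = np.clip(w, 1e-12, None)            # guard the logarithm
            return np.sum(w * np.log(w))           # minimizing this maximizes entropy

        constraints = [
            {"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ mu - target},
        ]
        bounds = [(0.0, 1.0)] * len(mu)
        w0 = np.full(len(mu), 1.0 / len(mu))

        res = minimize(neg_entropy, w0, bounds=bounds,
                       constraints=constraints, method="SLSQP")
        print("max-entropy weights:", np.round(res.x, 4))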

  10. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
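
    The trend-fitting step of this validation can be sketched as a weighted linear regression of within-year estimates on birth year, weighted by the inverse prediction error variance; all numbers below are synthetic, and the paper's tolerance values are not reproduced.

        # Weighted regression of yearly genetic variance estimates on birth year.
        import numpy as np
        import statsmodels.api as sm

        years = np.arange(2000, 2015)
        est_var = np.array([1.00, 1.02, 0.99, 1.03, 1.01, 1.05, 1.04, 1.06,
                            1.08, 1.07, 1.10, 1.09, 1.12, 1.11, 1.14])
        pev = np.full_like(est_var, 0.02)          # prediction error variances

        X = sm.add_constant(years)
        fit = sm.WLS(est_var, X, weights=1.0 / pev).fit()
        slope, p_value = fit.params[1], fit.pvalues[1]
        print(f"trend in genetic variance: {slope:.4f} per year (p = {p_value:.3g})")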

  11. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    Science.gov (United States)

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  12. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    Science.gov (United States)

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  14. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
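
    A common screen for this kind of variance heterogeneity, shown here only as an illustration of the concept and not as the study's pipeline, is Levene's test of the trait variance across the three genotype groups of a SNP; the methylation data below are simulated.

        # Levene's test for unequal variances across genotype groups.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # methylation levels for genotypes AA, Aa, aa with unequal variances
        aa = rng.normal(0.5, 0.02, 300)
        ab = rng.normal(0.5, 0.04, 300)
        bb = rng.normal(0.5, 0.08, 129)

        stat, p = stats.levene(aa, ab, bb, center="median")
        print(f"Levene W = {stat:.2f}, p = {p:.3g}")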

  15. Precision of dual energy X-ray absorptiometry for body composition measurements in cats

    International Nuclear Information System (INIS)

    Borges, N.C.; Vasconcellos, R.S.; Canola, J.C.; Carciofi, A.C.; Pereira, G.T.; Paula, F.J.A.

    2008-01-01

    The short-term precision error of the individual subject and of the DEXA technique, including the effect of repositioning the cat on the examination table, was established. Four neutered adult cats (BW=4342 g) and three females (BW=3459 g) were submitted to five repeated scans with and without repositioning between scans. Precision was estimated from the mean of the five measurements and expressed as the individual coefficient of variation (CV). The precision error of the technique was estimated from the variance of the scan pool (n=35) and expressed as the CV of the technique (CVt). Degrees of freedom and confidence intervals were determined to avoid underestimation of the precision errors. Bone mineral content (BMC), lean mass (LM), and fat mass (FM) averages were higher (P<0.05) when animals were repositioned. The CVt was significantly higher (P<0.05) for bone mineral density (BMD), LM, and FM when the animals were repositioned. For short-term precision measurements, repositioning of the animal was important for establishing the precision of the technique. The dual energy X-ray absorptiometry method provided precise body composition measurements in adult cats. (author)

  16. Study of forecasting maximum demand of electric power

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, B.C.; Hwang, Y.J. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

    As far as past performance of power supply and demand in Korea is concerned, one of the striking phenomena is the repeated periodic surplus and shortage of power generation facilities. Precise estimation and prediction of power demand is the basic work in establishing a supply plan and carrying out the right policy, since facilities investment in the power generation industry requires a tremendous amount of capital and a long construction period. The purpose of this study is to develop a model for the inference and prediction of a more precise maximum demand against this background. The non-parametric model considered in this study recognizes that meteorological factors such as temperature and humidity do not have a simple proportional relationship with maximum power demand, but affect it through complicated nonlinear interactions. We used a non-parametric inference technique that introduces meteorological effects without imposing any prior assumption on the interaction of temperature and humidity. According to the analysis results, the non-parametric model that introduces the number of tropical nights, which captures the persistence of meteorological effects, has better predictive power than the linear model. The non-parametric model that considers both the number of tropical nights and the number of cooling days at the same time is proposed as the model for predicting maximum demand. 7 refs., 6 figs., 9 tabs.

  17. Optimal Frequency Ranges for Sub-Microsecond Precision Pulsar Timing

    Science.gov (United States)

    Lam, Michael Timothy; McLaughlin, Maura; Cordes, James; Chatterjee, Shami; Lazio, Joseph

    2018-01-01

    Precision pulsar timing requires optimization against measurement errors and astrophysical variance from the neutron stars themselves and the interstellar medium. We investigate optimization of arrival time precision as a function of radio frequency and bandwidth. We find that increases in bandwidth that reduce the contribution from receiver noise are countered by the strong chromatic dependence of interstellar effects and intrinsic pulse-profile evolution. The resulting optimal frequency range is therefore telescope and pulsar dependent. We demonstrate the results for five pulsars included in current pulsar timing arrays and determine that they are not optimally observed at current center frequencies. We also find that arrival-time precision can be improved by increases in total bandwidth. Wideband receivers centered at high frequencies can reduce required overall integration times and provide significant improvements in arrival time uncertainty by a factor of ~√2 in most cases, assuming a fixed integration time. We also discuss how timing programs can be extended to pulsars with larger dispersion measures through the use of higher-frequency observations.

  18. Comparison of the precision of three commonly used GPS models

    Directory of Open Access Journals (Sweden)

    E Chavoshi

    2016-04-01

    View software (v3.3) was used (Fig. 3). The real location of the stations and the position registered by each receiver were determined. The information recorded in Map Source software, including all longitudes and latitudes registered for each station and receiver, was transferred to Excel (2007). Table 2 shows the mean precision values recorded in each weather condition. The results obtained by equation 1 (the mean error at each station) showed that the GPS MAP 78s model had the lowest error of 91 cm, the eTrex VISTA model had a maximum error of 4.7 meters, and the MAP 60 CSx model had a mean error of about 2.64 meters. The analysis of variance for models, weather conditions, and time of day, with the interactions between factors, is shown in Table 3. The results showed a significant difference (P<0.01) in the positioning precision of the different receiver models. Investigation of the interactions showed no significant effect of the interaction between receiver model and weather condition, and the interaction between receiver model and measurement time was also not significant. These results show that weather conditions and time of day had the same effect on the positioning precision of the GPS receivers used in this research, consistent with the study of Jose and colleagues (Jose et al., 2006). The LSD mean comparison test (at the 5% level) for the accuracy and precision of the models showed a significant difference for all models (Table 4). Figures 4 and 5 respectively show the accuracy and precision of the three GPS receiver models at different times of day and in different weather conditions. Conclusions: The effect of daylight hours on positioning precision was very low; the effect of different weather conditions may reduce the accuracy of GPS positioning by a few centimeters. Overall, the results indicated that between the three factors including the models, the effects

  19. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  20. A Maximum Principle for SDEs of Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Daniel, E-mail: danieand@math.kth.se; Djehiche, Boualem, E-mail: boualem@math.kth.se [Royal Institute of Technology, Department of Mathematics (Sweden)

    2011-06-15

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  1. A Maximum Principle for SDEs of Mean-Field Type

    International Nuclear Information System (INIS)

    Andersson, Daniel; Djehiche, Boualem

    2011-01-01

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  2. Volatility and variance swaps: A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...
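
    As background to the record above, the sketch below computes the annualized realized variance that a variance swap settles on, and the resulting payoff against a strike; all figures are invented, and no claim is made about the thesis's pricing models.

        # Realized variance from daily log returns and a variance swap payoff.
        import numpy as np

        rng = np.random.default_rng(7)
        prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 252)))

        log_returns = np.diff(np.log(prices))
        realized_var = 252 * np.mean(log_returns**2)   # annualized, zero-mean convention

        strike_var = 0.025                             # hypothetical fair variance strike
        notional = 10_000                              # per variance point
        payoff = notional * (realized_var - strike_var)
        print(f"realized variance = {realized_var:.4f}, swap payoff = {payoff:.2f}")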

  3. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)

  4. High-precision performance testing of the LHC power converters

    CERN Document Server

    Bastos, M; Dreesen, P; Fernqvist, G; Fournier, O; Hudson, G

    2007-01-01

    The magnet power converters for the LHC were procured in three parts (power part, current transducers, and control electronics) to enable maximum industrial participation in the manufacturing while still guaranteeing the very high precision (a few parts in 10⁻⁶) required by the LHC. One consequence of this approach was several stages of system tests: factory reception tests, CERN reception tests, integration tests, short-circuit tests, and commissioning on the final load in the LHC tunnel. The majority of the power converters for the LHC have now been delivered and integrated into complete converters, and high-precision performance testing is well advanced. This paper presents the techniques used for high-precision testing and the results obtained.

  5. Finite Precision Logistic Map between Computational Efficiency and Accuracy with Encryption Applications

    Directory of Open Access Journals (Sweden)

    Wafaa S. Sayed

    2017-01-01

    Full Text Available Chaotic systems appear in many applications such as pseudo-random number generation, text encryption, and secure image transfer. Numerical solutions of these systems using digital software or hardware inevitably deviate from the expected analytical solutions. Chaotic orbits produced using finite precision systems do not exhibit the infinite period expected under the assumptions of infinite simulation time and precision. In this paper, digital implementation of the generalized logistic map with signed parameter is considered. We present a fixed-point hardware realization of a Pseudo-Random Number Generator using the logistic map that experiences a trade-off between computational efficiency and accuracy. Several introduced factors such as the used precision, the order of execution of the operations, parameter, and initial point values affect the properties of the finite precision map. For positive and negative parameter cases, the studied properties include bifurcation points, output range, maximum Lyapunov exponent, and period length. The performance of the finite precision logistic map is compared in the two cases. A basic stream cipher system is realized to evaluate the system performance for encryption applications for different bus sizes regarding the encryption key size, hardware requirements, maximum clock frequency, NIST and correlation, histogram, entropy, and Mean Absolute Error analyses of encrypted images.
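
    The period-shortening effect of finite precision can be reproduced in a few lines: the sketch below iterates the logistic map in Q16 fixed-point arithmetic until the orbit revisits a state. The word length and parameter value are arbitrary choices for illustration, not those of the paper.

        # Fixed-point logistic map x <- r*x*(1-x) and its forced cycle.
        FRAC_BITS = 16
        ONE = 1 << FRAC_BITS                       # 1.0 in Q16 fixed point

        def fp_mul(a, b):
            return (a * b) >> FRAC_BITS            # truncating fixed-point multiply

        def logistic_fixed(x, r):
            return fp_mul(r, fp_mul(x, ONE - x))

        r = int(3.99 * ONE)                        # chaotic regime
        x = int(0.30 * ONE)

        seen = {}
        for i in range(1 << 20):
            if x in seen:                          # finite precision forces a cycle
                print(f"entered a cycle of length {i - seen[x]} after {seen[x]} steps")
                break
            seen[x] = i
            x = logistic_fixed(x, r)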

  6. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  7. A Study on the Estimation of the Scale Factor for Precise Point Positioning

    Science.gov (United States)

    Erdogan, Bahattin; Kayacik, Orhan

    2017-04-01

    Precise Point Positioning (PPP) is one of the most important techniques in Geomatics Engineering. PPP needs only one GNSS receiver, and users have preferred it to the traditional relative positioning technique for several applications. Scientific software is used for PPP solutions, and this software may underestimate the formal errors of the estimated coordinates. The formal errors have major effects on statistical interpretation. The variance-covariance (VCV) matrix derived from GNSS processing software plays an important role in deformation analysis, and scientists sometimes need to scale the VCV matrix. In this study, 10 continuously operating reference stations were considered for 11 days in 2014. All points were analyzed with the Gipsy-OASIS v6.4 scientific software. Solutions were derived for session durations of 2, 4, 6, 8, 12 and 24 hours to obtain the repeatability of the coordinates, and analyses were carried out to estimate a scale factor for Gipsy-OASIS v6.4 PPP results. According to the first results, scale factors increase slightly with session duration. Keywords: Precise Point Positioning, Gipsy-OASIS v6.4, Variance-Covariance Matrix, Scale Factor
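
    A minimal sketch of one common way such a scale factor is formed, as the ratio of empirical scatter from repeated solutions to the mean formal variance; the numbers are invented, and this is not necessarily the estimator used in the study.

        # Variance factor for scaling a formal VCV matrix.
        import numpy as np

        formal_std = np.array([2.0, 2.1, 1.9, 2.0, 2.2])    # mm, software formal errors
        estimates = np.array([10.3, 14.1, 6.8, 11.9, 9.4])  # mm, repeated solutions

        empirical_var = np.var(estimates, ddof=1)
        mean_formal_var = np.mean(formal_std**2)

        scale = empirical_var / mean_formal_var             # multiply the VCV by this
        print(f"scale factor for the VCV matrix: {scale:.2f}")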

  8. An assessment of the precision and confidence of aquatic eddy correlation measurements

    DEFF Research Database (Denmark)

    Donis, Daphne; Holtappels, Moritz; Noss, Christian

    2015-01-01

    facility with well-constrained hydrodynamics. These observations are used to review data processing procedures and to recommend improved deployment methods, thus improving the precision, reliability, and confidence of EC measurements. Specifically, this study demonstrates that 1) the alignment of the time...... series based on maximum cross correlation improved the precision of EC flux estimations; 2) an oxygen sensor with a response time of

  9. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  10. Precision requirements for single-layer feed-forward neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the effect of limited precision analog hardware for weight adaptation to be used in on-chip learning feedforward neural networks. Easy-to-read equations and simple worst-case estimations for the maximum tolerable imprecision are presented. As an

  11. Precision gravimetric survey at the conditions of urban agglomerations

    Science.gov (United States)

    Sokolova, Tatiana; Lygin, Ivan; Fadeev, Alexander

    2014-05-01

    Large city growth and aging lead to irreversible negative changes of the underground. The study of these changes in urban areas is mainly based on shallow geophysical methods, whose extensive use is restricted by technogenic noise. Among these methods, precision gravimetry stands out for its good resistance to urban noise. The main objects of an urban gravimetric survey are zones of soil decompaction, which lead to reduced rock strength and to karst formation. Their gravity effects are very small, so their investigation requires modern high-precision equipment and special measurement methods. The Gravimetry division of Lomonosov Moscow State University has examined modern precision Scintrex CG-5 Autograv gravimeters since 2006. The main performance characteristics of over 20 precision gravimeters were examined in various operational modes. Stationary mode: long-term gravimetric measurements were carried out at a base station. They show that the records obtained differ in their high-frequency and mid-frequency (period 5-12 hours) components. The high-frequency component, determined as the standard deviation of a measurement, characterizes the sensitivity of the system to external noise and varies for different devices from 2 to 5-7 μGal. The mid-frequency component, which corresponds closely to the nonlinearity of the gravimeter drift, is partially compensated by the equipment. This factor is very important in the case of gravimetric monitoring or repeated observations, when mid-range anomalies are the target ones. For the examined gravimeters, amplitude deviations associated with this parameter may reach 10 μGal. Various transportation modes were tested: walking (the softest mode), lift (vertical overload), vehicle (horizontal overloads), boat (vertical plus horizontal overloads) and helicopter. Survey quality was compared by the variance of the measurement results and the internal convergence of the series. The variance of the measurement results (from ±2 to ±4 μGal) and its

  12. The Challenges of Precision Medicine in COPD.

    Science.gov (United States)

    Cazzola, Mario; Calzetta, Luigino; Rogliani, Paola; Matera, Maria Gabriella

    2017-08-01

    Pheno-/endotyping chronic obstructive pulmonary disease (COPD) is important because it provides patients with precise and personalized medicine. The central concept of precision medicine is to take individual variability into account when making management decisions. Precision medicine should ensure that patients get the right treatment at the right dose at the right time, with minimum harmful consequences and maximum efficacy. Ideally, we should search for genetic and molecular biomarker-based profiles. Given the clinical complexity of COPD, it seems likely that a panel of several biomarkers will be required to characterize pathogenetic factors and their course over time. The need for biomarkers to guide the clinical care of individuals with COPD and to enhance the possibilities of success in drug development is clear and urgent, but biomarker development is tremendously challenging and expensive, and translation of research efforts to date has been largely ineffective. Furthermore, the development of personalized treatments will require a much more detailed understanding of the clinical and biological heterogeneity of COPD. Therefore, we are still far from being able to apply precision medicine in COPD, and the treatable-traits and FEV1-free approaches are attempts at precision medicine in COPD that must still be considered quite unsophisticated.

  13. STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate sort, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacity of the IC and Biobed anaerobic reactors was analyzed and compared using the batch test results.

  14. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. ?? 2008, The International Biometric Society.
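
    The standard design-based estimator that treats lines as a simple random sample, the baseline the authors improve upon, can be written in a few lines; the counts and transect lengths below are invented.

        # Encounter rate variance, lines treated as a random sample:
        #   var(n/L) = K / (L^2 (K-1)) * sum_k l_k^2 (n_k/l_k - n/L)^2
        import numpy as np

        n_k = np.array([4, 0, 7, 2, 5, 1, 3])                 # detections per transect
        l_k = np.array([1.2, 0.8, 1.5, 1.0, 1.3, 0.9, 1.1])   # transect lengths (km)

        K, L, n = len(n_k), l_k.sum(), n_k.sum()
        er = n / L
        var_er = K / (L**2 * (K - 1)) * np.sum(l_k**2 * (n_k / l_k - er) ** 2)
        print(f"encounter rate = {er:.3f} per km, variance = {var_er:.5f}")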

  15. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287

  16. Study on individual stochastic model of GNSS observations for precise kinematic applications

    Science.gov (United States)

    Próchniewicz, Dominik; Szpunar, Ryszard

    2015-04-01

    The proper definition of the mathematical positioning model, comprising functional and stochastic models, is a prerequisite for optimal estimation of the unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of the observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true for precise kinematic applications, which are characterized by weakened model strength. In this case, an incorrect or simplified definition of the stochastic model limits the performance of ambiguity resolution and the accuracy of position estimation. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of the individual components of the variance-covariance matrix of the observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental test results indicate that utilizing an individual stochastic model of the observations, including elevation dependency and cross-correlation, instead of assuming that raw measurements are independent with the same variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complex calibration procedure for GNSS equipment.
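
    One widely used ingredient of such individual stochastic models is an elevation-dependent observation variance of the form sigma^2(e) = a^2 + b^2 / sin^2(e); the sketch below uses illustrative coefficients, not the paper's calibrated values.

        # Elevation-dependent GNSS observation weighting.
        import numpy as np

        def obs_variance(elev_deg, a=0.003, b=0.003):
            """Variance (illustrative units) of a GNSS observation at elevation e."""
            e = np.radians(elev_deg)
            return a**2 + (b / np.sin(e))**2

        elevations = np.array([10, 20, 30, 60, 90])
        weights = 1.0 / obs_variance(elevations)      # diagonal weight matrix entries
        print(np.round(weights / weights.max(), 3))   # normalized weights per satellite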

  17. On the mean squared error of the ridge estimator of the covariance and precision matrix

    NARCIS (Netherlands)

    van Wieringen, Wessel N.

    2017-01-01

    For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.
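
    A simple ridge-type shrinkage conveys the flavor of the estimators compared here, although it is not necessarily the exact penalized maximum likelihood estimator analyzed in the paper:

        # Ridge-regularized covariance and the induced precision estimate.
        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(25, 40))                 # n < p: sample covariance singular

        S = np.cov(X, rowvar=False)
        lam = 0.5                                     # ridge penalty (tuning parameter)
        cov_ridge = S + lam * np.eye(S.shape[0])      # regularized covariance
        prec_ridge = np.linalg.inv(cov_ridge)         # well-defined precision estimate

        print("condition number:", np.linalg.cond(cov_ridge))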

  18. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  19. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  20. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    Science.gov (United States)

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, redressed as Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. Further, in this field, it demonstrated superior accuracy and sensitivity, better than most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview on their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.
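
    The MAVAR itself is compact enough to state directly from phase (time error) data x_i with sampling interval tau0: Mod sigma_y^2(m tau0) = (1 / (2 m^2 tau^2 (N - 3m + 1))) * sum_j [ sum_{i=j}^{j+m-1} (x_{i+2m} - 2 x_{i+m} + x_i) ]^2, with tau = m tau0. The sketch below implements this standard definition and exercises it on simulated white phase noise.

        # Modified Allan variance from phase data.
        import numpy as np

        def mod_allan_var(x, m, tau0=1.0):
            """Modified Allan variance at tau = m*tau0 from phase data x."""
            x = np.asarray(x, dtype=float)
            N = x.size
            if N < 3 * m + 1:
                raise ValueError("series too short for averaging factor m")
            d2 = x[2 * m:] - 2.0 * x[m:N - m] + x[:N - 2 * m]   # second differences
            c = np.cumsum(np.concatenate(([0.0], d2)))
            s = c[m:] - c[:-m]            # moving sums of m second differences
            tau = m * tau0
            return np.mean(s**2) / (2.0 * m**2 * tau**2)

        rng = np.random.default_rng(5)
        phase = rng.normal(0.0, 1e-9, 10_000)          # white phase noise, seconds
        for m in (1, 2, 4, 8, 16):
            print(m, mod_allan_var(phase, m, tau0=1.0))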

  1. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and the variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  2. Quantitative analysis of factors affecting intraoperative precision and stability of optoelectronic and electromagnetic tracking systems

    International Nuclear Information System (INIS)

    Wagner, A.; Schicho, K.; Birkfellner, W.; Figl, M.; Seemann, R.; Koenig, F.; Kainberger, Franz; Ewers, R.

    2002-01-01

    This study aims to provide a quantitative analysis of the factors affecting the actual precision and stability of optoelectronic and electromagnetic tracking systems in computer-aided surgery under real clinical/intraoperative conditions. A 'phantom-skull' with five precisely determined reference distances between marker spheres is used for all measurements. Three optoelectronic and one electromagnetic tracking systems are included in this study. The experimental design is divided into three parts: (1) evaluation of serial- and multislice-CT (computed tomography) images of the phantom-skull for the precision of distance measurements by means of navigation software without a digitizer, (2) digitizer measurements under realistic intraoperative conditions with the factors 'OR lamp' (radiating into the field of view of the digitizer) and/or 'handling of ferromagnetic surgical instruments' (in the field of view of the digitizer) and (3) 'point measurements' to analyze the influence of changes in the angle of inclination of the stylus axis. Deviations between reference distances and measured values are statistically investigated by means of analysis of variance. Computerized measurements of distances based on serial-CT data were more precise than those based on multislice-CT data. All tracking systems included in this study proved to be considerably less precise under realistic OR conditions than the technical specifications in the manuals of the systems suggest. Changes in the angle of inclination of the stylus axis resulted in deviations of up to 3.40 mm (mean deviations for all systems ranging from 0.49 to 1.42 mm, variances ranging from 0.09 to 1.44 mm), indicating a strong need for improvements in stylus design. The electromagnetic tracking system investigated in this study was not significantly affected by small ferromagnetic surgical instruments.

  3. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  4. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  5. Maximum likelihood based multi-channel isotropic reverberation reduction for hearing aids

    DEFF Research Database (Denmark)

    Kuklasiński, Adam; Doclo, Simon; Jensen, Søren Holdt

    2014-01-01

    We propose a multi-channel Wiener filter for speech dereverberation in hearing aids. The proposed algorithm uses joint maximum likelihood estimation of the speech and late reverberation spectral variances, under the assumption that the late reverberant sound field is cylindrically isotropic....... The dereverberation performance of the algorithm is evaluated using computer simulations with realistic hearing aid microphone signals including head-related effects. The algorithm is shown to work well with signals reverberated both by synthetic and by measured room impulse responses, achieving improvements...

  6. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
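
    For reference, the conventional VIFs this paper revises can be computed with statsmodels' stock routine; the collinear toy data below are invented.

        # Conventional variance inflation factors on nearly collinear regressors.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(11)
        x1 = rng.normal(size=200)
        x2 = x1 + rng.normal(scale=0.1, size=200)     # nearly collinear with x1
        x3 = rng.normal(size=200)

        X = sm.add_constant(np.column_stack([x1, x2, x3]))
        for i, name in zip(range(1, 4), ["x1", "x2", "x3"]):
            print(name, round(variance_inflation_factor(X, i), 2))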

  7. Heritability and variance components of some morphological and agronomic in alfalfa

    International Nuclear Information System (INIS)

    Ates, E.; Tekeli, S.

    2005-01-01

    Four alfalfa cultivars were investigated using a randomized complete-block design with three replications. Variance components, variance coefficients and heritability values of some morphological characters, herbage yield, dry matter yield and seed yield were determined. Maximum main stem height (78.69 cm), main stem diameter (4.85 mm), leaflet width (0.93 cm), seeds/pod (6.57), herbage yield (75.64 t ha⁻¹), dry matter yield (20.06 t ha⁻¹) and seed yield (0.49 t ha⁻¹) were obtained from cv. Marina. Leaflet length varied from 1.65 to 2.08 cm. Raceme length measured 3.15 to 4.38 cm across the cultivars. The highest 1000-seed weights (2.42-2.49 g) were found for the Marina and Sitel cultivars. Heritability values of the various traits were: 91.0% for main stem height, 97.6% for main stem diameter, 81.8% for leaflet length, 88.8% for leaflet width, 90.4% for leaf/stem ratio, 28.3% for racemes/main stem, 99.0% for raceme length, 99.2% for seeds/pod, 88.0% for 1000-seed weight, 97.2% for herbage yield, 99.6% for dry matter yield and 95.4% for seed yield. (author)
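
    A sketch of how such heritability values are typically derived from an RCBD analysis of variance, using invented mean squares since the study's ANOVA tables are not reproduced: genotypic variance sg2 = (MSg - MSe)/r, and broad-sense heritability on an entry-mean basis H2 = sg2 / (sg2 + MSe/r).

        # Broad-sense heritability from RCBD mean squares (illustrative numbers).
        ms_genotype = 48.0   # mean square for cultivars
        ms_error = 4.0       # residual mean square
        r = 3                # replications, as in the study

        var_g = (ms_genotype - ms_error) / r
        h2 = var_g / (var_g + ms_error / r)
        print(f"genotypic variance = {var_g:.2f}, heritability = {100 * h2:.1f}%")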

  8. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  9. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  10. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  11. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
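
    The broken-line linear (BLL) ascending model can be sketched with scipy in place of SAS NLMIXED; the synthetic data below ignore blocking and heteroskedasticity, so this illustrates only the functional form and the breakpoint estimation.

        # Broken-line linear (linear-plateau) dose-response fit.
        import numpy as np
        from scipy.optimize import curve_fit

        def broken_line(x, plateau, slope, breakpoint):
            """Ascending broken line: rises until the breakpoint, then is flat."""
            return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

        trp_lys = np.array([14.7, 15.0, 15.5, 16.0, 16.5, 17.0, 18.0])  # SID Trp:Lys, %
        gf = np.array([0.58, 0.60, 0.63, 0.66, 0.68, 0.68, 0.68])       # synthetic G:F

        popt, pcov = curve_fit(broken_line, trp_lys, gf, p0=[0.68, 0.03, 16.5])
        plateau, slope, bp = popt
        print(f"breakpoint estimate: SID Trp:Lys = {bp:.2f}%")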

  12. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.

  13. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and the bias that arise when Gelbard's batch method is applied are derived analytically. The real variance estimated from this bias is then compared with the real variance calculated from replicas. When the batch method is applied to calculate the sample variance, covariance terms between tallies within a batch are eliminated from the bias. With the 2-by-2 fission matrix problem, we could calculate the real variance regardless of whether or not the batch method was applied; however, as the batch size became larger, the standard deviation of the real variance increased. When we perform a Monte Carlo estimation, we obtain a sample variance as its statistical uncertainty. However, this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised the method now called Gelbard's batch method. It has been verified that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the Monte Carlo field. However, so far, no one has given an analytical interpretation of it.
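
    A toy demonstration of the effect analyzed here: for a correlated tally stream, the naive sample variance of the mean is biased low, and averaging tallies into batches before estimating the variance reduces that bias. The AR(1) sequence below stands in for correlated Monte Carlo cycles.

        # Batch means reduce the bias of the variance-of-the-mean estimate.
        import numpy as np

        rng = np.random.default_rng(9)
        n, rho = 100_000, 0.9
        e = rng.normal(size=n)
        x = np.empty(n)
        x[0] = e[0]
        for i in range(1, n):                      # correlated tally sequence
            x[i] = rho * x[i - 1] + e[i]

        def var_of_mean(samples):
            return np.var(samples, ddof=1) / len(samples)

        print("naive estimate:", var_of_mean(x))
        for batch in (10, 100, 1000):
            means = x[: n // batch * batch].reshape(-1, batch).mean(axis=1)
            print(f"batch size {batch:>4}:", var_of_mean(means))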

  14. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  15. Evaluation of precision in measurements of uranium isotope ratio by thermionic mass spectrometry

    International Nuclear Information System (INIS)

    Moraes, N.M.P. de; Rodrigues, C.

    1977-01-01

    The parameters which affect the precision and accuracy of uranium isotopic ratio measurements by thermionic mass spectrometry are discussed. A statistically designed program for the analysis of the internal and external variances is presented. An application of these statistical methods was carried out to obtain the mass discrimination factor and its standard mean deviation, using previously published results for the 235U/238U ratio in NBS uranium samples and natural uranium [pt]

  16. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index used to study how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and to measure the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure
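
    A minimal sketch of the classical (unrevised) variance ratio idea, under assumed names and a toy model standing in for the Ishigami function (the paper's revised functions are not reproduced here): restrict the range of one input and compare the conditional output variance to the unconditional one, reusing a single Monte Carlo sample.

        import numpy as np

        def variance_ratio(x, y, lo_q, hi_q):
            # Ratio of the output variance when input x is restricted to the
            # [lo_q, hi_q] quantile range, to the unconditional output
            # variance. Reuses one Monte Carlo sample (no re-simulation).
            lo, hi = np.quantile(x, [lo_q, hi_q])
            mask = (x >= lo) & (x <= hi)
            return y[mask].var() / y.var()

        rng = np.random.default_rng(1)
        x1, x2 = rng.normal(size=100_000), rng.normal(size=100_000)
        y = np.sin(x1) + 0.1 * x2**2              # toy two-input model
        print(variance_ratio(x1, y, 0.25, 0.75))  # reduced range of x1
        print(variance_ratio(x2, y, 0.25, 0.75))  # reduced range of x2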

  17. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data, and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  18. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed, a fact that creates difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, even though the effect is less decisive. In this investigation, the genotypic variance and the genotype-environment interaction variance contributing to the total (phenotypic) variance were analysed, with the purpose of giving the breeder an idea of how selection should be made. It was found that the genotype-environment interaction variance contributes more than the genotypic variance to the total variance of seed protein content and of yield. In the analysis of the time required to reach maturity, the genotypic variance was larger than the genotype-environment interaction variance. It is therefore clear why selection for time to maturity is much easier than selection for protein or yield: protein selected in one location may differ from that selected in other locations. (author)
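
    In the standard quantitative-genetics notation (assumed here; the record itself does not display it), the phenotypic variance is partitioned as

        \sigma^2_P = \sigma^2_G + \sigma^2_{GE} + \sigma^2_E ,

    so the record's finding is that \sigma^2_{GE} > \sigma^2_G for seed protein content and yield, while \sigma^2_G > \sigma^2_{GE} for time to maturity, which is why selection progress is easier for the latter trait.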

  19. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  20. Do stochastic inhomogeneities affect dark-energy precision measurements?

    Science.gov (United States)

    Ben-Dayan, I; Gasperini, M; Marozzi, G; Nugier, F; Veneziano, G

    2013-01-11

    The effect of a stochastic background of cosmological perturbations on the luminosity-redshift relation is computed to second order through a recently proposed covariant and gauge-invariant light-cone averaging procedure. The resulting expressions are free from both ultraviolet and infrared divergences, implying that such perturbations cannot mimic a sizable fraction of dark energy. Different averages are estimated and depend on the particular function of the luminosity distance being averaged. The energy flux being minimally affected by perturbations at large z is proposed as the best choice for precision estimates of dark-energy parameters. Nonetheless, its irreducible (stochastic) variance induces statistical errors on Ω(Λ)(z) typically lying in the few-percent range.

  1. Do stochastic inhomogeneities affect dark-energy precision measurements?

    CERN Document Server

    Ben-Dayan, Ido; Marozzi, Giovanni; Nugier, Fabien; Veneziano, Gabriele

    2013-01-01

    The effect of a stochastic background of cosmological perturbations on the luminosity-redshift relation is computed to second order through a recently proposed covariant and gauge-invariant light-cone averaging procedure. The resulting expressions are free from both ultraviolet and infrared divergences, implying that such perturbations cannot mimic a sizable fraction of dark energy. Different averages are estimated and depend on the particular function of the luminosity distance being averaged. The energy flux, being minimally affected by perturbations at large z, is proposed as the best choice for precision estimates of dark-energy parameters. Nonetheless, its irreducible (stochastic) variance induces statistical errors on \\Omega_{\\Lambda}(z) typically lying in the few-percent range.

  2. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    ... Regulations Relating to Labor (Continued), OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR ... WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970, General, § 1905.5 Effect of variances. All variances... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  3. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
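
    As a hedged sketch (the Parkinson-type normalization 1/(4 ln 2), the expected squared range of a standard Brownian motion over a unit interval, is assumed; the paper's exact estimator and asymptotics are not reproduced), the statistic replaces squared returns with normalized squared high-low ranges:

        import numpy as np

        def realized_range_variance(log_highs, log_lows):
            # Sum of squared high-low ranges of the log price, each
            # normalized by 4*ln(2) (Parkinson scaling).
            ranges_sq = (np.asarray(log_highs) - np.asarray(log_lows)) ** 2
            return ranges_sq.sum() / (4.0 * np.log(2.0))

        def realized_variance(log_closes):
            # Classical realized variance: sum of squared log returns.
            return np.sum(np.diff(np.asarray(log_closes)) ** 2)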

  4. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  5. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  6. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  7. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.

  8. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  9. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

    Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches, including incorporation of the system response function in the reconstruction, have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal, following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach; an estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within a uniform background. The optimal sphere-activity-to-variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for
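
    A minimal sketch of the plain Richardson-Lucy/MLEM deconvolution step underlying such corrections (without the paper's one-step-late regularization and topology-dependent variance control; the Gaussian PSF and all names are our assumptions):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def mlem_deconvolve(image, psf_sigma, n_iter=5, eps=1e-12):
            # Richardson-Lucy / MLEM iterative deconvolution with a Gaussian
            # point-spread function; multiplicative updates keep the
            # estimate non-negative.
            estimate = np.maximum(image, eps)
            for _ in range(n_iter):
                blurred = gaussian_filter(estimate, psf_sigma)
                ratio = image / np.maximum(blurred, eps)
                # A Gaussian PSF is symmetric, so the adjoint is the same blur.
                estimate = estimate * gaussian_filter(ratio, psf_sigma)
            return estimate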

  10. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    Science.gov (United States)

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  11. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60%, 100%) were applied before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistically significant difference between the three groups. Conclusion: For arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on EDV, ESV or LVEF values. (authors)

  12. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs, which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept as measure of the gain that can be obtained from a specific cross accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion compared to selection based on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, like selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviation compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
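
    In the notation usually attached to the usefulness concept (an assumption on our part; the record does not state it), the usefulness of a cross is

        U = \mu + i \, h \, \sigma_g ,

    where \mu is the predicted progeny mean, \sigma_g the progeny genetic standard deviation of the cross, h the selection accuracy, and i the selection intensity (e.g., when the top 1% of progenies per cross are selected). Two crosses with equal \mu can therefore differ substantially in expected gain through \sigma_g, which is why predicting the progeny variance matters.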

  13. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory, which accounts for arm and eye movements with signal-dependent noise inputs, was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory, and analytical solutions of the theory are obtained. Furthermore, we propose a new version of the minimum-variance theory which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  14. Spatially tuned normalization explains attention modulation variance within neurons.

    Science.gov (United States)

    Ni, Amy M; Maunsell, John H R

    2017-09-01

    Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803-813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375-1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical

  15. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  16. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  17. Precision manufacturing

    CERN Document Server

    Dornfeld, David

    2008-01-01

    Today there is a high demand for high-precision products. The manufacturing processes are now highly sophisticated and derive from a specialized genre called precision engineering. Precision Manufacturing provides an introduction to precision engineering and manufacturing, with an emphasis on the design and performance of precision machines and machine tools, metrology, tooling elements, machine structures, sources of error, precision machining processes and precision process planning, as well as discussing the critical role precision machine design for manufacturing has had in technological developments over the last few hundred years. In addition, the influence of sustainable manufacturing requirements in precision processes is introduced. Drawing upon years of practical experience and using numerous examples and illustrative applications, David Dornfeld and Dae-Eun Lee cover precision manufacturing as it applies to: The importance of measurement and metrology in the context of Precision Manufacturing. Th...

  18. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance, and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  19. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and on the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the arrival time difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. The method has been validated in experiments, where it provided much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error was less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
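
    A minimal cross-correlation time-delay sketch under assumed geometry (sensors a distance L apart, wave speed c; the maximum likelihood frequency-domain weighting window itself is not reproduced here):

        import numpy as np

        def estimate_delay(sig_a, sig_b, fs):
            # Delay of sig_b relative to sig_a (positive if sig_b arrives
            # later), taken from the peak of the cross-correlation function.
            corr = np.correlate(np.asarray(sig_a), np.asarray(sig_b),
                                mode="full")
            return (len(sig_a) - 1 - np.argmax(corr)) / fs

        def leak_position(delay, sensor_distance, wave_speed):
            # Distance of the leak from sensor A: the wave reaches A at x/c
            # and B at (L - x)/c, so delay = (L - 2x)/c and
            # x = (L - c*delay)/2.
            return (sensor_distance - wave_speed * delay) / 2.0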

  20. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
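
    As a generic hedged illustration of one such technique (antithetic variables, applied here to a toy integrand rather than to the corrector problems of the paper):

        import numpy as np

        def mc_mean(f, n, rng):
            # Plain Monte Carlo estimate of E[f(G)], with G standard normal.
            return f(rng.normal(size=n)).mean()

        def mc_mean_antithetic(f, n, rng):
            # Antithetic-variates estimate: pair each draw G with -G.
            # Variance drops when f(G) and f(-G) are negatively correlated.
            g = rng.normal(size=n // 2)
            return 0.5 * (f(g) + f(-g)).mean()

        rng = np.random.default_rng(3)
        f = np.exp   # toy stand-in for a homogenized coefficient
        plain = [mc_mean(f, 1000, rng) for _ in range(200)]
        anti = [mc_mean_antithetic(f, 1000, rng) for _ in range(200)]
        print(np.var(plain), np.var(anti))   # compare estimator spreads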

  1. variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance ... Starting values for the (co)variance components of two-trait models were ... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal ...

  2. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  3. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing considering discrete monitoring of realized variance. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that potentially serves as a hedge for the variance swaps traded in BM&F. Additionally, the feasibility of hedging variance swaps when there is liquidity in just a few exercise prices, as is the case of FX options traded in BM&F, is tested. Portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
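
    A hedged sketch of the static replication idea from Demeterfi et al. in its common discretized (VIX-style) form, under assumed inputs (strikes, out-of-the-money option prices, forward, rate, maturity); the paper's dollar-payoff and discrete-monitoring adjustments are not reproduced:

        import numpy as np

        def variance_swap_strike(strikes, otm_prices, forward, r, T):
            # Discretized fair variance strike:
            #   (2/T) * sum(dK/K^2 * exp(r*T) * Q(K)) - (1/T)*(F/K0 - 1)^2,
            # where Q(K) are out-of-the-money option prices (puts below the
            # forward, calls above) and K0 is the first strike below F.
            strikes = np.asarray(strikes, dtype=float)
            prices = np.asarray(otm_prices, dtype=float)
            dk = np.gradient(strikes)
            k0 = strikes[strikes <= forward].max()
            strip = np.sum(dk / strikes**2 * np.exp(r * T) * prices)
            return (2.0 / T) * strip - (1.0 / T) * (forward / k0 - 1.0) ** 2

    The 1/K^2 weighting is what makes the log-contract payoff, and hence the realized variance exposure, reproducible from a strip of options; with only a few liquid strikes the strip is coarse, which is one source of the weak hedge effectiveness reported above.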

  4. Influence of Dynamic Neuromuscular Stabilization Approach on Maximum Kayak Paddling Force

    Directory of Open Access Journals (Sweden)

    Davidek Pavel

    2018-03-01

    Full Text Available The purpose of this study was to examine the effect of Dynamic Neuromuscular Stabilization (DNS) exercise on maximum paddling force (PF) and self-reported pain perception in the shoulder girdle area in flatwater kayakers. Twenty male flatwater kayakers from a local club (age = 21.9 ± 2.4 years, body height = 185.1 ± 7.9 cm, body mass = 83.9 ± 9.1 kg) were randomly assigned to the intervention or control groups. During the 6-week study, subjects from both groups performed standard off-season training. Additionally, the intervention group engaged in a DNS-based core stabilization exercise program (quadruped exercise, side sitting exercise, sitting exercise and squat exercise) after each standard training session. Using a kayak ergometer, the maximum PF stroke was measured four times during the six weeks. All subjects completed the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire before and after the 6-week interval to evaluate subjective pain perception in the shoulder girdle area. Initially, no significant differences in maximum PF and the DASH questionnaire were identified between the two groups. Repeated measures analysis of variance indicated that the experimental group improved significantly compared to the control group on maximum PF (p = .004; Cohen's d = .85), but not on the DASH questionnaire score (p = .731), during the study. Integration of DNS with traditional flatwater kayak training may significantly increase maximum PF, but may not affect pain perception to the same extent.

  5. Hardware accuracy counters for application precision and quality feedback

    Science.gov (United States)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani; Huang, Wei; Arora, Manish; Greathouse, Joseph L.

    2018-06-05

    Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.

  6. Option-4 algorithm for Florida pocket depth probe: reduction in the variance of site-specific probeable crevice depth measurements.

    Science.gov (United States)

    Breen, H J; Rogers, P; Johnson, N W; Slaney, R

    1999-08-01

    Clinical periodontal measurement is plagued by many sources of error which result in aberrant values (outliers). This study set out to compare probeable crevice depth (PCD) measurements selected by the option-4 algorithm against those recorded with a conventional double-pass method, and to quantify any reduction in site-specific PCD variances. A single clinician recorded full-mouth PCD at 1 visit in 32 subjects (mean age 45.5 years) with moderately advanced chronic adult periodontitis. PCD was recorded over 2 passes at 6 sites per tooth with the Florida Pocket Depth Probe, a 3rd-generation probe. The option-4 algorithm compared the 1st pass site-specific PCD value (PCD1) to the 2nd pass site-specific PCD value (PCD2) and, if the difference between these values was >1.00 mm, allowed the recording of a maximum of 2 further measurements (3rd and 4th pass measurements PCD3 and PCD4); 4 site-specific measurements were considered to be the maximum subject and tissue tolerance. The algorithm selected the first 2 measurements whose difference was <=1.00 mm. Relative to the conventional double-pass method, this produced a reduction in the percentage difference Y (Y = [(A - B)/A] x 100) and a 75% reduction in the median site-specific variance of PCD1/PCD2.
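
    A minimal sketch of the selection rule as described (function names and the fallback behavior are our assumptions):

        def option4_select(measure, threshold=1.0, max_passes=4):
            # Take repeated probing measurements until two agree within
            # `threshold` mm, up to `max_passes` passes; return that pair.
            # `measure` is a callable producing one measurement per call.
            values = [measure(), measure()]
            while True:
                for i in range(len(values)):
                    for j in range(i + 1, len(values)):
                        if abs(values[i] - values[j]) <= threshold:
                            return values[i], values[j]
                if len(values) >= max_passes:
                    return None   # no concordant pair within tolerance
                values.append(measure())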

  7. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (aka, the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on the independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did existent mean heterogeneity test (i.e., the Welch t test (WT), the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
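
    A hedged sketch of the baseline idea of testing both moments (closer to the separate-tests SMVT procedure mentioned above than to the authors' IMVT; Fisher's combination and all names are our assumptions):

        import numpy as np
        from scipy import stats

        def mean_variance_test(x, y):
            # Combine a Welch t test (mean heterogeneity) with a Levene
            # test (variance heterogeneity) via Fisher's method; a rough
            # screen if the two p-values are (near) independent.
            _, p_mean = stats.ttest_ind(x, y, equal_var=False)
            _, p_var = stats.levene(x, y)
            return stats.combine_pvalues([p_mean, p_var], method="fisher")

        rng = np.random.default_rng(4)
        x = rng.normal(0.0, 1.0, size=30)
        y = rng.normal(0.0, 2.0, size=30)   # same mean, larger variance
        print(mean_variance_test(x, y))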

  8. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown

  9. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
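
    A minimal sketch of the rescaling statistic described above (the relationship-matrix name is assumed):

        import numpy as np

        def dk_statistic(K):
            # Dk = average self-relationship minus the average of all
            # (self- and across-) relationships in the reference population.
            K = np.asarray(K, dtype=float)
            return np.mean(np.diag(K)) - np.mean(K)

        # Expected genetic variance referred to the reference population:
        # sigma2_ref = sigma2_hat * dk_statistic(K)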

  10. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  11. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, used as optimization scores, should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients, with the log-likelihood estimator being slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
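
    For reference, the closed-form CRPS of a Gaussian predictive distribution, the quantity a minimum-CRPS fit would average and minimize in the Gaussian case (a standard formula, not taken from the abstract):

        import numpy as np
        from scipy.stats import norm

        def crps_gaussian(mu, sigma, y):
            # CRPS of N(mu, sigma^2) against observation y:
            # sigma * ( z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ),
            # with z = (y - mu)/sigma.
            z = (y - mu) / sigma
            return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                            + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

        # Minimum-CRPS estimation minimizes crps_gaussian(...).mean() over
        # the regression coefficients behind mu and sigma; maximum
        # likelihood instead minimizes -norm.logpdf(y, mu, sigma).mean().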

  12. Variance computations for functionals of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  13. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    DEPARTMENT OF LABOR, Occupational Safety and Health Administration [Docket No. OSHA-2011-0054]: Proposed Revocation of Permanent Variances. AGENCY: Occupational Safety and Health Administration (OSHA... The Occupational Safety and Health Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the...

  14. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    Science.gov (United States)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.

  15. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    Science.gov (United States)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-07

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.

  16. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  17. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating from the upper chambers of the heart. A common indication of atrial fibrillation is the irregularity of the R-peak-to-R-peak time interval, which is shortly called the RR interval. The irregularity can be represented using the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good performance in atrial fibrillation detection.
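
    A minimal sketch of such a detector (the window length and threshold are illustrative assumptions, not values from the study):

        import numpy as np

        def detect_af(rr_intervals, window=20, threshold=0.02):
            # Flag windows of RR intervals (in seconds) whose variance
            # exceeds a threshold, as a crude atrial fibrillation indicator.
            rr = np.asarray(rr_intervals, dtype=float)
            flags = []
            for i in range(len(rr) - window + 1):
                flags.append(rr[i:i + window].var() > threshold)
            return np.array(flags)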

  18. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  19. [Precision nutrition in the era of precision medicine].

    Science.gov (United States)

    Chen, P Z; Wang, H

    2016-12-06

    Precision medicine has been increasingly incorporated into clinical practice and is enabling a new era for disease prevention and treatment. As an important constituent of precision medicine, precision nutrition has also been drawing more attention during physical examinations. The main aim of precision nutrition is to provide safe and efficient intervention methods for disease treatment and management, through fully considering the genetics, lifestyle (dietary, exercise and lifestyle choices), metabolic status, gut microbiota and physiological status (nutrient level and disease status) of individuals. Three major components should be considered in precision nutrition, including individual criteria for sufficient nutritional status, biomarker monitoring or techniques for nutrient detection and the applicable therapeutic or intervention methods. It was suggested that, in clinical practice, many inherited and chronic metabolic diseases might be prevented or managed through precision nutritional intervention. For generally healthy populations, because lifestyles, dietary factors, genetic factors and environmental exposures vary among individuals, precision nutrition is warranted to improve their physical activity and reduce disease risks. In summary, research and practice is leading toward precision nutrition becoming an integral constituent of clinical nutrition and disease prevention in the era of precision medicine.

  20. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed times, so a modified early-late loop is used for the frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without having a strong influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
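
    A hedged sketch of the underlying operation (a sliding-window variance with an early-late comparison; the window sizes and decision rule are our assumptions, not the paper's exact scheme):

        import numpy as np

        def sliding_variance(samples, window):
            # Variance of |samples| over a sliding window.
            mag = np.abs(np.asarray(samples))
            out = np.empty(len(mag) - window + 1)
            for i in range(len(out)):
                out[i] = mag[i:i + window].var()
            return out

        def frame_position(samples, window=64, delay=32):
            # Early-late style detector: compare the windowed variance at
            # two delayed positions and pick the largest difference.
            v = sliding_variance(samples, window)
            diff = v[delay:] - v[:-delay]
            return int(np.argmax(np.abs(diff)))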

  1. High Precision Testbed to Evaluate Ethernet Performance for In-Car Networks

    DEFF Research Database (Denmark)

    Revsbech, Kasper; Madsen, Tatiana Kozlova; Schiøler, Henrik

    2012-01-01

    Validating safety-critical real-time systems such as in-car networks often involves a model-based performance analysis of the network. An important issue in performing such analysis is to provide precise model parameters matching the actual equipment. One way to obtain such parameters is to derive them from measurements of the equipment. In this work we describe the design of a testbed enabling active measurements on up to 1 Gb/s copper-based Ethernet switches. Using the testbed itself, we conduct a series of tests in which the precision of the testbed is estimated. We find a maximum error...

  2. The Achilles Heel of Normal Determinations via Minimum Variance Techniques: Worldline Dependencies

    Science.gov (United States)

    Ma, Z.; Scudder, J. D.; Omidi, N.

    2002-12-01

    Time series of data collected across current layers are usually organized by divining coordinate transformations (as from minimum variance) that permit a geometrical interpretation of the data collected. Almost without exception, the current layer geometry is inferred by supposing that the current carrying layer is locally planar. Only after this geometry is ``determined'' can the various quantities predicted by theory be calculated, the precision of ``measured'' reconnection rates be assessed, and the quantitative support for or against component reconnection be evaluated. This paper defines worldline traversals across fully resolved Hall two-fluid models of reconnecting current sheets (with varying sizes of guide fields) and across a 2-D hybrid solution of a supercritical shock layer. Along each worldline, various variance techniques are used to infer current sheet normals based on the data observed along this worldline alone. We then contrast these inferred normals with those known from the overview of the fully resolved spatial pictures of the layer. Absolute errors of 20 degrees in the normal are quite commonplace, but errors of 40-90 degrees are also implied, especially for worldlines that make more and more oblique angles to the true current sheet normal. These mistaken ``inferences'' are traceable to the degree to which the data collected sample 2-D variations within these layers. While it is not surprising that these variance techniques give incorrect normals in the presence of layers that possess 2-D variations, it is illuminating that such large errors need not be signalled by the traditional error formulae previously used to estimate the error cones on normals. Frequently the absolute errors, which depend on the worldline path, can be 10 times the random error that formulae would predict based on eigenvalues of the covariance matrix. A given time series cannot be associated in any a priori way with a specific worldline
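
    A minimal sketch of the classical minimum variance analysis these worldline tests exercise (the standard recipe on magnetic field samples; array names are assumptions): the normal estimate is the eigenvector of the field covariance matrix belonging to the smallest eigenvalue.

        import numpy as np

        def minimum_variance_normal(B):
            # Minimum variance analysis on an (N, 3) array of field samples:
            # build the covariance matrix M_ij = <B_i B_j> - <B_i><B_j> and
            # return the eigenvector of its smallest eigenvalue as the
            # boundary-normal estimate, with the eigenvalues for error cones.
            M = np.cov(np.asarray(B, dtype=float), rowvar=False)
            eigvals, eigvecs = np.linalg.eigh(M)   # ascending eigenvalues
            return eigvecs[:, 0], eigvals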

  3. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.

  4. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  5. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since deriving them involves integration by parts, many students do not feel comfortable with the calculation. In this note, a technique is demonstrated for deriving mean and variance through differential…
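
    The differentiation technique presumably runs along the following lines for the exponential density f(x) = \lambda e^{-\lambda x} on [0, \infty):

        \int_0^\infty e^{-\lambda x}\,dx = \frac{1}{\lambda}
        \quad\xrightarrow{\;d/d\lambda\;}\quad
        -\int_0^\infty x\,e^{-\lambda x}\,dx = -\frac{1}{\lambda^2}
        \quad\Rightarrow\quad
        E[X] = \lambda\int_0^\infty x\,e^{-\lambda x}\,dx = \frac{1}{\lambda}.

    Differentiating once more gives \int_0^\infty x^2 e^{-\lambda x}\,dx = 2/\lambda^3, hence E[X^2] = 2/\lambda^2 and \operatorname{Var}(X) = 2/\lambda^2 - 1/\lambda^2 = 1/\lambda^2, with no integration by parts required.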

  6. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to solve increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform estimations of several quantities simultaneously. Therefore, the estimation of one of them could be made more accurate while at the same time degrading the variance of the other estimations. We propose here a method to reduce the variance for several quantities simultaneously, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme itself, the method we propose is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)
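
    The zero-variance principle is easiest to see on a toy integral: if samples are drawn from a density exactly proportional to a non-negative integrand, every weighted sample equals the answer and the estimator variance vanishes. A hedged sketch with an invented integrand, not the authors' transport problem:

        import numpy as np

        rng = np.random.default_rng(1)
        g = lambda x: 3 * x**2          # integrand on [0, 1]; exact integral = 1

        # Crude Monte Carlo: sample uniformly on [0, 1].
        x = rng.random(100_000)
        crude = g(x)

        # "Zero-variance" sampling: draw from p(x) = g(x)/I = 3x^2,
        # i.e. x = U^(1/3) by inversion; the weight g(x)/p(x) is constant.
        y = rng.random(100_000) ** (1 / 3)
        zv = g(y) / (3 * y**2)

        print(crude.mean(), crude.var())   # variance ~0.8
        print(zv.mean(), zv.var())         # variance numerically zero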

  7. Multi-approach analysis of maximum riverbed scour depth above subway tunnel

    Directory of Open Access Journals (Sweden)

    Jun Chen

    2010-12-01

    When subway tunnels are routed underneath rivers, riverbed scour may expose the structure, with potentially severe consequences. Thus, it is important to identify the maximum scour depth to ensure that the designed buried depth is adequate. There are a range of methods that may be applied to this problem, including the fluvial process analysis method, geological structure analysis method, scour formula method, scour model experiment method, and numerical simulation method. However, the application ranges and forecasting precision of these methods vary considerably. In order to quantitatively analyze the characteristics of the different methods, a subway tunnel passing underneath a river was selected, and the aforementioned five methods were used to forecast the maximum scour depth. The fluvial process analysis method was used to characterize the river regime and evolution trend, which were the baseline for examination of the scour depth of the riverbed. The results obtained from the scour model experiment and the numerical simulation methods are reliable; these two methods are suitable for application to tunnel projects passing underneath rivers. The scour formula method was less accurate than the scour model experiment method; it is suitable for application to lower risk projects such as pipelines. The results of the geological structure analysis had low precision; the method is suitable for use as a secondary method to assist other research methods. To forecast the maximum scour depth of the riverbed above the subway tunnel, a combination of methods is suggested, and the appropriate analysis method should be chosen with respect to the local conditions.

  8. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex programming problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.
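
    For reference, the single-period (Markowitz) analogue of the mean-variance problem has a closed-form Lagrange solution. The sketch below uses invented data and shows only this static special case, not the paper's CEV stochastic-control result.

        import numpy as np

        def min_variance_weights(mu, Sigma, target):
            """Classic Markowitz weights: minimise w'Sw subject to
            w'mu = target and w'1 = 1 (closed-form Lagrange solution)."""
            ones = np.ones_like(mu)
            Si = np.linalg.inv(Sigma)
            A = ones @ Si @ mu
            B = mu @ Si @ mu
            C = ones @ Si @ ones
            D = B * C - A * A
            lam = (C * target - A) / D
            gam = (B - A * target) / D
            return Si @ (lam * mu + gam * ones)

        # Invented expected returns and covariance matrix.
        mu = np.array([0.05, 0.08, 0.12])
        Sigma = np.array([[0.04, 0.01, 0.00],
                          [0.01, 0.09, 0.02],
                          [0.00, 0.02, 0.16]])
        w = min_variance_weights(mu, Sigma, target=0.09)
        print(w, w @ mu, w @ Sigma @ w)   # weights, achieved mean, variance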

  9. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  10. Multiple-objective optimization in precision laser cutting of different thermoplastics

    Science.gov (United States)

    Tamrin, K. F.; Nukman, Y.; Choudhury, I. A.; Shirley, S.

    2015-04-01

    Thermoplastics are increasingly being used in the biomedical, automotive and electronics industries due to their excellent physical and chemical properties. Because laser cutting is a localized, non-contact process, it can produce a precise cut with a small heat-affected zone (HAZ). Precision laser cutting of various materials is important in high-volume manufacturing processes to minimize operational cost, reduce errors and improve product quality. This study uses grey relational analysis to determine a single optimized set of cutting parameters for three different thermoplastics. The set of optimized processing parameters is determined based on the highest relational grade and was found at low laser power (200 W), high cutting speed (0.4 m/min) and low compressed air pressure (2.5 bar). The result matches the objective set in the present study. Analysis of variance (ANOVA) is then carried out to ascertain the relative influence of process parameters on the cutting characteristics. It was found that the laser power has the dominant effect on HAZ for all thermoplastics.

  11. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  12. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  13. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  14. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  15. Homogeneity tests for variances and mean test under heterogeneity conditions in a single way ANOVA method

    International Nuclear Information System (INIS)

    Morales P, J.R.; Avila P, P.

    1996-01-01

    If we consider the maximum permissible levels shown for the case of oysters, it is forbidden to collect oysters at the four stations of the El Chijol Channel (Veracruz, Mexico), as well as along the channel itself, because the concentrations of the metals studied exceed these limits. In this case the application of Welch tests was not necessary. For the water hyacinth the means of the treatments were unequal for Fe, Cu, Ni, and Zn. This case is more illustrative, for the conclusion has been reached through the application of Welch tests to treatments with heterogeneous variances. (Author)
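
    A hedged sketch of that workflow with made-up concentration data: test homogeneity of variances first, and fall back to Welch-type comparisons of means when the variances are heterogeneous.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        # Hypothetical metal concentrations at three stations (unequal variances).
        a = rng.normal(10.0, 1.0, 30)
        b = rng.normal(10.5, 3.0, 30)
        c = rng.normal(12.0, 5.0, 30)

        # Step 1: homogeneity of variances (Levene's test).
        stat, p = stats.levene(a, b, c)
        print(f"Levene p = {p:.4f}")

        # Step 2: if variances are heterogeneous, compare means pairwise
        # with Welch's t-test instead of assuming equal variances.
        if p < 0.05:
            for pair in [(a, b), (a, c), (b, c)]:
                t, pw = stats.ttest_ind(*pair, equal_var=False)
                print(f"Welch t = {t:.2f}, p = {pw:.4f}")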

  16. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  17. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  18. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
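
    For orientation, the baseline that this work generalises, first-order variance-based indices for independent inputs, can be estimated with a simple pick-freeze scheme. The toy model below is invented, and the paper's orthogonalisation for dependent inputs is not shown.

        import numpy as np

        def first_order_sobol(f, d, n=100_000, seed=3):
            """Pick-freeze estimate of first-order Sobol indices for a model
            with independent U(0,1) inputs."""
            rng = np.random.default_rng(seed)
            A, B = rng.random((2, n, d))
            fA, fB = f(A), f(B)
            var = fA.var()
            S = np.empty(d)
            for i in range(d):
                AB = A.copy()
                AB[:, i] = B[:, i]          # AB shares only x_i with B
                S[i] = np.cov(fB, f(AB))[0, 1] / var
            return S

        # Toy additive model: x1 dominates, x3 is inert.
        f = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + 0 * X[:, 2]
        print(first_order_sobol(f, d=3))    # roughly [0.8, 0.2, 0.0]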

  19. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems

  20. Variance as a Leading Indicator of Regime Shift in Ecosystem Services

    Directory of Open Access Journals (Sweden)

    William A. Brock

    2006-12-01

    Many environmental conflicts involve pollutants such as greenhouse gas emissions that are dispersed through space and cause losses of ecosystem services. As pollutant emissions rise in one place, a spatial cascade of declining ecosystem services can spread across a larger landscape because of the dispersion of the pollutant. This paper considers the problem of anticipating such spatial regime shifts by monitoring time series of the pollutant or associated ecosystem services. Using such data, it is possible to construct indicators that rise sharply in advance of regime shifts. Specifically, the maximum eigenvalue of the variance-covariance matrix of the multivariate time series of pollutants and ecosystem services rises prior to the regime shift. No specific knowledge of the mechanisms underlying the regime shift is needed to construct the indicator. Such leading indicators of regime shifts could provide useful signals to management agencies or to investors in ecosystem service markets.
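
    A minimal sketch of such an indicator, assuming only a generic multivariate time series: track the largest eigenvalue of the variance-covariance matrix over a moving window and watch for a sharp rise. The synthetic data are invented.

        import numpy as np

        def max_eig_indicator(X, window):
            """Leading indicator: largest eigenvalue of the variance-covariance
            matrix of a multivariate time series, over a moving window.
            X : (T, k) array of pollutant / ecosystem-service observations."""
            T = X.shape[0]
            out = np.full(T, np.nan)
            for t in range(window, T):
                C = np.cov(X[t - window:t], rowvar=False)
                out[t] = np.linalg.eigvalsh(C)[-1]   # eigvalsh sorts ascending
            return out

        # Synthetic series whose variance grows toward a shift around t = 400.
        rng = np.random.default_rng(4)
        T = 500
        scale = 1 + 4 * np.clip((np.arange(T) - 300) / 100, 0, 1)
        X = rng.normal(size=(T, 3)) * scale[:, None]
        print(np.nanmax(max_eig_indicator(X, window=50)))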

  1. A phantom study on temporal and subband Minimum Variance adaptive beamforming

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.

    2014-01-01

    This paper compares experimentally temporal and subband implementations of the Minimum Variance (MV) adaptive beamformer for medical ultrasound imaging. The performance of the two approaches is tested by comparing wire phantom measurements, obtained by the research ultrasound scanner SARUS. A 7 MHz BK8804 linear transducer was used to scan a wire phantom in which wires are separated by 10 mm. Performance is then evaluated by the lateral Full-Width-Half-Maximum (FWHM), the Peak Sidelobe Level (PSL), and the computational load. Beamformed single emission responses are also compared with those from the conventional Delay-and-Sum (DAS) beamformer. The FWHM, measured at a depth of 46.6 mm, is 0.02 mm (0.09λ) for both adaptive methods, while the corresponding values for Hanning and Boxcar weights are 0.64 and 0.44 mm respectively. Between the MV beamformers a -2 dB difference in PSL is noticed in favor...
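
    The core Minimum Variance (Capon) weight computation is compact; the sketch below shows it on invented snapshot data with diagonal loading, and omits the temporal/subband averaging that distinguishes the two implementations compared in the paper.

        import numpy as np

        def mv_weights(R, a, loading=1e-2):
            """Minimum Variance (Capon) apodization: w = R^{-1} a / (a^H R^{-1} a).
            R : (M, M) estimated spatial covariance of the element signals.
            a : (M,) steering vector; diagonal loading stabilises the inverse."""
            Rl = R + loading * np.trace(R) / len(a) * np.eye(len(a))
            Ria = np.linalg.solve(Rl, a)
            return Ria / (np.conj(a) @ Ria)

        # Toy example: 8 elements, unit steering vector (delays pre-applied).
        rng = np.random.default_rng(5)
        snap = rng.normal(size=(8, 64)) + 1j * rng.normal(size=(8, 64))
        R = snap @ snap.conj().T / 64
        w = mv_weights(R, np.ones(8, dtype=complex))
        print(w.sum())   # distortionless constraint w^H a = 1, so sum ~ 1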

  2. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic....... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios....

  3. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
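
    A minimal numerical illustration under an invented constant-volatility price path: the realized variance, the sum of squared intra-period log returns, should land near the integrated variance sigma^2.

        import numpy as np

        def realized_variance(prices):
            """RV over one period: sum of squared intraday log returns.
            As the sampling frequency M grows, RV converges to the
            quadratic variation (integrated variance under an SV model)."""
            r = np.diff(np.log(prices))
            return np.sum(r ** 2)

        # Simulated path with constant spot volatility sigma: the
        # integrated variance over the period is sigma^2.
        rng = np.random.default_rng(6)
        sigma, M = 0.2, 390
        path = np.exp(np.cumsum(sigma / np.sqrt(M) * rng.normal(size=M)))
        print(realized_variance(np.concatenate([[1.0], path])), sigma ** 2)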

  4. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real...... events and only marginally by the premium associated with normal price fluctuations....

  5. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University ' La Sapienza' , Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University ' La Sapienza' , Rome 00185 (Italy)

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory ranging from input control signals, to model outputs, and to its implications for modelling firing patterns of single neurons.

  6. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory ranging from input control signals, to model outputs, and to its implications for modelling firing patterns of single neurons.

  7. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  8. Precise measurement of velocity dependent friction in rotational motion

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Junaid; Hassan, Hafsa; Shamim, Sohaib; Mahmood, Waqas; Anwar, Muhammad Sabieh, E-mail: sabieh@lums.edu.pk [School of Science and Engineering, Lahore University of Management Sciences (LUMS), Opposite Sector U, D.H.A, Lahore 54792 (Pakistan)

    2011-09-15

    Frictional losses are experimentally determined for a uniform circular disc exhibiting rotational motion. The clockwise and anticlockwise rotations of the disc, which result when a hanger tied to a thread is released from a certain height, give rise to vertical oscillations of the hanger as the thread winds and unwinds over a pulley attached to the disc. It is thus observed how the maximum height achieved by the hanger decreases with every bounce. From these decrements, the rotational frictional losses are measured. The precision is substantially enhanced by correlating the vertical motion with the angular motion. Furthermore, the frictional torque is shown to be proportional to the angular speed. The experiment has been successfully employed in the undergraduate lab setting.

  9. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  10. Noise measurement from magnitude MRI using local estimates of variance and skewness

    International Nuclear Information System (INIS)

    Rajan, Jeny; Poot, Dirk; Juntu, Jaber; Sijbers, Jan

    2010-01-01

    In this note, we address the estimation of the noise level in magnitude magnetic resonance (MR) images in the absence of background data. Most of the methods proposed earlier exploit the Rayleigh distributed background region in MR images to estimate the noise level. These methods, however, cannot be used for images where no background information is available. In this note, we propose two different approaches for noise level estimation in the absence of the image background. The first method is based on the local estimation of the noise variance using maximum likelihood estimation and the second method is based on the local estimation of the skewness of the magnitude data distribution. Experimental results on synthetic and real MR image datasets show that the proposed estimators accurately estimate the noise level in a magnitude MR image, even without background data. (note)

  11. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  12. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.

  13. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the conditional variance.

  14. Variance heterogeneity in Saccharomyces cerevisiae expression data: trans-regulation and epistasis.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E; Li, Xidan; Carlborg, Örjan

    2013-01-01

    Here, we describe the results from the first variance heterogeneity Genome Wide Association Study (VGWAS) on yeast expression data. Using this forward genetics approach, we show that the genetic regulation of gene-expression in the budding yeast, Saccharomyces cerevisiae, includes mechanisms that can lead to variance heterogeneity in the expression between genotypes. Additionally, we performed a mean effect association study (GWAS). Comparing the mean and variance heterogeneity analyses, we find that the mean expression level is under genetic regulation from a larger absolute number of loci but that a higher proportion of the variance controlling loci were trans-regulated. Both mean and variance regulating loci cluster in regulatory hotspots that affect a large number of phenotypes; a single variance-controlling locus, mapping close to DIA2, was found to be involved in more than 10% of the significant associations. It has been suggested in the literature that variance-heterogeneity between the genotypes might be due to genetic interactions. We therefore screened the multi-locus genotype-phenotype maps for several traits where multiple associations were found, for indications of epistasis. Several examples of two and three locus genetic interactions were found to involve variance-controlling loci, with reports from the literature corroborating the functional connections between the loci. By using a new analytical approach to re-analyze a powerful existing dataset, we are thus able to both provide novel insights to the genetic mechanisms involved in the regulation of gene-expression in budding yeast and experimentally validate epistasis as an important mechanism underlying genetic variance-heterogeneity between genotypes.

  15. Modified unscented Kalman filter using modified filter gain and variance scale factor for highly maneuvering target tracking

    Institute of Scientific and Technical Information of China (English)

    Changyun Liu; Penglang Shui; Gang Wei; Song Li

    2014-01-01

    To improve the low tracking precision caused by a lagged filter gain or imprecise state noise when the target maneuvers strongly, a modified unscented Kalman filter algorithm based on an improved filter gain and an adaptive scale factor for the state noise is presented. At every filter step, the estimated scale factor is used to update the state noise covariance Qk, and the improved filter gain is obtained in the filter process of the unscented Kalman filter (UKF) via the predicted variance Pk|k-1, in a manner similar to the standard Kalman filter. Simulation results show that the proposed algorithm provides better accuracy and a better ability to adapt to a highly maneuvering target than the standard UKF.

  16. Prediction of Pressing Quality for Press-Fit Assembly Based on Press-Fit Curve and Maximum Press-Mounting Force

    Directory of Open Access Journals (Sweden)

    Bo You

    2015-01-01

    In order to predict the pressing quality of precision press-fit assembly, press-fit curves and the maximum press-mounting force of press-fit assemblies were investigated by finite element analysis (FEA). The analysis was based on a 3D Solidworks model using the real dimensions of the microparts and the subsequent FEA model that was built using ANSYS Workbench. The press-fit process could thus be simulated on the basis of static structure analysis. To verify the FEA results, experiments were carried out using a press-mounting apparatus. The results show that the press-fit curves obtained by FEA agree closely with the curves obtained using the experimental method. In addition, the maximum press-mounting force calculated by FEA agrees with that obtained by the experimental method, with the maximum deviation being 4.6%, a value that can be tolerated. The comparison shows that the press-fit curve and maximum press-mounting force calculated by FEA can be used for predicting the pressing quality during precision press-fit assembly.

  17. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  18. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  19. Development of precision elliptic neutron-focusing supermirror.

    Science.gov (United States)

    Hosobata, Takuya; Yamada, Norifumi L; Hino, Masahiro; Yamagata, Yutaka; Kawai, Toshihide; Yoshinaga, Hisao; Hori, Koichiro; Takeda, Masahiro; Takeda, Shin; Morita, Shin-Ya

    2017-08-21

    This paper details methods for the precision design and fabrication of neutron-focusing supermirrors, based on electroless nickel plating. We fabricated an elliptic mirror for neutron reflectometry, our second mirror, improved over the first. The mirror is a 550-millimeter-long segmented mirror assembled using kinematic couplings, with each segment figured by diamond cutting, polished using colloidal silica, and supermirror-coated through ion-beam sputtering. The mirror was evaluated with neutron beams, and the reflectivity was found to be 68-90% at the critical angle. The focusing width was 0.17 mm at the full width at half maximum.

  20. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  1. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    In this paper a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of uncertain parameters more accurately, for it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of a system's probability of failure under vague environment conditions. Two numerical examples are investigated to demonstrate the proposed method.
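
    The moment-constrained core of the maximum entropy step can be sketched as a small optimisation: maximise entropy over a finite support subject to normalisation and a known mean. The support and mean below are invented, and the paper's fuzzy/Bayesian machinery is not reproduced.

        import numpy as np
        from scipy.optimize import minimize

        def maxent_pmf(support, mean):
            """Maximum-entropy pmf on a finite support subject to a fixed
            mean (and normalisation): the moment-constrained core idea."""
            n = len(support)
            neg_entropy = lambda p: np.sum(p * np.log(p + 1e-12))
            cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1},
                    {'type': 'eq', 'fun': lambda p: p @ support - mean})
            res = minimize(neg_entropy, np.full(n, 1 / n),
                           bounds=[(1e-9, 1)] * n, constraints=cons)
            return res.x

        # Hypothetical degradation levels {0..5} with known mean level 2.0.
        p = maxent_pmf(np.arange(6), 2.0)
        print(p.round(4), p @ np.arange(6))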

  2. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    Science.gov (United States)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, are described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work to extend the IPJR paradigm to building 3D structures at micron precision are also summarized.

  3. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  4. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  5. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  6. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maî tre, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  7. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study on temperature, current, aging dependencies of maximum available energy. • Study on the various factors dependencies of relationships between SOE and SOC. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Estimate maximum available energy by means of moving-window energy-integral. • The robustness and feasibility of the proposed approaches are systematic evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn2O4 battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate battery maximum available energy. Experimental results show that the proposed approaches can estimate battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operation conditions and cell aging levels is systematically evaluated.
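
    A hedged sketch of a moving-window energy integral on an invented discharge log: the energy delivered over a window is the time integral of the instantaneous power v*i. All signal values below are hypothetical.

        import numpy as np

        def window_energy(v, i, dt, start, length):
            """Moving-window energy integral: E = sum(v * i) * dt over the
            window [start, start + length); Wh if dt is in hours."""
            sl = slice(start, start + length)
            return float(np.sum(v[sl] * i[sl]) * dt)

        # Hypothetical 1 Hz discharge log: voltage sags from 4.1 V to 3.2 V
        # at a constant 2 A draw; dt = 1/3600 h.
        t = np.arange(3600)
        v = np.linspace(4.1, 3.2, t.size)
        i = np.full(t.size, 2.0)
        print(window_energy(v, i, dt=1/3600, start=0, length=t.size), "Wh")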

  8. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing or importance sampling calculations; and it is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)

  9. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.

  10. Exploring variance in residential electricity consumption: Household features and building properties

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Odlare, Monica; Wallin, Fredrik; Wester, Lars

    2012-01-01

    Highlights: ► Statistical analyses of variance are of considerable value in identifying key indicators for policy update. ► Variance in residential electricity use is partly explained by household features. ► Variance in residential electricity use is partly explained by building properties. ► Household behavior has a profound impact on individual electricity use. -- Abstract: Improved means of controlling electricity consumption play an important part in boosting energy efficiency in the Swedish power market. Developing policy instruments to that end requires more in-depth statistics on electricity use in the residential sector, among other things. The aim of the study has accordingly been to assess the extent of variance in annual electricity consumption in single-family homes as well as to estimate the impact of household features and building properties in this respect, using independent samples t-tests and one-way as well as univariate independent samples analyses of variance. Statistically significant variances associated with geographic area, heating system, number of family members, family composition, year of construction, electric water heater and electric underfloor heating have been established. The overall result of the analyses is nevertheless that variance in residential electricity consumption cannot be fully explained by independent variables related to household and building characteristics alone. As for the methodological approach, the results further suggest that methods for statistical analysis of variance are of considerable value in identifying key indicators for policy update and development.

  11. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As a task of the engineering design activity of the international thermonuclear fusion experimental reactor (ITER), a shielding experiment on 316 type stainless steel (SS316) and on the compound system of SS316 and water has been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. However, in these analyses, enormous working time and computing time were required for determining the Weight Window parameters, and limitations and complications were encountered when variance reduction by the Weight Window method of the MCNP code was carried out. For the purpose of avoiding this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance assignment: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction can be achieved. (K.I.)

  12. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic.

  13. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  14. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the posteroanterior wrist radiographs of 95 patients who were diagnosed with a scaphoid fracture. All fractures with displacement of less than 1 mm treated conservatively were included, and ulnar variance was measured on standard posteroanterior wrist radiographs in all patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (±0.85) mm (CI -2.25 to 0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (±1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance between the groups with ulnar variance less than -1 mm and greater than -1 mm. Patients with ulnar variance less than -1 mm had an OR of 4.58 (CI 1.51 to 13.89, p<.007) and thus a greater risk of developing scaphoid nonunion. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.
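
    For readers reproducing the arithmetic: the odds ratio for a 2x2 exposure-outcome table is (a*d)/(b*c), with a Wald confidence interval on the log scale. The cell counts below are hypothetical; the abstract reports only the margins of 18 nonunions and 77 unions, and the counts are chosen to land near the reported OR.

        import numpy as np

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI:
            exp(log OR +/- z * sqrt(1/a + 1/b + 1/c + 1/d))."""
            or_ = (a * d) / (b * c)
            se = np.sqrt(1/a + 1/b + 1/c + 1/d)
            lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
            return or_, lo, hi

        # Hypothetical split of the 18 nonunions / 77 unions by ulnar
        # variance < -1 mm (rows: exposed/unexposed; cols: nonunion/union).
        print(odds_ratio_ci(a=11, b=20, c=7, d=57))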

  15. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques for studying the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to a correct or incorrect answer to each question. The second set of data concerns the delay in obtaining the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
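
    The decomposition rests on conditional means in the sense of the law of total variance, Var(Y) = Var(E[Y|X]) + E[Var(Y|X)]. A minimal sketch with an invented dyadic character, in the spirit of the entrance-exam example:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(7)
        # Invented scores grouped by one dyadic character
        # (question 1 answered correctly or not).
        df = pd.DataFrame({"q1": rng.integers(0, 2, 500)})
        df["score"] = 15 + 5 * df["q1"] + rng.normal(0, 3, 500)

        cond_mean = df.groupby("q1")["score"].transform("mean")
        total = df["score"].var(ddof=0)                    # Var(Y)
        between = cond_mean.var(ddof=0)                    # Var(E[Y|X])
        within = ((df["score"] - cond_mean) ** 2).mean()   # E[Var(Y|X)]
        print(total, between + within)   # equal up to floating-point rounding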

  16. Bias correction for estimated QTL effects using the penalized maximum likelihood method.

    Science.gov (United States)

    Zhang, J; Yue, C; Zhang, Y-M

    2012-04-01

    A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.

  17. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 (2010-10-01). Content of request for variance. Section 456.522, Public Health, CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  18. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  19. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    Science.gov (United States)

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in the residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance of yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of two statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, the accuracy of EBV for residual variance, and cross-validation to assess the predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance of YW in Nellore beef cattle and the opportunity for selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance, associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM on an untransformed data set), limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates. A Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting …

  20. High Precision Edge Detection Algorithm for Mechanical Parts

    Science.gov (United States)

    Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui

    2018-04-01

    High-precision, high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the Gaussian integral model of the step edge along the normal section line of the backlight image is constructed, combining the point spread function with the single-step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted in accordance with the Gaussian integral model. A precise subpixel edge location is thus determined by searching for the mean point. Finally, a gear tooth was measured with an M&M3525 gear measurement center to verify the proposed algorithm. Theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and that subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the gear measurement center result, indicating that the method is reliable enough for high-precision measurement.
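
    A minimal sketch of the central step — fitting a Gaussian-integral (erf) step-edge model to gray values sampled along the edge normal to recover a subpixel location. The synthetic profile and parameter names below are our assumptions, not the published implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, a, b, x0, sigma):
    """Ideal step edge convolved with a Gaussian PSF (a Gaussian integral)."""
    return a + 0.5 * b * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

x = np.arange(-5.0, 6.0)                       # pixel positions on the normal
clean = edge_model(x, 30.0, 200.0, 0.37, 1.2)  # true subpixel edge at 0.37
gray = clean + np.random.default_rng(1).normal(0.0, 2.0, x.size)  # noise

p0 = [gray.min(), np.ptp(gray), 0.0, 1.0]      # rough starting values
popt, _ = curve_fit(edge_model, x, gray, p0=p0)
print("estimated subpixel edge location:", popt[2])   # close to 0.37
```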

  1. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify whether there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
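
    A hedged sketch of the propagation such a spreadsheet automates: each balance term is a bulk mass times an SNM concentration, and with independent relative errors, first-order propagation gives var(term) = (m·c)²(rsd_m² + rsd_c²). All numbers and the four-term balance below are illustrative assumptions, not values from the report.

```python
import math

def term_variance(mass, conc, rsd_mass, rsd_conc, n_measurements):
    """Variance contribution of n like measurements of one balance term."""
    return n_measurements * (mass * conc) ** 2 * (rsd_mass ** 2 + rsd_conc ** 2)

contributions = [
    term_variance(100.0, 0.05, 0.01, 0.02, 12),   # receipts
    term_variance(95.0, 0.05, 0.01, 0.02, 12),    # shipments
    term_variance(400.0, 0.05, 0.005, 0.015, 1),  # beginning inventory
    term_variance(400.0, 0.05, 0.005, 0.015, 1),  # ending inventory
]
sigma_mb = math.sqrt(sum(contributions))          # sd of the materials balance
print(f"materials-balance standard deviation: {sigma_mb:.3f} kg SNM")
```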

  2. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait.
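
    In the same spirit as LRT(MV) — though not the authors' code — a joint mean-variance likelihood ratio test for a normal trait can be sketched by letting both the mean and the log-variance depend linearly on genotype and comparing maximized likelihoods. The simulated data and coding below are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(2)
g = rng.integers(0, 3, 500)                        # genotype coded 0/1/2
y = 1.0 + 0.2 * g + rng.normal(0.0, np.exp(0.15 * g))  # mean and variance effects

def negloglik(theta, mean_dep, var_dep):
    b0, b1, c0, c1 = theta
    mu = b0 + (b1 * g if mean_dep else 0.0)
    logvar = c0 + (c1 * g if var_dep else 0.0)
    return 0.5 * np.sum(logvar + (y - mu) ** 2 / np.exp(logvar))

def min_nll(mean_dep, var_dep):
    start = [y.mean(), 0.0, np.log(y.var()), 0.0]
    return minimize(negloglik, start, args=(mean_dep, var_dep),
                    method="Nelder-Mead").fun

lrt_mv = 2.0 * (min_nll(False, False) - min_nll(True, True))  # joint test, 2 df
print("LRT(MV) =", lrt_mv, " p =", chi2.sf(lrt_mv, df=2))
```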

  3. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation; the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined

  4. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.
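
    For a constant failure rate this sensitivity is easy to reproduce with the conjugate gamma-Poisson pair (our illustration, not the paper's simulation setup): the posterior mean has a closed form, so the pull exerted by the prior mean and prior variance is explicit.

```python
def bayes_rate(n_failures, total_time, prior_mean, prior_var):
    """Posterior mean failure rate under a Gamma(shape, rate) prior."""
    rate = prior_mean / prior_var          # gamma rate parameter b
    shape = prior_mean * rate              # gamma shape parameter a
    return (shape + n_failures) / (rate + total_time)

n, T = 4, 2000.0                           # failures observed in device-hours
print("MLE:", n / T)
for prior_mean in (0.001, 0.002, 0.01):    # prior means bracketing the MLE
    for prior_var in (1e-7, 1e-5):         # tight versus diffuse prior
        est = bayes_rate(n, T, prior_mean, prior_var)
        print(f"prior mean={prior_mean}, var={prior_var}: {est:.5f}")
```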

  5. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  6. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra; it is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves the full-width-half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. At a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
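
    A toy numpy contrast of DAS and DMAS on already-delayed channel data — a sketch of the two building blocks named above. The MV weighting of MVB-DMAS is omitted, and the synthetic signals are our assumptions.

```python
import numpy as np

def das(s):
    """Delay-and-sum over channels; s has shape (channels, samples)."""
    return s.sum(axis=0)

def dmas(s):
    """Delay-multiply-and-sum: signed square roots of pairwise products."""
    m, n = s.shape
    out = np.zeros(n)
    for i in range(m - 1):
        prod = s[i] * s[i + 1:]                        # all pairs (i, j > i)
        out += np.sum(np.sign(prod) * np.sqrt(np.abs(prod)), axis=0)
    return out

rng = np.random.default_rng(3)
echo = np.sin(2 * np.pi * np.linspace(0.0, 4.0, 256))   # coherent echo
channels = echo + 0.5 * rng.normal(size=(16, 256))      # 16 noisy channels
print(np.abs(das(channels)).max(), np.abs(dmas(channels)).max())
```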

  7. Variance components for body weight in Japanese quails (Coturnix japonica

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.

  8. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  9. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce, and so in the past practitioners were advised to drop most of the data and sample sparsely instead.
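
    The bias is easy to reproduce (a toy sketch with i.i.d. noise, not the paper's estimators): tick-level squared returns are swamped by the noise variance, while sparse sampling trades bias for statistical efficiency.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 23400                                    # one trading day of seconds
true_iv = 1e-4                               # integrated variance of the day
efficient = np.cumsum(rng.normal(0.0, np.sqrt(true_iv / n), n))
observed = efficient + rng.normal(0.0, 5e-4, n)   # bid-ask style noise

def realized_variance(prices, step):
    returns = np.diff(prices[::step])
    return np.sum(returns ** 2)

print("RV every tick :", realized_variance(observed, 1))    # far above 1e-4
print("RV every 5 min:", realized_variance(observed, 300))  # much closer
print("true IV       :", true_iv)
```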

  10. Fast-PPP assessment in European and equatorial region near the solar cycle maximum

    Science.gov (United States)

    Rovira-Garcia, Adria; Juan, José Miguel; Sanz, Jaume

    2014-05-01

    The Fast Precise Point Positioning (Fast-PPP) is a technique to provide quick high-accuracy navigation with ambiguity-fixing capability, thanks to accurate modelling of the ionosphere. Indeed, once the availability of real-time precise satellite orbits and clocks is granted to users, the next challenge is the accuracy of real-time ionospheric corrections. Several steps have been taken by gAGE/UPC to develop such a global system for precise navigation. First, Wide-Area Real-Time Kinematics (WARTK) feasibility studies enabled precise relative continental navigation using a few tens of reference stations. Later, multi-frequency and multi-constellation assessments in different ionospheric scenarios, including maximum solar-cycle conditions, focussed on user-domain performance. Recently, a mature evolution of the technique has adopted a dual-service scheme: a global Precise Point Positioning (PPP) service, together with a continental enhancement to shorten convergence. An end-to-end performance assessment of the Fast-PPP technique is presented in this work, focussed on Europe and on the equatorial region of South East Asia (SEA), both near the solar-cycle maximum. The accuracy of the Central Processing Facility (CPF) real-time precise satellite orbits and clocks is, respectively, 4 centimetres and 0.2 nanoseconds, in line with the accuracy of the International GNSS Service (IGS) analysis centres. This global PPP service is enhanced by Fast-PPP with the capability of global undifferenced ambiguity fixing, thanks to the determination of the fractional part of the ambiguities. The core of Fast-PPP is the capability to compute real-time ionospheric determinations with accuracies at the level of, or better than, 1 Total Electron Content Unit (TECU), improving on the widely accepted Global Ionospheric Maps (GIM), with declared accuracies of 2-8 TECU. This large improvement in modelling accuracy is achieved thanks to a two-layer description of the ionosphere combined with

  11. Experimental performance assessment of the sub-band minimum variance beamformer for ultrasound imaging

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom

    2017-01-01

    Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides the broadband ultrasound signals into sub-bands (MVS) to conform with the narrow-band assumption of the original MV theory. This approach is investigated here using experimental Synthetic Aperture (SA) data from wire and cyst phantoms. A 7 MHz linear array transducer is used with the SARUS experimental ultrasound scanner for the data acquisition. The lateral resolution and the contrast obtained are evaluated and compared with those from the conventional Delay-and-Sum (DAS) beamformer and the MV temporal implementation (MVT). From the wire phantom the Full-Width-at-Half-Maximum (FWHM) measured at a depth

  12. Precision of dosimetry-related measurements obtained on current multidetector computed tomography scanners

    International Nuclear Information System (INIS)

    Mathieu, Kelsey B.; McNitt-Gray, Michael F.; Zhang, Di; Kim, Hyun J.; Cody, Dianna D.

    2010-01-01

    Purpose: Computed tomography (CT) intrascanner and interscanner variability has not been well characterized. Thus, the purpose of this study was to examine the within-run, between-run, and between-scanner precision of physical dosimetry-related measurements collected over the course of 1 yr on three different makes and models of multidetector row CT (MDCT) scanners. Methods: Physical measurements were collected using nine CT scanners (three scanners each of GE VCT, GE LightSpeed 16, and Siemens Sensation 64 CT). Measurements were made using various combinations of technical factors, including kVp, type of bowtie filter, and x-ray beam collimation, for several dosimetry-related quantities, including (a) free-in-air CT dose index (CTDI100,air); (b) calculated half-value layers and quarter-value layers; and (c) weighted CT dose index (CTDIw) calculated from exposure measurements collected in both a 16 and 32 cm diameter CTDI phantom. Data collection was repeated at several different time intervals, ranging from seconds (for CTDI100,air values) to weekly for 3 weeks and then quarterly or triannually for 1 yr. Precision of the data was quantified by the percent coefficient of variation (%CV). Results: The maximum relative precision error (maximum %CV value) across all dosimetry metrics, time periods, and scanners included in this study was 4.33%. The median observed %CV values for CTDI100,air ranged from 0.05% to 0.19% over several seconds, 0.12%-0.52% over 1 week, and 0.58%-2.31% over 3-4 months. For CTDIw for a 16 and 32 cm CTDI phantom, respectively, the range of median %CVs was 0.38%-1.14% and 0.62%-1.23% in data gathered weekly for 3 weeks and 1.32%-2.79% and 0.84%-2.47% in data gathered quarterly or triannually for 1 yr. Conclusions: From a dosimetry perspective, the MDCT scanners tested in this study demonstrated a high degree of within-run, between-run, and between-scanner precision (with relative precision errors typically well under 5%).
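
    The precision metric used throughout, as a two-line sketch (the repeated readings below are hypothetical, not from the study):

```python
import numpy as np

def percent_cv(values):
    """Percent coefficient of variation: 100 * sample sd / mean."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

ctdi_air = [21.4, 21.5, 21.4, 21.6, 21.5]   # hypothetical repeated CTDI values
print(f"%CV = {percent_cv(ctdi_air):.2f}")
```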

  13. Precision Oncology: Between Vaguely Right and Precisely Wrong.

    Science.gov (United States)

    Brock, Amy; Huang, Sui

    2017-12-01

    Precision Oncology seeks to identify and target the mutation that drives a tumor. Despite its straightforward rationale, concerns about its effectiveness are mounting. What is the biological explanation for the "imprecision?" First, Precision Oncology relies on indiscriminate sequencing of genomes in biopsies that barely represent the heterogeneous mix of tumor cells. Second, findings that defy the orthodoxy of oncogenic "driver mutations" are now accumulating: the ubiquitous presence of oncogenic mutations in silent premalignancies or the dynamic switching without mutations between various cell phenotypes that promote progression. Most troublesome is the observation that cancer cells that survive treatment still will have suffered cytotoxic stress and thereby enter a stem cell-like state, the seeds for recurrence. The benefit of "precision targeting" of mutations is inherently limited by this counterproductive effect. These findings confirm that there is no precise linear causal relationship between tumor genotype and phenotype, a reminder of logician Carveth Read's caution that being vaguely right may be preferable to being precisely wrong. An open-minded embrace of the latest inconvenient findings indicating nongenetic and "imprecise" phenotype dynamics of tumors as summarized in this review will be paramount if Precision Oncology is ultimately to lead to clinical benefits. Cancer Res; 77(23); 6473-9. ©2017 AACR.

  14. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
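
    The logic is easiest to see in the simpler species-richness case (the paper derives the analogous, more involved, formulae for PD): the exact expected richness in a subsample of size m follows from hypergeometric inclusion probabilities, and Monte Carlo subsampling converges to it. The counts below are synthetic.

```python
import numpy as np
from math import comb

counts = np.array([50, 20, 10, 5, 1, 1])    # individuals per species
N, m = counts.sum(), 20                      # total sample, rarefied depth

# exact expected richness: a species is counted unless all m draws miss it
exact = sum(1 - comb(N - c, m) / comb(N, m) for c in counts)

rng = np.random.default_rng(5)
pool = np.repeat(np.arange(counts.size), counts)
mc = np.mean([np.unique(rng.choice(pool, m, replace=False)).size
              for _ in range(20000)])
print(exact, mc)                             # the two estimates agree closely
```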

  15. Using variances to comply with resource conservation and recovery act treatment standards

    International Nuclear Information System (INIS)

    Ranek, N.L.

    2002-01-01

    When a waste generated, treated, or disposed of at a site in the United States is classified as hazardous under the Resource Conservation and Recovery Act and is destined for land disposal, the waste manager responsible for that site must select an approach to comply with land disposal restrictions (LDR) treatment standards. This paper focuses on the approach of obtaining a variance from existing, applicable LDR treatment standards. It describes the types of available variances, which include (1) determination of equivalent treatment (DET); (2) treatability variance; and (3) treatment variance for contaminated soil. The process for obtaining each type of variance is also described. Data are presented showing that historically the U.S. Environmental Protection Agency (EPA) processed DET petitions within one year of their date of submission. However, a 1999 EPA policy change added public participation to the DET petition review, which may lengthen processing time in the future. Regarding site-specific treatability variances, data are presented showing an EPA processing time of between 10 and 16 months. Only one generically applicable treatability variance has been granted, which took 30 months to process. No treatment variances for contaminated soil, which were added to the federal LDR program in 1998, are identified as having been granted.

  16. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
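
    A quick Monte Carlo illustration of the phenomenon (our sketch, not the paper's analysis): for Pareto data with tail index alpha the true Gini is 1/(2·alpha − 1), and for alpha in (1, 2) the naive sample estimator sits systematically below it.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, n, reps = 1.5, 1000, 500
true_gini = 1.0 / (2.0 * alpha - 1.0)        # 0.5 for alpha = 1.5

def sample_gini(x):
    x = np.sort(x)
    i = np.arange(1, x.size + 1)
    return np.sum((2 * i - x.size - 1) * x) / (x.size ** 2 * x.mean())

estimates = [sample_gini(rng.pareto(alpha, n) + 1.0) for _ in range(reps)]
print("true Gini:", true_gini, " mean estimate:", np.mean(estimates))
```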

  17. Improving Precision and Reducing Runtime of Microscopic Traffic Simulators through Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Khewal Bhupendra Kesur

    2013-01-01

    Full Text Available This paper examines the application of Latin Hypercube Sampling (LHS) and Antithetic Variables (AVs) to reduce the variance of estimated performance measures from microscopic traffic simulators. LHS and AV allow for a more representative coverage of input probability distributions through stratification, reducing the standard error of simulation outputs. Two methods of implementation are examined, one where stratification is applied to headways and routing decisions of individual vehicles and another where vehicle counts and entry times are more evenly sampled. The proposed methods have wider applicability in general queuing systems. LHS is found to outperform AV, and reductions of up to 71% in the standard error of estimates of traffic network performance relative to independent sampling are obtained. LHS allows for a reduction in the execution time of computationally expensive microscopic traffic simulators as fewer simulations are required to achieve a fixed level of precision, with reductions of up to 84% in computing time noted on the test cases considered. The benefits of LHS are amplified for more congested networks and as the required level of precision increases.
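
    A self-contained sketch of the stratification effect on a toy "simulator" with one uniform input (the coupling of LHS to headways and routing in an actual traffic simulator is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 100, 2000

def simulator(u):
    """Toy stand-in for an expensive simulation driven by uniforms u."""
    return np.exp(u) + 0.1 * rng.normal(size=u.size)

std_err = {}
for scheme in ("independent", "stratified"):
    means = []
    for _ in range(reps):
        if scheme == "independent":
            u = rng.uniform(0.0, 1.0, n)
        else:  # one draw per stratum in random order: 1-D Latin hypercube
            u = (rng.permutation(n) + rng.uniform(0.0, 1.0, n)) / n
        means.append(simulator(u).mean())
    std_err[scheme] = np.std(means)
print(std_err)   # the stratified standard error is markedly smaller
```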

  18. Ion energy loss at maximum stopping power in a laser-generated plasma

    International Nuclear Information System (INIS)

    Cayzac, W.

    2013-01-01

    In the framework of this thesis, a new experimental setup for the measurement of the energy loss of carbon ions at maximum stopping power in a hot laser-generated plasma has been developed and successfully tested. In this parameter range, where the projectile velocity is of the same order of magnitude as the thermal velocity of the plasma free electrons, large uncertainties of up to 50% are present in the stopping-power description. To date, no experimental data are available for benchmarking the theories. Testing the different stopping theories is nevertheless essential for inertial confinement fusion and in particular for the understanding of the alpha-particle heating of the thermonuclear fuel. Here, for the first time, precise measurements were carried out in a reproducible and entirely characterized beam-plasma configuration. It involved a nearly fully-stripped ion beam probing a homogeneous fully-ionized plasma. This plasma was generated by irradiating a thin carbon foil with two high-energy laser beams and features a maximum electron temperature of 200 eV. The plasma conditions were simulated with a two-dimensional radiative hydrodynamic code, while the ion-beam charge-state distribution was predicted by means of a Monte-Carlo code describing the charge-exchange processes of projectile ions in plasma. To probe at maximum stopping power, high-frequency pulsed ion bunches were decelerated to an energy of 0.5 MeV per nucleon. The ion energy loss was determined by a time-of-flight measurement using a specifically developed chemical-vapor-deposition diamond detector that was screened against any plasma radiation. A first experimental campaign was carried out using this newly developed platform, in which a precision better than 200 keV on the energy loss was reached. This allowed several stopping theories, either based on perturbation theory or on a nonlinear T-matrix formalism, to be reliably tested via knowledge of the plasma and beam parameters. A preliminary

  19. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers must strictly adhere to the rules of extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.

  20. Python Leap Second Management and Implementation of Precise Barycentric Correction (barycorrpy)

    Science.gov (United States)

    Kanodia, Shubham; Wright, Jason

    2018-01-01

    We announce barycorrpy (BCPy), a Python implementation to calculate precise barycentric corrections well below the 1 cm/s level, following the algorithm of Wright and Eastman (2014). This level of precision is required in the search for 1 Earth-mass planets in the Habitable Zones of Sun-like stars by the Radial Velocity (RV) method, where the maximum semi-amplitude is about 9 cm/s. We have developed BCPy for use in the pipeline for the next generation of Doppler spectrometers - the Habitable-zone Planet Finder (HPF) and NEID. In this work, we also develop an automated leap-second management routine to improve upon the one available in Astropy. It checks for and downloads a new leap-second file before converting from the UT time scale to TDB.

  1. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  2. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single-objective stochastic control problem which, however, is not in standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in closed form for the original portfolio selection problem.
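
    A sketch of the embedding step in our own notation (w is the variance weight, λ the auxiliary parameter, and x_T* the optimal terminal wealth; the paper's notation and exact statement may differ):

```latex
\begin{align*}
\min_{\pi}\ & w\,\operatorname{Var}(x_T) - \mathbb{E}[x_T]
  \;=\; w\,\mathbb{E}[x_T^2] - w\big(\mathbb{E}[x_T]\big)^2 - \mathbb{E}[x_T],\\
\text{auxiliary LQ family:}\quad
  & \min_{\pi}\ \mathbb{E}\big[w\,x_T^2 - \lambda\,x_T\big],
  \qquad \lambda^{*} = 1 + 2\,w\,\mathbb{E}[x_T^{*}].
\end{align*}
```

    The squared-expectation term blocks a direct LQ treatment; solving the auxiliary family and then optimizing over λ recovers the efficient frontier.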

  3. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. The optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.

  4. Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles

    Directory of Open Access Journals (Sweden)

    Paulo H. Egydio

    2008-01-01

    Full Text Available Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should aim at a functional penis, that is, one straight and rigid enough for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and on the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile rectification and rigidity are required to achieve complete functional restoration, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.

  5. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient price.

  6. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  7. ANALISIS PORTOFOLIO RESAMPLED EFFICIENT FRONTIER BERDASARKAN OPTIMASI MEAN-VARIANCE

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    The right asset allocation decision in portfolio investment can maximize returns and/or minimize risk. A method frequently used in portfolio optimization is the Markowitz mean-variance method. In practice, this method has the weakness of not being very stable: small changes in the estimated input parameters cause large changes in the portfolio composition. For this reason, a portfolio optimization method was developed that can overcome the instability of the mean-variance method ...

  8. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    OpenAIRE

    Daheng Peng; Fang Zhang

    2017-01-01

    In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.

  9. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

    We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t)/t = λ(1 - 2/π)(c_a² + c_s²), where λ is the arrival rate and c_a², c_s² are the squared coefficients of variation of the interarrival and service times.

  10. Coupled bias-variance tradeoff for cross-pose face recognition.

    Science.gov (United States)

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

    Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisited the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. Then, we propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be more stable against a pose difference. Based on this idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance.

  11. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  12. Precision grid and hand motion for accurate needle insertion in brachytherapy

    International Nuclear Information System (INIS)

    McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.; McLaughlin, Patrick W.; Shih, Albert J.

    2011-01-01

    Purpose: In prostate brachytherapy, a grid is used to guide a needle tip toward a preplanned location within the tissue. During insertion, the needle deflects en route, resulting in target misplacement. In this paper, 18-gauge needle insertion experiments into phantom were performed to test the effects of three parameters: the clearance between the grid hole and the needle, the thickness of the grid, and the needle insertion speed. A measurement apparatus consisting of two datum surfaces and a digital depth gauge was developed to quantify needle deflections. Methods: A gauge repeatability and reproducibility (GR&R) test was performed on the measurement apparatus, and it proved capable of measuring a 2 mm tolerance from the target. Replicated experiments were performed on a 2³ factorial design (three parameters at two levels), and the analysis included averages and standard deviations along with an analysis of variance (ANOVA) to find significant single and two-way interaction factors. Results: Results showed that a grid with a tight-clearance hole and a slow needle speed increased the precision and accuracy of needle insertion. The tight grid was vital to enhance the precision and accuracy of needle insertion at both slow and fast insertion speeds; additionally, at slow speed the tight, thick grid improved needle precision and accuracy. Conclusions: In summary, the tight grid is important, regardless of speed. The grid design, which shows the capability to reduce needle deflection in brachytherapy procedures, can potentially be implemented in the brachytherapy procedure.

  13. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation'' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.)
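
    A minimal balanced one-way version of this allocation (toy data, not an RIA study): the classical expected-mean-square identities give method-of-moments estimates of the between-laboratory and within-laboratory components.

```python
import numpy as np

rng = np.random.default_rng(8)
labs, reps = 12, 8
lab_effect = rng.normal(0.0, 2.0, labs)            # true between-lab sd = 2
data = 100.0 + lab_effect[:, None] + rng.normal(0.0, 1.0, (labs, reps))

lab_means = data.mean(axis=1)
msb = reps * np.sum((lab_means - data.mean()) ** 2) / (labs - 1)
msw = np.sum((data - lab_means[:, None]) ** 2) / (labs * (reps - 1))

var_within = msw                                   # residual component
var_between = max((msb - msw) / reps, 0.0)         # method of moments
print(var_between, var_within)                     # near 4 and 1
```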

  14. Precise analytic approximations for the Bessel function J1 (x)

    Science.gov (United States)

    Maass, Fernando; Martin, Pablo

    2018-03-01

    Precise and straightforward analytic approximations for the Bessel function J1(x) have been found. Power series and asymptotic expansions have been used to determine the parameters of the approximation, which acts as a bridge between both expansions; it is a combination of rational and trigonometric functions multiplied by fractional powers of x. Here, several improvements with respect to the so-called Multipoint Quasirational Approximation technique have been performed. Two procedures have been used to determine the parameters of the approximations. The maximum absolute errors are in both cases smaller than 0.01. The zeros of the approximation are also very precise, with errors of less than 0.04 per cent for the first one. A second approximation has also been determined using two more parameters, and in this way the accuracy has been increased to less than 0.001.
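
    Not the authors' approximant (its closed form is not given here), but the two ingredients being bridged are easy to inspect against scipy's reference J1: the truncated power series is accurate at small x and the leading asymptotic form at large x.

```python
import numpy as np
from scipy.special import j1

x = np.linspace(0.05, 20.0, 2000)
series = x / 2 - x**3 / 16 + x**5 / 384            # first terms of the series
asymptotic = np.sqrt(2 / (np.pi * x)) * np.cos(x - 3 * np.pi / 4)

err_series = np.abs(series - j1(x))
err_asymptotic = np.abs(asymptotic - j1(x))
# the series wins at small x, the asymptotic form at large x; a bridging
# approximant keeps the maximum error small over the whole axis
print(err_series[x < 1].max(), err_asymptotic[x > 10].max())
```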

  15. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.

    Science.gov (United States)

    Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K

    2014-02-03

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.

  16. Explicit formulas for the variance of discounted life-cycle cost

    International Nuclear Information System (INIS)

    Noortwijk, Jan M. van

    2003-01-01

    In life-cycle costing analyses, optimal design is usually achieved by minimising the expected value of the discounted costs. As well as the expected value, the corresponding variance may be useful for estimating, for example, the uncertainty bounds of the calculated discounted costs. However, general explicit formulas for calculating the variance of the discounted costs over an unbounded time horizon are not yet available. In this paper, explicit formulas for this variance are presented. They can be easily implemented in software to optimise structural design and maintenance management. The use of the mathematical results is illustrated with some examples
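
    A worked special case of our own (far simpler than the paper's renewal-theoretic setting) shows where such closed forms come from: independent, identically distributed period costs X_k with mean μ and variance σ², discounted by a factor β ∈ (0, 1) per period over an unbounded horizon.

```latex
\begin{align*}
D \;=\; \sum_{k=1}^{\infty} \beta^{k} X_k, \qquad
\mathbb{E}[D] \;=\; \frac{\mu\,\beta}{1-\beta}, \qquad
\operatorname{Var}(D) \;=\; \sigma^{2} \sum_{k=1}^{\infty} \beta^{2k}
  \;=\; \frac{\sigma^{2}\,\beta^{2}}{1-\beta^{2}}.
\end{align*}
```

    The variance sum collapses because independent terms contribute additively, each damped by β^{2k}.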

  17. Increasing precision of turbidity-based suspended sediment concentration and load estimates.

    Science.gov (United States)

    Jastram, John D; Zipper, Carl E; Zelazny, Lucian W; Hyer, Kenneth E

    2010-01-01

    Turbidity is an effective tool for estimating and monitoring suspended sediments in aquatic systems. Turbidity can be measured in situ remotely and at fine temporal scales as a surrogate for suspended sediment concentration (SSC), providing opportunity for a more complete record of SSC than is possible with physical sampling approaches. However, there is variability in turbidity-based SSC estimates and in sediment loadings calculated from those estimates. This study investigated the potential to improve turbidity-based SSC, and by extension the resulting sediment loading estimates, by incorporating hydrologic variables that can be monitored remotely and continuously (typically 15-min intervals) into the SSC estimation procedure. On the Roanoke River in southwestern Virginia, hydrologic stage, turbidity, and other water-quality parameters were monitored with in situ instrumentation; suspended sediments were sampled manually during elevated turbidity events; samples were analyzed for SSC and physical properties including particle-size distribution and organic C content; and rainfall was quantified by geologic source area. The study identified physical properties of the suspended-sediment samples that contribute to SSC estimation variance and hydrologic variables that explained variability of those physical properties. Results indicated that the inclusion of any of the measured physical properties in turbidity-based SSC estimation models reduces unexplained variance. Further, the use of hydrologic variables to represent these physical properties, along with turbidity, resulted in a model, relying solely on data collected remotely and continuously, that estimated SSC with less variance than a conventional turbidity-based univariate model, allowing a more precise estimate of sediment loading. Modeling results are consistent with known mechanisms governing sediment transport in hydrologic systems.
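
    A sketch of the modeling idea with hypothetical variables (not the Roanoke River data): regress log SSC on log turbidity plus a continuously monitored covariate and compare residual variances.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 400
turbidity = rng.lognormal(2.0, 0.8, n)         # continuously monitored
stage = rng.lognormal(0.5, 0.3, n)             # hydrologic stage, monitored
log_ssc = (0.2 + 0.9 * np.log(turbidity) + 0.6 * np.log(stage)
           + rng.normal(0.0, 0.25, n))

def residual_variance(X, y):
    _, ss_res, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ss_res[0] / (y.size - X.shape[1])

ones = np.ones(n)
univariate = residual_variance(np.column_stack([ones, np.log(turbidity)]),
                               log_ssc)
multivariate = residual_variance(
    np.column_stack([ones, np.log(turbidity), np.log(stage)]), log_ssc)
print(univariate, multivariate)   # the extra covariate cuts residual variance
```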

  18. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).

  19. The Relationship Between Maximum Isometric Strength and Ball Velocity in the Tennis Serve

    Directory of Open Access Journals (Sweden)

    Baiget Ernest

    2016-12-01

    Full Text Available The aims of this study were to analyze the relationship between maximum isometric strength levels in different upper and lower limb joints and serve velocity in competitive tennis players, as well as to develop a prediction model based on this information. Twelve male competitive tennis players (mean ± SD; age: 17.2 ± 1.0 years; body height: 180.1 ± 6.2 cm; body mass: 71.9 ± 5.6 kg) were tested for maximum isometric strength levels (i.e., wrist, elbow and shoulder flexion and extension; leg and back extension; shoulder external and internal rotation). Serve velocity was measured using a radar gun. Results showed a strong positive relationship between serve velocity and shoulder internal rotation (r = 0.67; p < 0.05). Low to moderate correlations were also found between serve velocity and wrist, elbow and shoulder flexion-extension, leg and back extension and shoulder external rotation (r = 0.36-0.53; p = 0.377-0.054). Bivariate and multivariate models for predicting serve velocity were developed, with shoulder flexion and internal rotation explaining 55% of the variance in serve velocity (r = 0.74; p < 0.001). The maximum isometric strength level in shoulder internal rotation was strongly related to serve velocity, and a large part of the variability in serve velocity was explained by the maximum isometric strength levels in shoulder internal rotation and shoulder flexion.

  20. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    Directory of Open Access Journals (Sweden)

    Daheng Peng

    2017-10-01

    Full Text Available In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.

  1. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    Science.gov (United States)

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders: ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males), were compared with two non-referred groups [control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent reported gender variance than in the combined medical group (1.7%) or non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in likelihood to express gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  2. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an "analysis of variance with components of variance estimation". This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution, violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, "Studentizing" or other transformation. Non-metric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of the standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean.
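
    For the balanced one-way case, the classical method-of-moments allocation of total variance into between-laboratory and within-laboratory (residual) components can be sketched as follows; the QC values are invented for illustration:

      import numpy as np

      # Hypothetical QC results: one array of assay results per laboratory.
      labs = [np.array([10.1, 9.8, 10.4, 10.0]),
              np.array([11.2, 11.0, 11.5, 10.9]),
              np.array([9.5, 9.9, 9.6, 9.7])]
      k, n = len(labs), len(labs[0])              # laboratories, results per lab
      grand_mean = np.mean(np.concatenate(labs))

      # One-way random-effects ANOVA mean squares.
      ms_between = n * sum((lab.mean() - grand_mean) ** 2 for lab in labs) / (k - 1)
      ms_within = sum(((lab - lab.mean()) ** 2).sum() for lab in labs) / (k * (n - 1))

      # Method-of-moments components of variance.
      var_within = ms_within                                  # residual (within-assay)
      var_between = max((ms_between - ms_within) / n, 0.0)    # between-laboratory
      print(f"between-lab: {var_between:.4f}, within: {var_within:.4f}")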

  3. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  4. The Data-Constrained Generalized Maximum Entropy Estimator of the GLM: Asymptotic Theory and Inference

    Directory of Open Access Journals (Sweden)

    Nicholas Scott Cardell

    2013-05-01

    Full Text Available Maximum entropy methods of parameter estimation are appealing because they impose no additional structure on the data other than that explicitly assumed by the analyst. In this paper we prove that the data-constrained GME estimator of the general linear model is consistent and asymptotically normal. The approach we take in establishing the asymptotic properties concomitantly identifies a new computationally efficient method for calculating GME estimates. Formulae are developed to compute asymptotic variances and to perform Wald, likelihood ratio, and Lagrange multiplier statistical tests on model parameters. Monte Carlo simulations are provided to assess the performance of the GME estimator in both large and small sample situations. Furthermore, we extend our results to maximum cross-entropy estimators and indicate a variant of the GME estimator that is unbiased. Finally, we discuss the relationship of GME estimators to Bayesian estimators, pointing out the conditions under which an unbiased GME estimator would be efficient.

  5. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

    directly. In the mean–variance bi-criterion objective function, risk appears directly; the formulation also considers an ensemble of reservoir models and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio...... optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...
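
    A minimal sketch of the bi-criterion objective for an ensemble of reservoir models (the weighting parameter lam and the NPV figures are illustrative assumptions):

      import numpy as np

      def mean_variance_objective(npv_ensemble, lam=0.1):
          """J = E[NPV] - lam * Var[NPV], to be maximized over control strategies."""
          npv = np.asarray(npv_ensemble, dtype=float)
          return npv.mean() - lam * npv.var(ddof=1)

      # One NPV per plausible geological realization for a candidate strategy.
      npvs = [102.0, 95.0, 110.0, 88.0, 101.0]
      print(mean_variance_objective(npvs))
      # lam = 0 recovers the risk-neutral expected-NPV objective; increasing lam
      # trades expected return for reduced spread across the ensemble.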

  6. High Precision Edge Detection Algorithm for Mechanical Parts

    Directory of Open Access Journals (Sweden)

    Duan Zhenyun

    2018-04-01

    Full Text Available High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the Gaussian integral model of the step edge along the normal section line of the backlight image is constructed, combining the point spread function with a single-step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted according to the Gaussian integral model. A precise subpixel edge location is thus determined by searching for the mean point. Finally, a gear tooth was measured with an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and that subpixel edge location accuracy and computation speed are improved. The maximum error of gear tooth profile total deviation is 1.9 μm compared with the measurement result from the gear measurement center, indicating that the method is reliable enough to meet the requirements of high-precision measurement.
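
    The Gaussian-integral step model amounts to fitting an error-function profile along the edge normal; a minimal sketch with synthetic gray values (amplitude, position and blur are invented):

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      # Step edge blurred by a Gaussian point spread function: the profile along the
      # edge normal is a Gaussian integral (erf); x0 is the subpixel edge location.
      def edge_model(x, amplitude, x0, sigma, offset):
          return offset + 0.5 * amplitude * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

      x = np.arange(-5.0, 6.0)                    # positions along the normal section line
      rng = np.random.default_rng(0)
      gray = edge_model(x, 180.0, 0.37, 1.1, 40.0) + rng.normal(0.0, 1.5, x.size)

      p0 = [np.ptp(gray), 0.0, 1.0, gray.min()]   # rough initial guesses
      popt, _ = curve_fit(edge_model, x, gray, p0=p0)
      print(f"subpixel edge location: {popt[1]:.3f} px")   # the fitted mean point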

  7. SKLUST device for high-precision gluing of MWPC

    International Nuclear Information System (INIS)

    Amaglobeli, N.S.; Burov, R.V.; Sakandelidze, R.M.; Sakhelashvili, T.M.; Chiladze, B.G.; Glonti, G.L.; Glonti, L.N.

    2005-01-01

    The SKLUST device has been created for gluing precision plane-parallel anode and cathode spacer bars and integral anode and cathode frames of MWPCs, or the flat surfaces of large-area cathode planes for them, in the case that thin copper-clad stesalit or glass-cloth-base laminate is used as the cathode, for example for CSC chambers. In contrast to usual gluing, in this device the glued components are not pressed against each other. SKLUST allows making high-precision products in laboratory conditions without preliminary machining of the components, producing a precision article of practically any area with a plane parallelism from ±0.030 down to ±0.006 mm, using a non-calibrated sheet of foiled (or unfoiled) stesalit, glass-cloth-base laminate or other flexible material with a thickness tolerance of ±0.2-0.5 mm or worse. On the biggest of the existing devices it is possible to fabricate an article with maximum dimensions of 2400×250 mm² at a thickness accuracy of (6±0.015) mm (maximum deviation). Since machining of blanks to thickness, or the use of exact blanks, is completely excluded from the technological cycle, the manufacturing process becomes simpler and the cost of the articles is substantially reduced, especially for mass production.

  8. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...... in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude....

  9. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a squared coefficient of variation (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CV_r²) for comparison with our estimate of noise-free or 'true' heterogeneity (CV_t²). We found that CV_t² was only 5.4% higher than CV_r². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CV_t² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CV_t² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CV_t² was evaluated using three repeated PET scans from five subjects. Individual CV_t² values were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CV_t² in PET scans and may be useful for similar statistical problems in experimental data.
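
    The estimation idea reduces to a simple linear fit of measured variance against 1/n; a sketch with invented numbers:

      import numpy as np

      # Normalized total variance of images averaged from n measurements behaves as
      # CV2(n) = CV2_true + c/n, so regressing on 1/n gives the noise-free variance
      # as the intercept.
      n_values = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
      cv2_measured = np.array([0.46, 0.34, 0.28, 0.25, 0.235])   # hypothetical

      slope, intercept = np.polyfit(1.0 / n_values, cv2_measured, 1)
      print(f"noise-free CV^2: {intercept:.3f}, noise contribution per frame: {slope:.3f}")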

  10. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  11. Gender variance in childhood and sexual orientation in adulthood: a prospective study.

    Science.gov (United States)

    Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T

    2013-11-01

    Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. To understand the extent to which the association applies to the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood, in the general population. In 1983, we measured childhood gender variance in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in

  12. 29 CFR 1926.2 - Variances from safety and health standards.

    Science.gov (United States)

    2010-07-01

    29 CFR 1926.2, Labor Regulations Relating to Labor (Continued), Occupational Safety and Health Administration: (a) Variances from standards which are, or may be, published in this...

  13. Precision of fit between implant impression coping and implant replica pairs for three implant systems.

    Science.gov (United States)

    Nicoll, Roxanna J; Sun, Albert; Haney, Stephan; Turkyilmaz, Ilser

    2013-01-01

    The fabrication of an accurately fitting implant-supported fixed prosthesis requires multiple steps, the first of which is assembling the impression coping on the implant. An imprecise fit of the impression coping on the implant will cause errors that are magnified in subsequent steps of prosthesis fabrication. The purpose of this study was to characterize the 3-dimensional (3D) precision of fit between impression coping and implant replica pairs for 3 implant systems. The selected implant systems represent the 3 main joint types used in implant dentistry: external hexagonal, internal trilobe, and internal conical. Ten impression copings and 10 implant replicas from each of the 3 systems, B (Brånemark System), R (NobelReplace Select), and A (NobelActive), were paired. A standardized aluminum test body was luted to each impression coping, and the corresponding implant replica was embedded in a stone base. A coordinate measuring machine was used to quantify the maximum range of displacement in a vertical direction as a function of the tightening force applied to the guide pin. Maximum angular displacement in a horizontal plane was measured as a function of manual clockwise or counterclockwise rotation. Vertical and rotational positioning were analyzed by using 1-way analysis of variance (ANOVA). The Fisher protected least significant difference (PLSD) multiple comparisons test of the means was applied when the F-test in the ANOVA was significant (α=.05). The mean and standard deviation for change in the vertical positioning of impression copings were 4.3 ±2.1 μm for implant system B, 2.8 ±4.2 μm for implant system R, and 20.6 ±8.8 μm for implant system A. The mean and standard deviation for rotational positioning were 3.21 ±0.98 degrees for system B, 2.58 ±1.03 degrees for system R, and 5.30 ±0.79 degrees for system A. The P-value for vertical positioning between groups A and B and between groups A and R was <.001. No significant differences were found for

  14. High precision electrostatic potential calculations for cylindrically symmetric lenses

    International Nuclear Information System (INIS)

    Edwards, David Jr.

    2007-01-01

    A method is developed for potential calculation within cylindrically symmetric electrostatic lenses using mesh relaxation techniques, capable of considerably higher accuracy than currently available methods. The method involves (i) creating very high order algorithms (orders of 6, 8, and 10) for determining the potentials at points in the net using surrounding point values, (ii) eliminating the effect of the large errors caused by singular points, and (iii) reducing gradients in the high-gradient regions of the geometry, thereby allowing the algorithms used in these regions to achieve greater precision; (ii) and (iii) are achieved by the use of telescopic multiregions. In addition, an algorithm is developed for points one unit from a metal surface, allowing general mesh point algorithms to be used in these situations and thereby taking advantage of the enhanced precision of the latter. A maximum error function dependent on a sixth-order gradient of the potential is defined. With this, the single-point algorithmic errors can be viewed over the entire net. Finally, it is demonstrated that by utilizing the above concepts and procedures, the potential at a point in a reasonably high gradient region of a test geometry can achieve a precision better than 10⁻¹⁰.
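
    A much-simplified 2-D Cartesian analogue of the mesh relaxation idea (second-order stencil only; the paper's method is axisymmetric and uses 6th- to 10th-order algorithms with telescopic multiregions):

      import numpy as np

      def relax(potential, fixed_mask, tol=1e-10, max_iter=100000):
          """Jacobi relaxation of Laplace's equation; fixed_mask marks electrode points."""
          v = potential.copy()
          for _ in range(max_iter):
              v_new = v.copy()
              # Second-order 4-point average at interior points.
              v_new[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1]
                                          + v[1:-1, :-2] + v[1:-1, 2:])
              v_new[fixed_mask] = potential[fixed_mask]   # re-impose electrode values
              if np.max(np.abs(v_new - v)) < tol:
                  break
              v = v_new
          return v_new

      # Two parallel plate electrodes held at 0 V and 1 V on a small grid.
      grid = np.zeros((40, 40))
      grid[:, -1] = 1.0
      mask = np.zeros_like(grid, dtype=bool)
      mask[:, 0] = mask[:, -1] = True
      solution = relax(grid, mask)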

  15. Allowing variance may enlarge the safe operating space for exploited ecosystems.

    Science.gov (United States)

    Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten

    2015-11-17

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.

  16. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case.

  17. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra, and it is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. At a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
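
    To make the DAS/DMAS contrast concrete, a sketch of both beamformers on already delayed (time-aligned) channel data; MVB-DMAS additionally replaces the DAS-like inner sums with minimum-variance (Capon) weighted sums, which is omitted here:

      import numpy as np
      from itertools import combinations

      # x has shape (n_channels, n_samples); each row is one delayed RF signal.
      def das(x):
          return x.sum(axis=0)

      def dmas(x):
          # Pairwise products with a signed square root, keeping the output
          # dimensionally comparable to the input samples.
          y = np.zeros(x.shape[1])
          for i, j in combinations(range(x.shape[0]), 2):
              prod = x[i] * x[j]
              y += np.sign(prod) * np.sqrt(np.abs(prod))
          return y

      rng = np.random.default_rng(1)
      x = np.sin(np.linspace(0.0, 40.0, 1024)) + rng.normal(0.0, 0.5, (16, 1024))
      print(das(x).shape, dmas(x).shape)          # both (1024,)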

  18. Study of the variance of a Monte Carlo calculation. Application to weighting

    Energy Technology Data Exchange (ETDEWEB)

    Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)

    1969-04-15

    One of the main difficulties in Monte Carlo computations is the estimation of the variance of the results. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the results obtained to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious media: a body with constant cross section, without absorption, where all collisions are elastic and isotropic; and a body with variable cross section (presenting a very pronounced peak and hole), with anisotropy for high-energy elastic collisions, and with the possibility of inelastic collisions (this body presents all the features that can appear in a real case)

  19. Quantitative gel electrophoresis: new records in precision by elaborated staining and detection protocols.

    Science.gov (United States)

    Deng, Xi; Schröder, Simone; Redweik, Sabine; Wätzig, Hermann

    2011-06-01

    Gel electrophoresis (GE) is a very common analytical technique for proteome research and protein analysis. Despite being developed decades ago, there is still a considerable need to improve its precision. Using the fluorescence of Colloidal Coomassie Blue-stained proteins in the near-infrared (NIR), the major error source caused by unpredictable background staining is strongly reduced. This result was generalized for various types of detectors. Since GE is a multi-step procedure, standardization of every single step is required. After detailed analysis of all steps, staining and destaining were identified as the major sources of the remaining variation. By employing standardized protocols, pooled percent relative standard deviations of 1.2-3.1% for band intensities were achieved for one-dimensional separations in repetitive experiments. The analysis of variance suggests that the same batch of staining solution should be used for all gels of one experimental series, to minimize day-to-day variation and to obtain high precision.

  20. Adjustment of heterogeneous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have a lower mean and lower variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  1. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  2. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data for the estimation of the integrated variance of asset prices is beneficial, but with an increasing number of observations so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance robust to microstructure noise, as well as for testing for the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable, while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
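
    Under iid microstructure noise with variance ω², the expected realized variance grows linearly in the number of observations, E[RV(n)] ≈ IV + 2nω²; a simulation sketch of the regression idea (all parameters invented):

      import numpy as np

      def realized_variance(prices):
          log_returns = np.diff(np.log(prices))
          return np.sum(log_returns ** 2)

      # Efficient log-price plus iid microstructure noise.
      rng = np.random.default_rng(2)
      n_obs = 23400
      efficient = np.cumsum(rng.normal(0.0, 1e-4, n_obs))
      prices = np.exp(efficient + rng.normal(0.0, 5e-4, n_obs))

      # Realized variance on sparser and sparser subsamples.
      steps = np.array([1, 2, 5, 10, 20, 50, 100])
      n_used = n_obs // steps
      rv = np.array([realized_variance(prices[::s]) for s in steps])

      # RV regressed on n: the intercept estimates the integrated variance,
      # the slope estimates twice the noise variance.
      slope, intercept = np.polyfit(n_used, rv, 1)
      print(f"IV estimate: {intercept:.6f}, noise variance estimate: {slope / 2:.2e}")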

  3. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    Science.gov (United States)

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define individual bodies are very apparent, the collective body's geographical or cultural limits (e.g. "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning also holds in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (e.g. neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.

  4. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  5. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure of the performance of a structural system, and it is always influenced by the distribution parameters of the inputs. In order to identify the influential distribution parameters and to clarify how those distribution parameters influence the output variance, this work presents a derivative-based variance sensitivity decomposition according to Sobol's variance decomposition, and proposes derivative-based main and total sensitivity indices. By transforming the derivatives of the various-order variance contributions into the form of expectations via a kernel function, the proposed main and total sensitivity indices can be seen as a "by-product" of Sobol's variance-based sensitivity analysis without any additional output evaluation. Since Sobol's variance-based sensitivity indices can be computed efficiently by the sparse grid integration method, this work also employs sparse grid integration to compute the derivative-based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method

  6. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    , the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean......-variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative...... be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computationally demanding methods such as the certainty equivalence method, and as an individual control strategy when...

  7. Investigating the minimum achievable variance in a Monte Carlo criticality calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros; Eduard Hoogenboom, J. [Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2008-07-01

    The sources of variance in a Monte Carlo criticality calculation are identified and their contributions analyzed. A zero-variance configuration is initially simulated using analytically calculated adjoint functions for biasing. From there, the various sources are analyzed. It is shown that the minimum threshold comes from the fact that the fission source is approximated. In addition, the merits of a simple variance reduction method, such as implicit capture, are shown when compared to an analog simulation. Finally, it is shown that when non-exact adjoint functions are used for biasing, the variance reduction is rather insensitive to the quality of the adjoints, suggesting that the generation of the adjoints should have as low a CPU cost as possible, in order to offset the CPU cost of implementing the biasing in a simulation. (authors)
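
    As an aside on the implicit capture scheme mentioned above, the classic weight-based survival biasing can be sketched in a few lines (the cross sections are invented):

      import numpy as np

      rng = np.random.default_rng(3)
      sigma_total, sigma_absorb = 1.0, 0.3

      def collide_analog(weight=1.0):
          # Analog game: the history terminates on absorption.
          if rng.random() < sigma_absorb / sigma_total:
              return 0.0
          return weight

      def collide_implicit(weight=1.0):
          # Implicit capture: the history always survives; the weight carries the
          # absorption physics, preserving the mean while reducing variance.
          return weight * (1.0 - sigma_absorb / sigma_total)

      surviving = [collide_analog() for _ in range(100000)]
      print(np.mean(surviving), collide_implicit())   # both close to 0.7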

  8. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    Science.gov (United States)

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  9. Evaluation of accuracy and precision of a smartphone based automated parasite egg counting system in comparison to the McMaster and Mini-FLOTAC methods.

    Science.gov (United States)

    Scare, J A; Slusarewicz, P; Noel, M L; Wielgus, K M; Nielsen, M K

    2017-11-30

    Fecal egg counts are emphasized for guiding equine helminth parasite control regimens due to the rise of anthelmintic resistance. This, however, poses further challenges, since egg counting results are prone to issues such as operator dependency, method variability, equipment requirements, and time commitment. The use of image analysis software for performing fecal egg counts is promoted in recent studies to reduce the operator dependency associated with manual counts. In an attempt to remove the operator dependency associated with current methods, we developed a diagnostic system that utilizes a smartphone and employs image analysis to generate automated egg counts. The aims of this study were (1) to determine the precision of the first smartphone prototype, the modified McMaster, and ImageJ; and (2) to determine the precision, accuracy, sensitivity, and specificity of the second smartphone prototype, the modified McMaster, and Mini-FLOTAC techniques. Repeated counts on fecal samples naturally infected with equine strongyle eggs were performed using each technique to evaluate precision. Triplicate counts on 36 egg count negative samples and 36 samples spiked with strongyle eggs at 5, 50, 500, and 1000 eggs per gram were performed using the second smartphone system prototype, Mini-FLOTAC, and McMaster to determine technique accuracy. Precision across the techniques was evaluated using the coefficient of variation. In regard to the first aim of the study, the McMaster technique performed with significantly less variance than the first smartphone prototype and ImageJ; the smartphone and ImageJ performed with equal variance. In regard to the second aim of the study, the second smartphone system prototype had significantly better precision than the McMaster; the accuracy, sensitivity, and specificity of the smartphone system were 64.51%, 21.67%, and 32.53%, respectively. The Mini-FLOTAC was significantly more accurate than the McMaster and the smartphone system; smartphone and McMaster counts did not have statistically different accuracies.

  10. Expected precision of neutron multiplicity measurements of waste drums

    International Nuclear Information System (INIS)

    Ensslin, N.; Krick, M.S.; Menlove, H.O.

    1995-01-01

    DOE facilities are beginning to apply passive neutron multiplicity counting techniques to the assay of plutonium scrap and residues. There is also considerable interest in applying this new measurement technique to 208-liter waste drums. The additional information available from multiplicity counting could flag the presence of shielding materials or improve assay accuracy by correcting for matrix effects such as (α,n) induced fission or detector efficiency variations. The potential for multiplicity analysis of waste drums, and the importance of better detector design, can be estimated by calculating the expected assay precision using a Figure of Merit code for assay variance. This paper reports results obtained as a function of waste drum content and detector characteristics. We find that multiplicity analysis of waste drums is feasible if a high-efficiency neutron counter is used. However, results are significantly poorer if the multiplicity analysis must be used to solve for detection efficiency

  11. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    Science.gov (United States)

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.
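
    The ANOVA step of ANOVA-PCA partitions the centered data matrix into factor-mean matrices plus a residual, whose sums of squares quantify each variance source (exact additivity assumes a balanced, crossed design); a sketch with random stand-in data:

      import numpy as np

      def anova_partition(X, cultivar, treatment):
          Xc = X - X.mean(axis=0)                   # center across all samples
          def factor_means(labels):
              M = np.zeros_like(Xc)
              for g in np.unique(labels):
                  M[labels == g] = Xc[labels == g].mean(axis=0)
              return M
          m_cult, m_treat = factor_means(cultivar), factor_means(treatment)
          residual = Xc - m_cult - m_treat
          total = (Xc ** 2).sum()
          return {name: 100.0 * (M ** 2).sum() / total
                  for name, M in (("cultivar", m_cult), ("treatment", m_treat),
                                  ("residual", residual))}

      rng = np.random.default_rng(4)
      X = rng.normal(size=(28, 161))                # 28 extracts x 161 wavelengths
      cultivar = np.repeat([0, 1], 14)
      treatment = np.tile(np.arange(7), 4)
      print(anova_partition(X, cultivar, treatment))  # PCA would then be run per matrix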

  12. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  13. Real-time Nyquist signaling with dynamic precision and flexible non-integer oversampling.

    Science.gov (United States)

    Schmogrow, R; Meyer, M; Schindler, P C; Nebendahl, B; Dreschmann, M; Meyer, J; Josten, A; Hillerkuss, D; Ben-Ezra, S; Becker, J; Koos, C; Freude, W; Leuthold, J

    2014-01-13

    We demonstrate two efficient processing techniques for Nyquist signals, namely the computation of signals using dynamic precision as well as arbitrary rational oversampling factors. With these techniques, along with massively parallel processing, it becomes possible to generate and receive high data rate Nyquist signals with flexible symbol rates and bandwidths, a feature which is highly desirable for novel flexgrid networks. We achieved maximum bit rates of 252 Gbit/s in real time.
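
    Flexible non-integer oversampling of a symbol stream is naturally expressed as polyphase resampling by a rational factor p/q; a sketch (the modulation format and factor are invented, and SciPy's default Kaiser-windowed sinc stands in for a Nyquist shaping filter):

      import numpy as np
      from scipy.signal import resample_poly

      rng = np.random.default_rng(5)
      symbols = rng.choice([-1.0, 1.0], size=4096)     # BPSK for brevity

      p, q = 5, 4                                      # oversampling factor 1.25
      waveform = resample_poly(symbols, up=p, down=q)  # polyphase anti-imaging filter
      print(symbols.size, waveform.size)               # 4096 -> 5120 samples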

  14. Maximum intensity projection MR angiography using shifted image data

    International Nuclear Information System (INIS)

    Machida, Yoshio; Ichinose, Nobuyasu; Hatanaka, Masahiko; Goro, Takehiko; Kitake, Shinichi; Hatta, Junicchi.

    1992-01-01

    The quality of MR angiograms has been significantly improved in the past several years. Spatial resolution, however, is still not sufficient for clinical use. On the other hand, MR image data can be interpolated at arbitrary positions using the Fourier shift theorem, and the quality of multi-planar reformatted images has been reported to improve remarkably when such 'shifted data' are used. In this paper, we have clarified the benefit of 'shifted data' for maximum intensity projection MR angiography. Our experimental studies and theoretical considerations showed that the quality of MR angiograms is significantly improved using 'shifted data', as follows: 1) remarkable reduction of mosaic artifact, 2) improvement of spatial continuity of the blood vessels, and 3) reduction of the variance of the signal intensity along the blood vessels. In other words, the angiograms look much 'finer' than conventional ones, although the spatial resolution is not improved theoretically. Furthermore, we found that the quality of MR angiograms does not improve significantly with 'shifted data' more than twice as dense as the original data. (author)
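
    The underlying operation is the Fourier shift theorem: a (possibly fractional) shift of d samples multiplies the spectrum by a linear phase ramp. A one-dimensional sketch:

      import numpy as np

      def fourier_shift(signal, d):
          """Shift a periodic signal by d samples (d need not be an integer)."""
          k = np.fft.fftfreq(signal.size)              # frequencies in cycles/sample
          spectrum = np.fft.fft(signal)
          shifted = np.fft.ifft(spectrum * np.exp(-2j * np.pi * k * d))
          return shifted.real                          # real input -> (nearly) real output

      x = np.sin(2.0 * np.pi * 0.05 * np.arange(128))
      x_half = fourier_shift(x, 0.5)                   # values interpolated between samples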

  15. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1), 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)

  16. Temperature dependence of attitude sensor coalignments on the Solar Maximum Mission (SMM)

    Science.gov (United States)

    Pitone, D. S.; Eudell, A. H.; Patt, F. S.

    1990-01-01

    The temperature correlation of the relative coalignment between the fine-pointing sun sensor and fixed-head star trackers measured on the Solar Maximum Mission (SMM) is analyzed. An overview of the SMM, including mission history and configuration, is given. Possible causes of the misalignment variation are discussed, with focus placed on spacecraft bending due to solar-radiation pressure, electronic or mechanical changes in the sensors, uncertainty in the attitude solutions, and mounting-plate expansion and contraction due to thermal effects. Yaw misalignment variation from the temperature profile is assessed, and suggestions for spacecraft operations are presented, involving methods to incorporate flight measurements of the temperature-versus-alignment function and its variance in operational procedures and the spacecraft structure temperatures in the attitude telemetry record.

  17. Analysis of Gene Expression Variance in Schizophrenia Using Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Anna A. Igolkina

    2018-06-01

    Full Text Available Schizophrenia (SCZ) is a psychiatric disorder of unknown etiology. There is evidence suggesting that aberrations in neurodevelopment are a significant attribute of schizophrenia pathogenesis and progression. To identify biologically relevant molecular abnormalities affecting neurodevelopment in SCZ, we used cultured neural progenitor cells derived from olfactory neuroepithelium (CNON) cells. Here, we tested the hypothesis that variance in gene expression differs between individuals from SCZ and control groups. In CNON cells, variance in gene expression was significantly higher in SCZ samples in comparison with control samples. Variance in gene expression was enriched in five molecular pathways: serine biosynthesis, PI3K-Akt, MAPK, neurotrophin and focal adhesion. More than 14% of the variance in disease status was explained within the logistic regression model (C-value = 0.70) by predictors accounting for gene expression in 69 genes from these five pathways. Structural equation modeling (SEM) was applied to explore how the structure of these five pathways was altered between SCZ patients and controls. Four out of five pathways showed differences in the estimated relationships among genes: between KRAS and NF1, and KRAS and SOS1 in the MAPK pathway; between PSPH and SHMT2 in serine biosynthesis; between AKT3 and TSC2 in the PI3K-Akt signaling pathway; and between CRK and RAPGEF1 in the focal adhesion pathway. Our analysis provides evidence that variance in gene expression is an important characteristic of SCZ, and SEM is a promising method for uncovering altered relationships between specific genes, thus suggesting affected gene regulation associated with the disease. We identified altered gene-gene interactions in pathways enriched for genes with increased variance in expression in SCZ. These pathways and loci were previously implicated in SCZ, providing further support for the hypothesis that gene expression variance plays an important role in the etiology

  18. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    Science.gov (United States)

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
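
    Both transformations are one-liners in practice; a sketch on hypothetical sliding-window correlations (the Box-Cox step requires strictly positive values, hence the shift):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      correlations = np.clip(rng.normal(0.4, 0.2, 200), -0.99, 0.99)   # hypothetical

      fisher_z = np.arctanh(correlations)              # Fisher (z) transformation
      shifted = fisher_z - fisher_z.min() + 1e-6       # make strictly positive
      boxcox_values, lam = stats.boxcox(shifted)       # maximum-likelihood lambda
      print(f"estimated Box-Cox lambda: {lam:.2f}")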

  19. Origin and consequences of the relationship between protein mean and variance.

    Science.gov (United States)

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law-like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law-like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.

  20. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    Full Text Available The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
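
    A sketch of the usual discrete replication of the variance swap rate from a strip of out-of-the-money option quotes, in the spirit of the VIX-style discretization of the continuous formula (all quotes are invented):

      import numpy as np

      def variance_strike(strikes, otm_prices, forward, r, T):
          strikes = np.asarray(strikes, dtype=float)
          otm_prices = np.asarray(otm_prices, dtype=float)
          dK = np.gradient(strikes)                    # strike spacings (central differences)
          k0 = strikes[strikes <= forward].max()       # first strike at or below the forward
          contrib = dK / strikes ** 2 * np.exp(r * T) * otm_prices
          return 2.0 / T * contrib.sum() - (forward / k0 - 1.0) ** 2 / T

      strikes = np.arange(80.0, 125.0, 5.0)
      otm = np.array([0.3, 0.7, 1.4, 2.6, 4.5, 2.8, 1.5, 0.8, 0.4])   # puts below F, calls above
      print(variance_strike(strikes, otm, forward=101.0, r=0.01, T=0.25))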

  1. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  2. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    We consider a tractable affine stochastic volatility model that generalizes the seminal Heston (1993) model by augmenting it with jumps in the instantaneous variance process. In this framework, we consider both realized variance options and VIX options, and we examine the impact of the distribution...... of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...

  3. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Full Text Available Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean are proposed in two-phase stratified sampling for different situations of non-response at the first phase and the second phase. Expressions for the variances of these estimators are derived. Furthermore, replication-based jackknife estimators of these variances are also derived. A simulation study has been conducted to investigate the performance of the suggested estimators.
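
    The replication idea is easiest to see in the basic delete-one jackknife; design-specific replicate weights for two-phase sampling follow the same resampling pattern. A sketch:

      import numpy as np

      def jackknife_variance(sample, statistic=np.mean):
          """Delete-one jackknife variance estimate of a statistic."""
          sample = np.asarray(sample, dtype=float)
          n = sample.size
          replicates = np.array([statistic(np.delete(sample, i)) for i in range(n)])
          return (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)

      y = np.array([4.1, 5.3, 3.8, 6.0, 5.1, 4.7, 5.6])
      print(jackknife_variance(y))                 # ~ var(y, ddof=1)/len(y) for the mean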

  4. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
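
    For contrast, the conventional MR fingerprinting reconstruction that the ML approach generalizes is a per-voxel dictionary match. A minimal sketch with a toy dictionary (values illustrative):

```python
import numpy as np

def match_fingerprint(signal, dictionary, params):
    """Conventional MRF step for one voxel: pick the dictionary atom with
    the largest normalized inner product with the measured signal
    evolution and return its tissue parameter."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    scores = np.abs(d @ np.conj(signal))
    return params[np.argmax(scores)]

# Toy dictionary: three atoms of a 4-point signal evolution
dico = np.array([[1, .8, .6, .5], [1, .6, .4, .3], [1, .4, .2, .1]])
t1_values = np.array([800.0, 1000.0, 1400.0])   # ms, hypothetical
print(match_fingerprint(np.array([1, .62, .41, .28]), dico, t1_values))
```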

  5. Precision medicine in myasthenia gravis: begin from the data precision

    Science.gov (United States)

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. MG data are far from individually precise at present, partly due to the rarity and heterogeneity of the disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity and response to treatment, based on the literature and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and the scientific basis of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759

  6. Ultra-precise tracking control of piezoelectric actuators via a fuzzy hysteresis model.

    Science.gov (United States)

    Li, Pengzhi; Yan, Feng; Ge, Chuan; Zhang, Mingchao

    2012-08-01

    In this paper, a novel Takagi-Sugeno (T-S) fuzzy system based model is proposed for hysteresis in piezoelectric actuators. The antecedent and consequent structures of the fuzzy hysteresis model (FHM) can be identified on-line through a uniform partition approach and the recursive least squares (RLS) algorithm, respectively. With respect to controller design, the inverse of the FHM is used to develop a feedforward controller that cancels out the hysteresis effect. A hybrid controller is then designed for high-performance tracking. It combines the feedforward controller with a proportional integral differential (PID) controller favourable for stabilization and disturbance compensation. To achieve nanometer-scale tracking precision, an enhanced adaptive hybrid controller is further developed. It uses real-time input and output data to update the FHM, thus changing the feedforward controller to suit the on-site hysteresis character of the piezoelectric actuator. Finally, for three tracking cases (50 Hz sinusoidal, multiple-frequency sinusoidal, and 50 Hz triangular trajectories), experimental results demonstrate the efficiency of the proposed controllers. In particular, the maximum error of 50 Hz sinusoidal tracking is reduced to 5.8 nm, only 0.35% of the maximum desired displacement, which clearly shows the ultra-precise nanometer-scale tracking performance of the developed adaptive hybrid controller.

  7. How the Weak Variance of Momentum Can Turn Out to be Negative

    Science.gov (United States)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities; therefore, investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.

  8. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    Campanelli, Mark; Kacker, Raghu; Kessel, Rüdiger

    2013-01-01

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)
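
    The idea behind a variance gradient ∂Var(Y)/∂Var(X_i) can be approximated crudely by Monte Carlo: shrink one input's variance slightly and difference the output variances. A rough sketch on the Ishigami benchmark mentioned above (a finite-difference illustration, not the paper's GUM-based derivation):

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2
            + b * x[:, 2]**4 * np.sin(x[:, 0]))

rng = np.random.default_rng(1)
n, eps = 200_000, 0.99             # sample size; variance shrink factor
base = rng.uniform(-np.pi, np.pi, size=(n, 3))
var_y = ishigami(base).var()

# Shrink one input's spread and see how much output variance drops
# per unit of input variance (shared samples keep the MC noise low).
for i in range(3):
    shrunk = base.copy()
    shrunk[:, i] *= np.sqrt(eps)   # reduces Var(X_i) by factor eps
    dvar_in = base[:, i].var() * (1 - eps)
    print(f"VG_{i+1} ~ {(var_y - ishigami(shrunk).var()) / dvar_in:.3f}")
```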

  9. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed in an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multi period mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  10. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    DEFF Research Database (Denmark)

    Højgaard, Bjarne; Vigna, Elena

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean...... as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate...... that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting....

  11. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that the skewness and asymmetry have no significant impact on the mean variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts whether the common market portfolios such as the S&P 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...

  12. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    2008-12-01

    slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the...equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the...emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that

  13. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise the modal parameters can be extracted from the correlation functions of the response, only. One of the weaknesses of the RD...... technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  14. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    . The genomic model commonly found in the literature, with marker effects affecting mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable...... of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted...... to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance is discussed and an extension of the McMC algorithm...

  15. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise the modal parameters can be extracted from the correlation functions of the response, only. One of the weaknesses of the RD...... technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...

  16. Experimental study on an FBG strain sensor

    Science.gov (United States)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause heavy financial and humanitarian losses. Real-time, early-warning monitoring of landslides is therefore important for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of fiber Bragg gratings (FBGs), an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor is treated as a cantilever beam fixed at one end. According to the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor is established using the principles of elastic mechanics. The accuracy of the established formula was verified through laboratory calibration tests and model slope monitoring experiments. The landslide displacement can be calculated from the changes in the FBG central wavelength obtained remotely by the demodulation instrument, using the established theoretical formula. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, with a corresponding variance of 0.50. The error between the theoretical and the measured displacement decreases gradually, and the variance of the error also decreases, indicating that the theoretical results are increasingly reliable. This shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision, early-warning monitoring of slopes.
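
    The wavelength-to-strain conversion underlying such sensors is the standard FBG relation (temperature effects neglected); the numbers below are illustrative, not from the paper:

```python
# Standard FBG strain relation: delta_lambda / lambda_0 = (1 - p_e) * strain,
# with p_e ~ 0.22 the effective photo-elastic coefficient of silica fibre.
lambda_0 = 1550.000e-9    # Bragg wavelength at rest [m] (hypothetical)
delta_lambda = 0.120e-9   # measured wavelength shift [m] (hypothetical)
p_e = 0.22

strain = delta_lambda / (lambda_0 * (1 - p_e))
print(f"strain ~ {strain * 1e6:.0f} microstrain")   # ~ 99 microstrain
```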

  17. Precision machining commercialization

    International Nuclear Information System (INIS)

    1978-01-01

    To accelerate precision machining development so as to realize more of the potential savings within the next few years of known Department of Defense (DOD) part procurement, the Air Force Materials Laboratory (AFML) is sponsoring the Precision Machining Commercialization Project (PMC). PMC is part of the Tri-Service Precision Machine Tool Program of the DOD Manufacturing Technology Five-Year Plan. The technical resources supporting PMC are provided under sponsorship of the Department of Energy (DOE). The goal of PMC is to minimize precision machining development time and cost risk for interested vendors. PMC will do this by making available the high precision machining technology as developed in two DOE contractor facilities, the Lawrence Livermore Laboratory of the University of California and the Union Carbide Corporation, Nuclear Division, Y-12 Plant, at Oak Ridge, Tennessee

  18. Some novel inequalities for fuzzy variables on the variance and its rational upper bound

    Directory of Open Access Journals (Sweden)

    Xiajie Yi

    2016-02-01

    Full Text Available Variance is of great significance in measuring the degree of deviation, which has gained extensive usage in many fields in practical scenarios. The definition of the variance on the basis of the credibility measure was first put forward in 2002. Following this idea, the calculation of the accurate value of the variance for some special fuzzy variables, like the symmetric and asymmetric triangular fuzzy numbers and the Gaussian fuzzy numbers, is presented in this paper, which turns out to be far more complicated. Thus, in order to better implement variance in real-life projects like risk control and quality management, we suggest a rational upper bound of the variance based on an inequality, together with its calculation formula, which can largely simplify the calculation process within a reasonable range. Meanwhile, some discussion of the relation between the variance and its rational upper bound is presented to show the rationality of the latter. Furthermore, two inequalities regarding the rational upper bound of variance and standard deviation of the sum of two fuzzy variables and their individual variances and standard deviations are proved. Subsequently, some numerical examples are illustrated to show the effectiveness and the feasibility of the proposed inequalities.

  19. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  20. Precision of bone densitometry measurements: When is change true change and does it vary across bone density values?

    International Nuclear Information System (INIS)

    Wong, J.C.H.; Griffiths, M.R.

    2003-01-01

    The precision error of the bone densitometer is used to interpret significant change in bone mineral density (BMD) in serial studies. The precision error can be expressed as a standard deviation (SD) or a coefficient of variation (CV). The aims of this study are to determine the precision error over a range of BMD values and to demonstrate the application of the precision error in clinical practice. A bone phantom was used consisting of a perspex block with eight compartments containing varying amounts of hydroxyapatite powder to simulate a range of bone densities. The block was scanned 21 times and manual regions were placed over each compartment to measure the BMD in each compartment. There were no significant differences in the variances or SDs across the eight compartments, that is, over the range of BMD normally encountered in clinical practice. However, the calculated CVs show a progressive fall in values as the BMD rises. Therefore, the SD should be used to calculate significant BMD change. In a practice with quality control procedures in place to detect calibration drift and with appropriately trained personnel, a change of approximately 0.05 g/cm² is generally regarded as being a significant change at a 95% confidence level. Copyright (2003) Blackwell Science Pty Ltd
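
    The "significant change" threshold quoted above follows from the usual least-significant-change rule for comparing two measurements at 95% confidence. A one-line check with an assumed precision SD:

```python
import math

# LSC = 1.96 * sqrt(2) * SD ~ 2.77 * SD (95% confidence, two scans)
precision_sd = 0.018   # assumed densitometer precision error [g/cm^2]
lsc = 1.96 * math.sqrt(2) * precision_sd
print(f"LSC ~ {lsc:.3f} g/cm^2")   # ~ 0.050, matching the value above
```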

  1. A class of multi-period semi-variance portfolio for petroleum exploration and development

    Science.gov (United States)

    Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei

    2012-10-01

    Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, single-period portfolio selection is extended to multiple periods. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated. In addition, a hybrid genetic algorithm, which uses the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
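
    The downside-risk measure substituted for variance here is straightforward to compute. A minimal sketch of sample semi-variance (target defaulting to the sample mean, as in Markowitz's variant):

```python
import numpy as np

def semi_variance(returns, target=None):
    """Mean squared shortfall of returns below a target."""
    r = np.asarray(returns, dtype=float)
    t = r.mean() if target is None else target
    downside = np.minimum(r - t, 0.0)
    return np.mean(downside**2)

print(semi_variance([0.04, -0.02, 0.01, -0.05, 0.03]))   # hypothetical returns
```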

  2. Bayesian evaluation of constrained hypotheses on variances of multiple independent groups

    NARCIS (Netherlands)

    Böing-Messing, F.; van Assen, M.A.L.M.; Hofman, A.D.; Hoijtink, H.; Mulder, J.

    2017-01-01

    Research has shown that independent groups often differ not only in their means, but also in their variances. Comparing and testing variances is therefore of crucial importance to understand the effect of a grouping variable on an outcome variable. Researchers may have specific expectations

  3. Analysis of conditional genetic effects and variance components in developmental genetics.

    Science.gov (United States)

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  4. Development of a treatability variance guidance document for US DOE mixed-waste streams

    International Nuclear Information System (INIS)

    Scheuer, N.; Spikula, R.; Harms, T.

    1990-03-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs

  5. Multicore fibre photonic lanterns for precision radial velocity science

    Science.gov (United States)

    Gris-Sánchez, Itandehui; Haynes, Dionne M.; Ehrlich, Katjana; Haynes, Roger; Birks, Tim A.

    2018-04-01

    Incomplete fibre scrambling and fibre modal noise can degrade high-precision spectroscopic applications (typically high spectral resolution and high signal to noise). For example, it can be the dominating error source for exoplanet finding spectrographs, limiting the maximum measurement precision possible with such facilities. This limitation is exacerbated in the next generation of infra-red based systems, as the number of modes supported by the fibre scales inversely with the wavelength squared and more modes typically equates to better scrambling. Substantial effort has been made by major research groups in this area to improve the fibre link performance by employing non-circular fibres, double scramblers, fibre shakers, and fibre stretchers. We present an original design of a multicore fibre (MCF) terminated with multimode photonic lantern ports. It is designed to act as a relay fibre with the coupling efficiency of a multimode fibre (MMF), modal stability similar to a single-mode fibre and low loss in a wide range of wavelengths (380 nm to 860 nm). It provides phase and amplitude scrambling to achieve a stable near field and far-field output illumination pattern despite input coupling variations, and low modal noise for increased stability for high signal-to-noise applications such as precision radial velocity (PRV) science. Preliminary results are presented for a 511-core MCF and compared with current state of the art octagonal fibre.

  6. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5) with a decrease in the range corresponding to high display data values, corresponding to low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel

  7. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can either be determined directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
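
    For independent X (e.g., the estimated weight of material) and Y (the estimated U concentration), the exact variance of the product is the standard identity

    $$\operatorname{Var}(XY) = \mu_X^2\,\sigma_Y^2 + \mu_Y^2\,\sigma_X^2 + \sigma_X^2\,\sigma_Y^2,$$

    where the μ and σ² are the means and variances of the two factors; the cross term σ_X²σ_Y² is what a naive first-order error propagation omits.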

  8. Accounting for non-stationary variance in geostatistical mapping of soil properties

    NARCIS (Netherlands)

    Wadoux, Alexandre M.J.C.; Brus, Dick J.; Heuvelink, Gerard B.M.

    2018-01-01

    Simple and ordinary kriging assume a constant mean and variance of the soil variable of interest. This assumption is often implausible because the mean and/or variance are linked to terrain attributes, parent material or other soil forming factors. In kriging with external drift (KED)

  9. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as a test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, a consequence of the sparse distribution of data points in this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. After elimination of outliers, the RMS of this type of error is also reduced, by 19.5% within a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the
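
    Although the paper's formula is specific to LSC, the flavor of a direct CVE vector is familiar from ordinary least squares, where leave-one-out residuals need no refitting. A sketch of that classical identity (an analogy, not the paper's estimator):

```python
import numpy as np

def loo_cv_errors(X, y):
    """Leave-one-out residuals for OLS via e_cv_i = e_i / (1 - h_ii),
    where h_ii are the diagonal entries of the hat matrix."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat (projection) matrix
    return residuals / (1.0 - np.diag(H))

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=30)
print(np.round(loo_cv_errors(X, y)[:5], 3))
```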

  10. Interpretation of the depths of maximum of extensive air showers measured by the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Abreu, Pedro; et al.

    2013-02-01

    To interpret the mean depth of cosmic ray air shower maximum and its dispersion, we parametrize those two observables as functions of the first two moments of the ln A distribution. We examine the goodness of this simple method through simulations of test mass distributions. The application of the parameterization to Pierre Auger Observatory data allows one to study the energy dependence of the mean ln A and of its variance under the assumption of selected hadronic interaction models. We discuss possible implications of these dependences in terms of interaction models and astrophysical cosmic ray sources.

  11. The efficiency of the crude oil markets: Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie, E-mail: acharles@audencia.co [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier, E-mail: olivier.darne@univ-nantes.f [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market appears inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable.
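
    The core statistic behind these tests is simple. A minimal sketch of a Lo-MacKinlay-style variance ratio (the ranks/signs and wild-bootstrap refinements cited above build on it):

```python
import numpy as np

def variance_ratio(prices, q):
    """Var of q-period log returns over q times the Var of 1-period log
    returns; values near 1 are consistent with a random walk."""
    logp = np.log(np.asarray(prices, dtype=float))
    r1 = np.diff(logp)            # 1-period log returns
    rq = logp[q:] - logp[:-q]     # overlapping q-period log returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

rng = np.random.default_rng(3)
prices = np.exp(np.cumsum(rng.normal(0, 0.01, size=2000)))
print(variance_ratio(prices, q=5))   # ~ 1 for this simulated random walk
```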

  12. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    Charles, Amelie; Darne, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market appears inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  13. The efficiency of the crude oil markets. Evidence from variance ratio tests

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Amelie [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)

    2009-11-15

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market appears inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)

  14. Why precision?

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes

    2012-05-15

    Precision measurements together with exact theoretical calculations have led to steady progress in fundamental physics. A brief survey is given on recent developments and current achievements in the field of perturbative precision calculations in the Standard Model of the Elementary Particles and their application in current high energy collider data analyses.

  15. Why precision?

    International Nuclear Information System (INIS)

    Bluemlein, Johannes

    2012-05-01

    Precision measurements together with exact theoretical calculations have led to steady progress in fundamental physics. A brief survey is given on recent developments and current achievements in the field of perturbative precision calculations in the Standard Model of the Elementary Particles and their application in current high energy collider data analyses.

  16. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression for the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and the management of production and reservoirs. We show that the power spectra of the time-series involved follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds on future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term (white-noise) variance as a result of current production objectives.
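
    The spectral decomposition of hydrograph variance rests on Parseval's relation: the variance of the demeaned discharge series equals the sum of its power spectrum, so variance can be attributed to frequency bands. A minimal sketch on a synthetic hydrograph (illustrative, not the Dalälven data):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(4096)                                     # days
q = 10 + np.sin(2 * np.pi * t / 365) + 0.3 * rng.normal(size=t.size)

x = q - q.mean()
psd = np.abs(np.fft.rfft(x))**2 / x.size**2
psd[1:-1] *= 2                           # fold negative frequencies (real signal)
freqs = np.fft.rfftfreq(x.size, d=1.0)   # cycles per day

print(np.isclose(psd.sum(), x.var()))        # Parseval check
print(psd[freqs < 1/30].sum() / x.var())     # variance share slower than ~monthly
```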

  17. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    Science.gov (United States)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of an important geotechnical parameter, the compression modulus Es, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Individual CPT soundings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, by numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point is calculated within a Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT soundings under a normal distribution and simulated probability density curves based on maximum entropy theory are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization of a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision

  18. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course

  19. Calculation of Monte Carlo importance functions for use in nuclear-well logging calculations

    International Nuclear Information System (INIS)

    Soran, P.D.; McKeon, D.C.; Booth, T.E.

    1989-07-01

    Importance sampling is essential to the timely solution of Monte Carlo nuclear-logging computer simulations. Achieving minimum variance (maximum precision) of a response in minimum computation time is one criterion for the choice of an importance function. Various methods for calculating importance functions will be presented, new methods investigated, and comparisons with porosity and density tools will be shown. 5 refs., 1 tab
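
    The "minimum variance in minimum time" criterion is usually condensed into the Monte Carlo figure of merit. A sketch with hypothetical numbers:

```python
# FOM = 1 / (R^2 * T): R is the relative error of the tally, T the run
# time. A better importance function yields a higher FOM even if it
# spends more time per history.
def figure_of_merit(rel_error, minutes):
    return 1.0 / (rel_error**2 * minutes)

print(figure_of_merit(0.05, 30.0))   # baseline sampling (hypothetical)
print(figure_of_merit(0.02, 45.0))   # better importance function, higher FOM
```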

  20. Precision mechatronics based on high-precision measuring and positioning systems and machines

    Science.gov (United States)

    Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert

    2007-06-01

    Precision mechatronics is defined in the paper as the science and engineering of a new generation of high precision systems and machines. Nanomeasuring and nanopositioning engineering represent important fields of precision mechatronics. Nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with the smallest possible uncertainties is discussed. The integration of several optical and tactile nanoprobes makes the 3D-nanopositioning machine suitable for various tasks, such as long-range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, and free-form surface measurement, as well as measurement of microoptics, precision molds, microgears, ring gauges and small holes.

  1. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: Maximizing CPT along the mean–variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  2. High-precision calculation of the strange nucleon electromagnetic form factors

    Energy Technology Data Exchange (ETDEWEB)

    Green, Jeremy [Johannes Gutenberg Univ., Mainz (Germany); Meinel, Stefan [Univ. of Arizona, Tucson, AZ (United States); Brookhaven National Lab. (BNL), Upton, NY (United States); Engelhardt, Michael G. [New Mexico State Univ., Las Cruces, NM (United States); Krieg, Stefan [Bergische Univ., Wuppertal (Germany); Julich Supercomputing Centre, Julich (Germany); Laeuchli, Jesse [College of William and Mary, Williamsburg, VA (United States); Negele, John W. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Orginos, Kostas [College of William and Mary, Williamsburg, VA (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Pochinsky, Andrew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Syritsyn, Sergey [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2015-08-26

    We report a direct lattice QCD calculation of the strange nucleon electromagnetic form factors G_E^s and G_M^s in the kinematic range 0 ≤ Q² ≤ 1.2 GeV². For the first time, both G_E^s and G_M^s are shown to be nonzero with high significance. This work uses closer-to-physical lattice parameters than previous calculations, and achieves an unprecedented statistical precision by implementing a recently proposed variance reduction technique called hierarchical probing. We perform model-independent fits of the form factor shapes using the z-expansion and determine the strange electric and magnetic radii and magnetic moment. Finally, we compare our results to parity-violating electron-proton scattering data and to other theoretical studies.

  3. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  4. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    Science.gov (United States)

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases, yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and a compiled programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA, for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output, can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
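
    The antithetic-variates trick evaluated in the study pairs each uniform draw u with its mirror 1-u; for a monotone model the pair is negatively correlated and the paired average has lower variance. A minimal sketch with a stand-in model:

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda u: np.exp(u)      # stand-in for a monotone simulation model

u = rng.uniform(size=100_000)
plain = f(u)
paired = 0.5 * (f(u) + f(1.0 - u))   # antithetic pairs

print(plain.var(), paired.var())     # the paired estimator's variance is far smaller
```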

  5. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the

  6. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  7. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  8. Investigation of Reduction of the Uncertainty of Monte Carlo Dose Calculations in Oncor® Clinical Linear Accelerator Simulation Using the DBS Variance Reduction Technique in Monte Carlo Code BEAMnrc

    Directory of Open Access Journals (Sweden)

    Amin Asadi

    2017-10-01

    Full Text Available Purpose: To study the benefits of the Directional Bremsstrahlung Splitting (DBS) dose variance reduction technique in the BEAMnrc Monte Carlo (MC) code for the Oncor® linac at 6 MV and 18 MV energies. Materials and Methods: An MC model of the Oncor® linac was built using the BEAMnrc MC code and verified against measured data for 6 MV and 18 MV energies at various field sizes. The Oncor® machine was then modeled running the DBS technique, and the efficiency of total and spatial fluence for electrons and photons, together with the efficiency of dose variance reduction of MC calculations for the PDD on the central beam axis and the lateral dose profile across the nominal field, was measured and compared. Results: With the DBS technique applied, the total fluence of electrons and photons increased by factors of 626.8 and 983.4 (6 MV), and 285.6 and 737.8 (18 MV), respectively; the spatial fluence of electrons and photons improved by 308.6±1.35% and 480.38±0.43% (6 MV), and 153±0.9% and 462.6±0.27% (18 MV). Moreover, by running the DBS technique, the efficiency of dose variance reduction for PDD MC dose calculations before and after the maximum dose point was enhanced by 187.8±0.68% and 184.6±0.65% (6 MV), and 156±0.43% and 153±0.37% (18 MV), respectively, and the efficiency of MC calculations for the lateral dose profile on the central beam axis and across the treatment field rose by 197±0.66% and 214.6±0.73% (6 MV), and 175±0.36% and 181.4±0.45% (18 MV). Conclusion: Applying the DBS dose variance reduction technique when modeling the Oncor® linac with the BEAMnrc MC code markedly improved the fluence of electrons and photons and therefore enhanced the efficiency of dose variance reduction for MC calculations. As a result, running DBS in other kinds of MC simulation codes may be beneficial in reducing the uncertainty of MC calculations.

  9. Is digital photography an accurate and precise method for measuring range of motion of the hip and knee?

    Science.gov (United States)

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2017-09-07

    Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall, digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.

  10. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    Science.gov (United States)

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there still lack rules that have a theoretical basis, and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other one relying on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.

  11. Variance risk premia in CO_2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

    The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO_2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO_2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and Emission Reduction Units). The research paper gives recommendations to regulatory bodies on how to most effectively cap overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO_2 assets but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for Emission Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regard to gaining investors' confidence in the market are reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO_2 markets. •Clear policy implications for regulators to most effectively cap the overall CO_2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO_2 asset returns by using variance risk premia.

  12. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from normalized gravity gradient data. The algorithm uses numerical first horizontal derivatives computed from the observed gravity anomaly, with filters of successive window lengths, to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated in a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and to two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
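
    The minimum-variance selection logic can be sketched as a simple grid search. In the Python fragment below the per-window depth formula is a made-up placeholder (the paper's least-squares expressions are more involved); only the argmin-of-variance criterion is the point.

      import numpy as np

      def depth_estimates(profile_windows, dip):
          # hypothetical per-window least-squares depth for a given dip angle
          return np.array([np.mean(w) / max(np.sin(dip), 1e-6) for w in profile_windows])

      # toy horizontal-gradient windows (five profiles of noisy data)
      windows = [np.random.default_rng(i).normal(2.0, 0.1, 50) for i in range(5)]
      dips = np.deg2rad(np.arange(10, 91, 5))

      # variance of the window-wise depths for each candidate dip angle
      variances = [np.var(depth_estimates(windows, d)) for d in dips]

      best = dips[int(np.argmin(variances))]   # dip with most consistent depths
      print("selected dip:", np.rad2deg(best), "deg")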

  13. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enables the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be identified using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the rest of the poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms and thus enabling stable measurement in variance pose for each individual.

  14. Variance-to-mean method generalized by linear difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji

    1998-01-01

    The conventional variance-to-mean method (Feynman-α method) suffers seriously from divergence of the variance under transient conditions such as a reactor power drift. Strictly speaking, then, the use of the Feynman-α method is restricted to a steady state. To apply the method to more practical uses, it is desirable to overcome this kind of difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that a higher-order filter becomes necessary with increasing rate of power variation
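
    A minimal numerical illustration of why the filtering helps, using toy Poisson counts with an imposed drift. These are not the authors' formulae (which generalize to higher-order filters and include the gate-width dependence); the sketch only shows the Feynman-Y statistic inflating under drift and a first-order difference filter suppressing it.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      drift = np.linspace(100, 140, n)   # slow power drift in the mean count rate
      counts = rng.poisson(drift)        # gated neutron counts (toy)

      def feynman_y(c):
          # Feynman-Y: variance-to-mean ratio minus one (zero for pure Poisson)
          return np.var(c, ddof=1) / np.mean(c) - 1.0

      y_raw = feynman_y(counts)          # inflated by the drift

      d1 = np.diff(counts, n=1)          # first-order difference filter
      # Var(c_i - c_{i-1}) = 2*Var(c) for uncorrelated counts, hence the 1/2
      y_filt = np.var(d1, ddof=1) / (2.0 * np.mean(counts)) - 1.0

      print(y_raw, y_filt)               # y_filt stays near 0 despite the drift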

  15. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per

    2012-01-01

    BACKGROUND: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome), for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related traits. A model with a common prior distribution for the marker allele substitution effects, with the hyperparameters of this prior distribution estimated from the progeny means data, was used. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level and region-wise. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. CONCLUSIONS: The results show that it is possible to estimate, genome-wide and region-wise, genomic (co)variances...

  16. Maximum Diameter Measurements of Aortic Aneurysms on Axial CT Images After Endovascular Aneurysm Repair: Sufficient for Follow-up?

    International Nuclear Information System (INIS)

    Baumueller, Stephan; Nguyen, Thi Dan Linh; Goetti, Robert Paul; Lachat, Mario; Seifert, Burkhardt; Pfammatter, Thomas; Frauenfelder, Thomas

    2011-01-01

    Purpose: To assess the accuracy of maximum diameter measurements of aortic aneurysms after endovascular aneurysm repair (EVAR) on axial computed tomographic (CT) images in comparison to maximum diameter measurements perpendicular to the intravascular centerline for follow-up, by using three-dimensional (3D) volume measurements as the reference standard. Materials and Methods: Forty-nine consecutive patients (73 ± 7.5 years, range 51–88 years) who underwent EVAR of an infrarenal aortic aneurysm were retrospectively included. Two blinded readers twice independently measured the maximum aneurysm diameter on axial CT images performed at discharge, and at 1 and 2 years after intervention. The maximum diameter perpendicular to the centerline was automatically measured. Volumes of the aortic aneurysms were calculated by dedicated semiautomated 3D segmentation software (3surgery, 3mensio, the Netherlands). Changes in diameter of 0.5 cm and in volume of 10% were considered clinically significant. Intra- and interobserver agreements were calculated by intraclass correlations (ICC) in a random effects analysis of variance. The two unidimensional measurement methods were correlated to the reference standard. Results: Intra- and interobserver agreements for maximum aneurysm diameter measurements were excellent (ICC = 0.98 and ICC = 0.96, respectively). There was an excellent correlation between maximum aneurysm diameters measured on axial CT images and 3D volume measurements (r = 0.93, P < 0.001) as well as between maximum diameter measurements perpendicular to the centerline and 3D volume measurements (r = 0.93, P < 0.001). Conclusion: Measurements of maximum aneurysm diameters on axial CT images are an accurate, reliable, and robust method for follow-up after EVAR and can be used in daily routine.

  17. A maximum-likelihood reconstruction algorithm for tomographic gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Estep, R.J.; Cole, R.A.; Sheppard, G.A.

    1994-01-01

    A new tomographic reconstruction algorithm for nondestructive assay with high resolution gamma-ray spectroscopy (HRGS) is presented. The reconstruction problem is formulated using a maximum-likelihood approach in which the statistical structure of both the gross and continuum measurements used to determine the full-energy response in HRGS is precisely modeled. An accelerated expectation-maximization algorithm is used to determine the optimal solution. The algorithm is applied to safeguards and environmental assays of large samples (for example, 55-gal. drums) in which high continuum levels caused by Compton scattering are routinely encountered. Details of the implementation of the algorithm and a comparative study of the algorithm's performance are presented
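
    For readers unfamiliar with the expectation-maximization update that the paper accelerates, here is a generic MLEM iteration in Python on Poisson toy data. The paper's likelihood additionally models the gross and continuum counts behind the full-energy response; this sketch shows only the standard multiplicative update.

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.random((40, 10))            # system (response) matrix, toy
      x_true = rng.random(10) * 5.0
      y = rng.poisson(A @ x_true)         # measured counts

      x = np.ones(10)                     # nonnegative initial image
      sens = A.sum(axis=0)                # sensitivity (column sums of A)
      for _ in range(200):
          proj = A @ x                    # forward projection
          ratio = y / np.maximum(proj, 1e-12)
          x *= (A.T @ ratio) / sens       # multiplicative EM update, keeps x >= 0

      print(np.round(x, 2))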

  18. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    Science.gov (United States)

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
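
    The residual approach is easy to state concretely. The sketch below uses synthetic data and illustrative variable names: it regresses memory scores on demographics and brain measures and keeps the residual as the reserve estimate. The study does this within a latent-variable framework rather than by ordinary least squares as shown here.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 244
      age = rng.normal(75, 6, n)
      education = rng.normal(12, 3, n)
      brain_vol = rng.normal(1000, 80, n)      # whole-brain volume (toy units)
      wmh = rng.normal(5, 2, n)                # white matter hyperintensity volume
      memory = (0.02 * education - 0.03 * age + 0.01 * brain_vol
                - 0.2 * wmh + rng.normal(0, 1, n))

      X = np.column_stack([np.ones(n), age, education, brain_vol, wmh])
      beta, *_ = np.linalg.lstsq(X, memory, rcond=None)
      residual_reserve = memory - X @ beta     # "residual memory variance" proxy
      print(residual_reserve[:5])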

  19. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental...... variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both, the additive genetic effects on the mean and those...... on environmental variance have an important influence upon the future economic performance of selected individuals...

  20. How to assess intra- and inter-observer agreement with quantitative PET using variance component analysis: a proposal for standardisation

    International Nuclear Information System (INIS)

    Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen; Halekoh, Ulrich; Høilund-Carlsen, Poul Flemming

    2016-01-01

    Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects the deviation of groups of measurements from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or the Bland-Altman plot. We suggest variance component analysis (VCA) to estimate the influence of errors due to single elements of a PET scan (scanner, time point, observer, etc.), to express the composite uncertainty of repeated measurements, and to obtain relevant repeatability coefficients (RCs), which have a unique relation to Bland-Altman plots. Here, we present this approach for the assessment of intra- and inter-observer variation with PET/CT, exemplified with data from two clinical studies. In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed twice by the same observer to study intra-observer agreement. In study 2, 14 patients with glioma were scanned up to five times. The resulting 49 scans were assessed by three observers to examine inter-observer agreement. Outcome variables were SUVmax in study 1 and cerebral total hemispheric glycolysis (THG) in study 2. In study 1, we found an RC of 2.46, equal to half the width of the Bland-Altman limits of agreement. In study 2, the RC for identical conditions (same scanner, patient, time point, and observer) was 2392; allowing for different scanners increased the RC to 2543. Inter-observer differences were negligible compared to differences owing to other factors; between observers 1 and 2: −10 (95 % CI: −352 to 332) and between observers 1 and 3: 28 (95 % CI: −313 to 370). VCA is an appealing approach for weighing different sources of variation against each other, summarised as RCs. The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components. The online version of this article (doi:10
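
    The link between variance components and the repeatability coefficient can be made concrete. Assuming a simple one-way patients-by-replicates layout with toy numbers (not the study data), RC = 1.96 · sqrt(2 · σ²_within), i.e. half the width of the Bland-Altman limits of agreement for the difference of two readings.

      import numpy as np

      rng = np.random.default_rng(4)
      n_pat, n_rep = 30, 2
      patient_mean = rng.normal(10, 3, n_pat)
      # toy SUVmax: between-patient variation plus within-patient measurement error
      suv = patient_mean[:, None] + rng.normal(0, 0.9, (n_pat, n_rep))

      # pooled within-patient variance component (the error term of the mixed model)
      sigma2_within = np.mean(np.var(suv, axis=1, ddof=1))
      rc = 1.96 * np.sqrt(2.0 * sigma2_within)
      print("repeatability coefficient:", round(rc, 2))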

  1. Biological Variance in Agricultural Products. Theoretical Considerations

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Konopacki, P.

    2003-01-01

    The food that we eat is uniform neither in shape or appearance nor in internal composition or content. Since technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were

  2. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

  3. Regime shifts in mean-variance efficient frontiers: some international evidence

    OpenAIRE

    Massimo Guidolin; Federica Ria

    2010-01-01

    Regime switching models have been assuming a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity po...

  4. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale, and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90%, and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
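
    A regression model of this type can be sketched in a few lines. Everything below is synthetic (placeholder coefficients and predictors); it only illustrates the log-log form such AF models typically take and the variance-explained check mentioned above.

      import numpy as np

      rng = np.random.default_rng(12)
      n = 4000
      area = 10 ** rng.uniform(0, 6, n)      # catchment area, km2
      precip = rng.uniform(200, 2500, n)     # mean annual precipitation, mm
      slope = rng.uniform(0.001, 0.3, n)     # catchment-averaged slope
      # synthetic "observed" mean annual flow with lognormal noise
      af = 1e-4 * area**0.95 * precip**1.1 * slope**0.1 * np.exp(rng.normal(0, 0.3, n))

      # power-law regression fitted in log-log space
      X = np.column_stack([np.ones(n), np.log(area), np.log(precip), np.log(slope)])
      beta, *_ = np.linalg.lstsq(X, np.log(af), rcond=None)
      pred = X @ beta
      r2 = 1 - np.var(np.log(af) - pred) / np.var(np.log(af))
      print("log-log R^2:", round(r2, 3))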

  5. An apparent contradiction: increasing variability to achieve greater precision?

    Science.gov (United States)

    Rosenblatt, Noah J; Hurt, Christopher P; Latash, Mark L; Grabiner, Mark D

    2014-02-01

    To understand the relationship between variability of foot placement in the frontal plane and stability of gait patterns, we explored how constraining mediolateral foot placement during walking affects the structure of kinematic variance in the lower-limb configuration space during the swing phase of gait. Ten young subjects walked under three conditions: (1) unconstrained (normal walking), (2) constrained (walking overground with visual guides for foot placement to achieve the measured unconstrained step width) and, (3) beam (walking on elevated beams spaced to achieve the measured unconstrained step width). The uncontrolled manifold analysis of the joint configuration variance was used to quantify two variance components, one that did not affect the mediolateral trajectory of the foot in the frontal plane ("good variance") and one that affected this trajectory ("bad variance"). Based on recent studies, we hypothesized that across conditions (1) the index of the synergy stabilizing the mediolateral trajectory of the foot (the normalized difference between the "good variance" and "bad variance") would systematically increase and (2) the changes in the synergy index would be associated with a disproportionate increase in the "good variance." Both hypotheses were confirmed. We conclude that an increase in the "good variance" component of the joint configuration variance may be an effective method of ensuring high stability of gait patterns during conditions requiring increased control of foot placement, particularly if a postural threat is present. Ultimately, designing interventions that encourage a larger amount of "good variance" may be a promising method of improving stability of gait patterns in populations such as older adults and neurological patients.
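
    The "good"/"bad" variance split comes from the uncontrolled manifold (UCM) decomposition, which can be sketched as follows. The Jacobian and joint data here are toy values (the study uses the actual leg geometry): variance in the null space of the Jacobian leaves the foot trajectory unchanged ("good"), variance along it moves the foot ("bad").

      import numpy as np

      rng = np.random.default_rng(5)
      J = np.array([[0.4, -0.3, 0.2, 0.1]])   # 1 task variable, 4 joints (toy)
      # joint-configuration deviations across 200 swing cycles (toy)
      dq = rng.normal(0, 1, (200, 4)) * [1.0, 0.8, 0.6, 0.4]

      # orthonormal bases of the task direction and of the null space of J
      _, _, vt = np.linalg.svd(J)
      task_dir, null_basis = vt[:1], vt[1:]

      proj_null = dq @ null_basis.T            # components that do not move the foot
      proj_task = dq @ task_dir.T              # components that do move the foot

      n_dof, d_task = 4, 1
      v_ucm = np.sum(np.var(proj_null, axis=0, ddof=1)) / (n_dof - d_task)  # "good"
      v_ort = np.sum(np.var(proj_task, axis=0, ddof=1)) / d_task            # "bad"
      v_tot = np.sum(np.var(dq, axis=0, ddof=1)) / n_dof

      dv = (v_ucm - v_ort) / v_tot             # synergy index
      print("synergy index:", round(dv, 3))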

  6. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk

  7. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
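
    The recursive/direct distinction is easy to pin down in code. Below is a toy AR(1) example with assumed coefficients and plain least-squares fits, showing both strategies for a horizon of five steps; it is an illustration of the two concepts, not the paper's experimental setup.

      import numpy as np

      rng = np.random.default_rng(6)
      y = np.zeros(300)
      for t in range(1, 300):
          y[t] = 0.8 * y[t - 1] + rng.normal(0, 0.5)   # AR(1) series

      H = 5

      # Recursive strategy: fit one one-step model y[t] ~ y[t-1], iterate it
      a1, b1 = np.polyfit(y[:-1], y[1:], 1)
      f, rec = y[-1], []
      for _ in range(H):
          f = a1 * f + b1
          rec.append(f)

      # Direct strategy: fit a separate model y[t+h] ~ y[t] for each horizon h
      direct = []
      for h in range(1, H + 1):
          ah, bh = np.polyfit(y[:-h], y[h:], 1)
          direct.append(ah * y[-1] + bh)

      print(np.round(rec, 3), np.round(direct, 3))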

  8. Maximum spreading of liquid drop on various substrates with different wettabilities

    Science.gov (United States)

    Choudhury, Raihan; Choi, Junho; Yang, Sangsun; Kim, Yong-Jin; Lee, Donggeun

    2017-09-01

    This paper describes a novel model developed for a priori prediction of the maximal spread of a liquid drop on a surface. As a first step, a series of experiments was conducted under precise control of the initial drop diameter, its falling height, and the roughness and wettability of the dry surfaces. The transient liquid spreading was recorded by a high-speed camera to obtain its maximum spreading under various conditions. Eight preexisting models were tested for accurate prediction of the maximum spread; however, in comparison with our experimental data, only one of the model predictions was satisfactory. A comparative scaling analysis of the literature models was conducted to elucidate the condition-dependent prediction characteristics of the models. The conditioned bias in the predictions was mainly attributed to inappropriate formulations of the viscous dissipation or the interfacial energy of liquid on the surface. Hence, a novel model based on the energy balance during liquid impact was developed to overcome the limitations of the previous models. As a result, the present model was quite successful in predicting the liquid spread in all the conditions.

  9. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors may follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning, including the case of Support Vector Machines (SVMs), and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  10. Constraints on pulsar masses from the maximum observed glitch

    Science.gov (United States)

    Pizzochero, P. M.; Antonelli, M.; Haskell, B.; Seveso, S.

    2017-07-01

    Neutron stars are unique cosmic laboratories in which fundamental physics can be probed in extreme conditions not accessible to terrestrial experiments. In particular, the precise timing of rotating magnetized neutron stars (pulsars) reveals sudden jumps in rotational frequency in these otherwise steadily spinning-down objects. These 'glitches' are thought to be due to the presence of a superfluid component in the star, and offer a unique glimpse into the interior physics of neutron stars. In this paper we propose an innovative method to constrain the mass of glitching pulsars, using observations of the maximum glitch observed in a star, together with state-of-the-art microphysical models of the pinning interaction between superfluid vortices and ions in the crust. We study the properties of a physically consistent angular momentum reservoir of pinned vorticity, and we find a general inverse relation between the size of the maximum glitch and the pulsar mass. We are then able to estimate the mass of all the observed glitchers that have displayed at least two large events. Our procedure will allow current and future observations of glitching pulsars to constrain not only the physics of glitch models but also the superfluid properties of dense hadronic matter in neutron star interiors.

  11. Studying Variance in the Galactic Ultra-compact Binary Population

    Science.gov (United States)

    Larson, Shane; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  12. Variance estimates for transport in stochastic media by means of the master equation

    International Nuclear Information System (INIS)

    Pautz, S. D.; Franke, B. C.; Prinja, A. K.

    2013-01-01

    The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models. (authors)

  13. A fast and precise chemiluminescence ozone detector for eddy flux and airborne application

    Directory of Open Access Journals (Sweden)

    A. Zahn

    2012-02-01

    Full Text Available A commercially available dry chemiluminescence (CI) instrument for fast and precise measurement of ozone (O3) is specified. The sensitivity is ~9000 counts s^−1 per ppbv of ozone. Its precision is entirely determined by the number of photons reaching the detector (a photomultiplier), i.e. it is quantum-noise limited. The relative precision (ΔO3/O3, in %) thus follows Poisson statistics and scales with the square root of the measurement frequency f and with the inverse square root of the O3 mixing ratio: ΔO3/O3 ∝ f^0.5 · O3^−0.5. At typical O3 mixing ratios between 10 and 100 ppbv (and 1 bar), the precision is 0.3–1.0% at f = 10 Hz. The maximum measurement frequency is 50 Hz. The mechanical and electronic set-up as well as the instrument performance is described. Recommendations on the adequate inlet tube configuration (inlet tube length, sampling flow) and on the way of calibration at stationary ground-based platforms and onboard aircraft are given.

  14. Spectrum unfolding in X-ray spectrometry using the maximum entropy method

    International Nuclear Information System (INIS)

    Fernandez, Jorge E.; Scot, Viviana; Di Giulio, Eugenio

    2014-01-01

    The solution of the unfolding problem is an ever-present issue in X-ray spectrometry. The maximum entropy technique solves this problem by taking advantage of some known a priori physical information and by ensuring an outcome with only positive values. This method is implemented in MAXED (MAXimum Entropy Deconvolution), a software code contained in the package UMG (Unfolding with MAXED and GRAVEL) developed at PTB and distributed by NEA Data Bank. This package contains also the code GRAVEL (used to estimate the precision of the solution). This article introduces the new code UMESTRAT (Unfolding Maximum Entropy STRATegy) which applies a semi-automatic strategy to solve the unfolding problem by using a suitable combination of MAXED and GRAVEL for applications in X-ray spectrometry. Some examples of the use of UMESTRAT are shown, demonstrating its capability to remove detector artifacts from the measured spectrum consistently with the model used for the detector response function (DRF). - Highlights: ► A new strategy to solve the unfolding problem in X-ray spectrometry is presented. ► The presented strategy uses a suitable combination of the codes MAXED and GRAVEL. ► The applied strategy provides additional information on the Detector Response Function. ► The code UMESTRAT is developed to apply this new strategy in a semi-automatic mode

  15. Field Plot Techniques for Black Sigatoka Evaluation in East African Highland Bananas

    Directory of Open Access Journals (Sweden)

    Okoro, JU.

    1997-01-01

    Full Text Available Number of plants per experimental unit and number of replications for the efficient and precise assessment of black sigatoka leaf spot disease caused by Mycosphaerella fijiensis in East African Highland bananas were determined. Two representative cultivars were used. Host response to black sigatoka infection was measured by recording the youngest leaf with necrotic spots. The number of plants per experimental unit was determined using the methods of maximum curvature and comparison of variances, while the number of replications was estimated by Hatheway's method. The optimum experimental plot size was 3 plants (18 m2) for the beer banana cultivar 'Igitsiri', and 30 plants (180 m2) for the cooking banana cultivar 'Igisahira Gisanzwe', using the comparison of variances method. However, the optimum plot size was 15 plants (90 m2) for both cultivars using the method of maximum curvature. The latter statistical method was preferred because of the low precision of the estimates in the former method. Unreplicated trials with plots of 15 plants could be adequate to assess black sigatoka response in East African bananas if uniform disease pressure exists.

  16. A Fixed-Precision Sequential Sampling Plan for the Potato Tuberworm Moth, Phthorimaea operculella Zeller (Lepidoptera: Gelechiidae), on Potato Cultivars.

    Science.gov (United States)

    Shahbi, M; Rajabpour, A

    2017-08-01

    Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated by Taylor's power law and Iwao's patchiness regression. The results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness regression, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed with Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 5083 to 1054 and 5100 to 1050 leaves for the Arinda® and Sante® cultivars, respectively. The optimum sample sizes for the two cultivars, which differ in resistance level, were therefore not significantly different. According to the calculated stop lines, sampling must be continued until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at the precision levels of 0.25 or 0.1, respectively. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sampling Plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of pest density with minimal effort.
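
    Green's fixed-precision plan turns Taylor's a and b into a stop line for the cumulative count. The sketch below uses placeholder parameters, not the values estimated in this study; the stop line is the standard Green (1970) form T_n = (D² · n^(b−1) / a)^(1/(b−2)).

      import numpy as np

      a, b = 2.0, 1.4          # Taylor's power law coefficients (assumed)
      D = 0.25                 # desired precision (SE/mean)

      def stop_line(n):
          # cumulative-count threshold after n sample units
          return (D**2 * n**(b - 1) / a) ** (1.0 / (b - 2))

      rng = np.random.default_rng(7)
      cum, n = 0, 0
      while True:
          n += 1
          cum += rng.poisson(3)          # toy count from the n-th leaf sample
          if cum >= stop_line(n):        # stop once the line is crossed
              break
      print("stop after", n, "samples with", cum, "individuals")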

  17. Markov switching mean-variance frontier dynamics: theory and international evidence

    OpenAIRE

    M. Guidolin; F. Ria

    2010-01-01

    It is well-known that regime switching models are able to capture the presence of rich non-linear patterns in the joint distribution of asset returns. After reviewing key concepts and technical issues related to specifying, estimating, and using multivariate Markov switching models in financial applications, in this paper we map the presence of regimes in means, variances, and covariances of asset returns into explicit dynamics of the Markowitz mean-variance frontier. In particular, we show b...

  18. Practical precision measurement

    International Nuclear Information System (INIS)

    Kwak, Ho Chan; Lee, Hui Jun

    1999-01-01

    This book introduces the basics of precision measurement: measurement of length, precision measurement of minor diameters, measurement of angles, measurement of surface roughness, three-dimensional measurement, measurement of locations and shapes, measurement of screws, gear testing, cutting tool testing, rolling bearing testing, and digital measurement. It covers height gauges, how to test surface roughness, measurement of flatness and straightness, external and internal thread testing, gear tooth measurement, milling cutters, taps, rotation precision measurement, and optical transducers.

  19. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
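
    The elevation-variance grid at the heart of this representation can be illustrated with a per-cell running mean/variance (Welford) update. This toy sketch is not the Gamma-SLAM code, only the data-structure idea: each cell accumulates the elevation samples that fall in it.

      import numpy as np

      H, W = 100, 100
      count = np.zeros((H, W))
      mean = np.zeros((H, W))
      m2 = np.zeros((H, W))          # running sum of squared deviations

      def update_cell(i, j, z):
          # Welford's online update for the cell's mean and variance
          count[i, j] += 1
          delta = z - mean[i, j]
          mean[i, j] += delta / count[i, j]
          m2[i, j] += delta * (z - mean[i, j])

      rng = np.random.default_rng(8)
      for _ in range(500):                       # toy stereo elevation samples
          i, j = rng.integers(0, H), rng.integers(0, W)
          update_cell(i, j, rng.normal(1.0, 0.2))

      var = np.where(count > 1, m2 / np.maximum(count - 1, 1), 0.0)
      print("max cell variance:", var.max())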

  20. Temporal variance reverses the impact of high mean intensity of stress in climate change experiments.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena

    2006-10-01

    Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.

  1. Genetic and environmental variance in content dimensions of the MMPI.

    Science.gov (United States)

    Rose, R J

    1988-08-01

    To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than those of twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrixes of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.

  2. Relation between Peak Power Output in Sprint Cycling and Maximum Voluntary Isometric Torque Production.

    Science.gov (United States)

    Kordi, Mehdi; Goodall, Stuart; Barratt, Paul; Rowley, Nicola; Leeder, Jonathan; Howatson, Glyn

    2017-08-01

    From a cycling paradigm, little has been done to understand the relationships between the maximal isometric strength of different single-joint lower-body muscle groups and peak power output (PPO), their ability to predict PPO, and how they compare with an isometric cycling-specific task. The aim of this study was to establish relationships between maximal voluntary torque production from isometric single-joint and cycling-specific tasks and to assess their ability to predict PPO. Twenty male trained cyclists participated in this study. Peak torque was measured by performing maximum voluntary contractions (MVC) of the knee extensors, knee flexors, dorsiflexors and hip extensors, whilst instrumented cranks measured isometric peak torque from MVC with participants in their cycling-specific position (ISOCYC). A stepwise regression showed that peak torque of the knee extensors was the only significant predictor of PPO when using SJD, accounting for 47% of the variance. However, when compared to ISOCYC, the only significant predictor of PPO was ISOCYC, which accounted for 77% of the variance. This suggests that peak torque of the knee extensors was the best single-joint predictor of PPO in sprint cycling, and that a stronger prediction can be made from a task-specific isometric task. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    Science.gov (United States)

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.

  4. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.

  5. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  6. Unique electron polarimeter analyzing power comparison and precision spin-based energy measurement

    International Nuclear Information System (INIS)

    Joseph Grames; Charles Sinclair; Joseph Mitchell; Eugene Chudakov; Howard Fenker; Arne Freyberger; Douglas Higinbotham; Poelker, B.; Michael Steigerwald; Michael Tiefenback; Christian Cavata; Stephanie Escoffier; Frederic Marie; Thierry Pussieux; Pascal Vernin; Samuel Danagoulian; Kahanawita Dharmawardane; Renee Fatemi; Kyungseon Joo; Markus Zeier; Viktor Gorbenko; Rakhsha Nasseripour; Brian Raue; Riad Suleiman; Benedikt Zihlmann

    2004-01-01

    Precision measurements of the relative analyzing powers of five electron beam polarimeters, based on Compton, Moller, and Mott scattering, have been performed using the CEBAF accelerator at the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory). A Wien filter in the 100 keV beamline of the injector was used to vary the electron spin orientation exiting the injector. High statistical precision measurements of the scattering asymmetry as a function of the spin orientation were made with each polarimeter. Since each polarimeter receives beam with the same magnitude of polarization, these asymmetry measurements permit a high statistical precision comparison of the relative analyzing powers of the five polarimeters. This is the first time a precise comparison of the analyzing powers of Compton, Moller, and Mott scattering polarimeters has been made. Statistically significant disagreements among the values of the beam polarization calculated from the asymmetry measurements made with each polarimeter reveal either errors in the values of the analyzing power, or failure to correctly include all systematic effects. The measurements reported here represent a first step toward understanding the systematic effects of these electron polarimeters. Such studies are necessary to realize high absolute accuracy (ca. 1%) electron polarization measurements, as required for some parity violation measurements planned at Jefferson Laboratory. Finally, a comparison of the value of the spin orientation exiting the injector that provides maximum longitudinal polarization in each experimental hall leads to an independent and very precise (better than 10^-4) absolute measurement of the final electron beam energy

  7. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

    We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light...

  8. Relative contributions of sampling effort, measuring, and weighing to precision of larval sea lamprey biomass estimates

    Science.gov (United States)

    Slade, Jeffrey W.; Adams, Jean V.; Cuddy, Douglas W.; Neave, Fraser B.; Sullivan, W. Paul; Young, Robert J.; Fodale, Michael F.; Jones, Michael L.

    2003-01-01

    We developed two weight-length models from 231 populations of larval sea lampreys (Petromyzon marinus) collected from tributaries of the Great Lakes: Lake Ontario (21), Lake Erie (6), Lake Huron (67), Lake Michigan (76), and Lake Superior (61). Both models were mixed models, which used population as a random effect and additional environmental factors as fixed effects. We resampled weights and lengths 1,000 times from data collected in each of 14 other populations not used to develop the models, obtaining a weight and length distribution from each resampling. To test model performance, we applied the two weight-length models to the resampled length distributions and calculated the predicted mean weights. We also calculated the observed mean weight for each resampling and for each of the original 14 data sets. When the average of predicted means was compared to means from the original data in each stream, inclusion of environmental factors did not consistently improve the performance of the weight-length model. We estimated the variance associated with measures of abundance and mean weight for each of the 14 selected populations and determined that a conservative estimate of the proportional contribution to variance associated with estimating abundance accounted for 32% to 95% of the variance (mean = 66%). Variability in the biomass estimate appears more affected by variability in estimating abundance than in converting length to weight. Hence, efforts to improve the precision of biomass estimates would be aided most by reducing the variability associated with estimating abundance.

  9. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts...

  10. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  11. On Mean-Variance Hedging of Bond Options with Stochastic Risk Premium Factor

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Kumar, Suresh K.

    2014-01-01

    We consider the mean-variance hedging problem for pricing bond options using the yield curve as the observation. The model considered contains infinite-dimensional noise sources with a stochastically varying risk premium. Hence our model is incomplete. We consider mean-variance hedging under the

  12. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    Science.gov (United States)

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
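
    As a reference point for the fitting problem described above, here is a plain maximum-likelihood cumulative-Gaussian fit in Python (assuming SciPy is available) on simulated method-of-constant-stimuli data. The paper's bias-reduced variant adds a correction for adaptively sampled data that is not shown here.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(9)
      stim = np.repeat(np.linspace(-3, 3, 13), 20)        # stimulus levels
      p_true = norm.cdf(stim, loc=0.3, scale=1.2)         # true psychometric function
      resp = rng.random(stim.size) < p_true               # binary responses

      def nll(params):
          # negative log-likelihood of a cumulative-Gaussian psychometric function
          mu, log_sigma = params
          p = norm.cdf(stim, loc=mu, scale=np.exp(log_sigma))
          p = np.clip(p, 1e-9, 1 - 1e-9)
          return -np.sum(resp * np.log(p) + (~resp) * np.log(1 - p))

      fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
      mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
      print(mu_hat, sigma_hat)   # bias point and spread ("threshold"/"slope")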

  13. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    The definition of the uniform linear generator is given, and some of the most commonly used tests to evaluate the uniformity and the independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is then considered. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with a reduced variance are introduced
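
    One classical member of the family of reduced-variance estimators alluded to here is the antithetic-variates estimator, sketched below for W = E[e^U] with U ~ Uniform(0,1). Pairing U with 1−U preserves the mean but reduces the estimator variance when the integrand is monotone.

      import numpy as np

      f = np.exp                      # example integrand: W = E[e^U] = e - 1
      rng = np.random.default_rng(10)
      u = rng.random(50_000)

      plain = f(u)                                 # crude Monte Carlo estimator
      anti = 0.5 * (f(u) + f(1.0 - u))             # antithetic-pair estimator

      print("true:", np.e - 1)
      print("plain:", plain.mean(), "var:", plain.var())
      print("antithetic:", anti.mean(), "var:", anti.var())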

  14. Mean-variance portfolio allocation with a value at risk constraint

    OpenAIRE

    Enrique Sentana

    2001-01-01

    In this paper, I first provide a simple unifying approach to static Mean-Variance analysis and Value at Risk, which highlights their similarities and differences. Then I use it to explain how fund managers can take investment decisions that satisfy the VaR restrictions imposed on them by regulators, within the well-known Mean-Variance allocation framework. I do so by introducing a new type of line to the usual mean-standard deviation diagram, called IsoVaR, which represents all the portfolios ...

  15. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is interest in adequately quantifying the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the results highlight the relevant role played by a modelling approach for MBR that simultaneously takes biological and physical processes into account. © 2013.
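
    A minimal sketch of the quantity a variance-based method estimates: the first-order index S_i = Var(E[Y|X_i])/Var(Y), computed here by crude binning on a toy model rather than by Extended-FAST.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    x1, x2 = rng.random(n), rng.random(n)
    y = x1**2 + 0.1 * x2 + 0.5 * x1 * x2      # toy non-linear, non-additive model

    def first_order_index(x, y, bins=50):
        """Estimate Var(E[Y|X])/Var(Y) by binning X into quantile slices."""
        edges = np.quantile(x, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(bins)])
        return cond_means.var() / y.var()

    print(first_order_index(x1, y), first_order_index(x2, y))
    ```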

  16. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.

  17. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
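
    A heavily simplified numeric illustration of the idea, not the paper's analytic formula: fit the hybrid weight w in var_hat = w*s2_ens + (1 - w)*s2_clim by least squares over simulated (observation-minus-forecast, ensemble-variance) pairs, treating squared innovations as noisy samples of the true error variance.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 50_000
    s2_true = rng.gamma(shape=4.0, scale=0.25, size=n)          # true error variances
    s2_ens = s2_true * rng.gamma(shape=8.0, scale=1 / 8, size=n)  # noisy ensemble variances
    d = rng.normal(0.0, np.sqrt(s2_true))                       # innovations
    s2_clim = s2_ens.mean()                                     # climatological variance

    # model d^2 ~ s2_clim + w * (s2_ens - s2_clim); least-squares fit of w
    x = s2_ens - s2_clim
    w = np.sum(x * (d**2 - s2_clim)) / np.sum(x * x)
    print("hybrid weight:", w)
    ```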

  18. A new variance stabilizing transformation for gene expression data analysis.

    Science.gov (United States)

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
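
    A minimal sketch of the generalized logarithm, the member of the family the paper singles out; the parameterization and tuning constant c below follow one common convention, not necessarily the paper's.

    ```python
    import numpy as np

    def glog(x, c=1.0):
        """Generalized log: behaves like log(x) for large x, stays finite at 0."""
        return np.log((x + np.sqrt(x**2 + c**2)) / 2.0)

    x = np.array([0.0, 0.5, 10.0, 1000.0])
    print(glog(x))          # ~ log(x) for large x, defined at x = 0
    ```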

  19. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk

  20. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  1. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  2. Setting the renormalization scale in QCD: The principle of maximum conformality

    DEFF Research Database (Denmark)

    Brodsky, S. J.; Di Giustino, L.

    2012-01-01

    A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale mu of the running coupling alpha(s)(mu(2)). The purpose of the running coupling in any gauge theory is to sum all terms involving the beta function; in fact, when the renormalization scale is set properly, all nonconformal beta not equal 0 terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with beta = 0. The resulting scale-fixed predictions using the principle of maximum conformality (PMC) are independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale setting in the Abelian limit...

  3. Improvement in precision and trueness of quantitative XRF analysis with glass-bead method. 1

    International Nuclear Information System (INIS)

    Yamamoto, Yasuyuki; Ogasawara, Noriko; Yuhara, Yoshitaroh; Yokoyama, Yuichi

    1995-01-01

    The factors which lower the precision of a simultaneous X-ray Fluorescence (XRF) spectrometer were investigated. Especially in quantitative analyses of oxide powders with the glass-bead method, the X-ray optical characteristics of the equipment affect the precision of the X-ray intensities. In focused (curved) crystal spectrometers, the precision depends on the deviation of the actual size and position of the crystals from the theoretical designs; thus the precision differs for each crystal and each element. When the deviation is large, the dispersion of the measured X-ray intensities is larger than the statistical dispersion, even though the intensity itself remains unchanged. Moreover, waviness of the surface of glass-beads makes the height of the analyzed surface differ from the designed one. This difference changes the amount of X-rays incident on the analyzing crystal and increases the dispersion of the X-ray intensity. Considering these factors, the level of waviness must be regulated to improve the precision of existing XRF equipment. In this study, the measurement precisions of 4 simultaneous XRF spectrometers were evaluated, and the element lead (Pb-Lβ1) was found to have the lowest precision. The relative standard deviation (RSD) of measurements of 10 glass-beads for the same powder sample was 0.3% without regulation of the waviness of the analytical surface. With mechanical flattening of the glass-bead surface, the level of waviness, which is the maximum difference of heights within a glass-bead, was regulated to under 30 μm, and the RSD was 0.038%, which is almost comparable to the statistical RSD of 0.033%. (author)

  4. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    International Nuclear Information System (INIS)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS)

  5. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    Directory of Open Access Journals (Sweden)

    Neslihan Fidan Keçeci

    2016-10-01

    Full Text Available The paper compares portfolio optimization with Second-Order Stochastic Dominance (SSD) constraints to mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for optimization with SSD constraints, mean-variance, and minimum variance portfolio optimization. We have done in-sample and out-of-sample simulations for portfolios of stocks from the Dow Jones, S&P 100 and DAX indices. The considered portfolios SSD-dominate the Dow Jones, S&P 100 and DAX indices. The simulations demonstrated a superior performance of portfolios with SSD constraints versus mean-variance and minimum variance portfolios.
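
    A minimal sketch, not the PSG implementation: closed-form minimum-variance weights plus an empirical SSD check via integrated CDFs, E[(t − X)+], on simulated returns.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    returns = rng.normal(0.001, 0.02, size=(1000, 5))      # toy daily returns
    cov = np.cov(returns, rowvar=False)

    ones = np.ones(cov.shape[0])
    w_mv = np.linalg.solve(cov, ones)
    w_mv /= w_mv.sum()                                     # minimum-variance weights

    def ssd_dominates(a, b, n_grid=200):
        """True if `a` empirically SSD-dominates `b`:
        E[(t - a)^+] <= E[(t - b)^+] at every threshold t."""
        grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), n_grid)
        short_a = np.array([np.maximum(t - a, 0).mean() for t in grid])
        short_b = np.array([np.maximum(t - b, 0).mean() for t in grid])
        return bool(np.all(short_a <= short_b + 1e-12))

    port = returns @ w_mv
    index = returns.mean(axis=1)                           # equal-weight "index"
    print(w_mv, ssd_dominates(port, index))
    ```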

  6. Precision of synesthetic color matching resembles that for recollected colors rather than physical colors.

    Science.gov (United States)

    Arnold, Derek H; Wegener, Signy V; Brown, Francesca; Mattingley, Jason B

    2012-10-01

    Grapheme-color synesthesia is an atypical condition in which individuals experience sensations of color when reading printed graphemes such as letters and digits. For some grapheme-color synesthetes, seeing a printed grapheme triggers a sensation of color, but hearing the name of a grapheme does not. This dissociation allowed us to compare the precision with which synesthetes are able to match their color experiences triggered by visible graphemes with the precision of their matches for recalled colors based on the same graphemes spoken aloud. In six synesthetes, color matching for printed graphemes was as variable as matching based on recalled experiences. In a control experiment, synesthetes and age-matched controls either matched the color of a circular patch while it was visible on a screen, or they judged its color from memory after it had disappeared. Both synesthetes and controls were more variable when matching from memory, and the variance of synesthetes' recalled color judgments matched that associated with their synesthetic judgments for visible graphemes in the first experiment. Results suggest that synesthetic experiences of color triggered by achromatic graphemes are analogous to recollections of color.

  7. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    Science.gov (United States)

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
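
    A minimal sketch of a Box-Cox transformation chosen by maximum likelihood with SciPy; this is loosely analogous to (not the same as) using the posterior mode of the Box-Cox parameter, and the skewed data are simulated stand-ins for litter size.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    litter_size = rng.gamma(shape=9.0, scale=1.2, size=500)   # right-skewed data

    transformed, lam = stats.boxcox(litter_size)              # ML choice of lambda
    print("lambda:", lam)
    print("skewness before/after:",
          stats.skew(litter_size), stats.skew(transformed))
    ```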

  8. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    Science.gov (United States)

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation maximization REML algorithm implemented via double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10(-3) and 4.17×10(-3) for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for herd × test-day effect and between 0.55 and 0.97 for permanent environmental effect. Therefore, nongenetic effects also

  9. The role of respondents’ comfort for variance in stated choice surveys

    DEFF Research Database (Denmark)

    Emang, Diana; Lundhede, Thomas; Thorsen, Bo Jellesmark

    2017-01-01

    Preference elicitation among outdoor recreational users is subject to measurement errors that depend, in part, on survey planning. This study uses data from a choice experiment survey on recreational SCUBA diving to investigate whether self-reported information on respondents' comfort when they complete surveys correlates with the error variance in stated choice models of their responses. Comfort-related variables are included in the scale functions of the scaled multinomial logit models. The hypothesis was that higher comfort reduces error variance in answers, as revealed by a higher scale parameter and vice versa. Information on, e.g., sleep and time since eating (higher comfort) correlated with scale heterogeneity, and produced lower error variance when controlled for in the model. That respondents' comfort may influence choice behavior suggests that knowledge of the respondents' activity...

  10. Fluctuations in atomic collision cascades - variance and correlations in sputtering and defect distributions

    International Nuclear Information System (INIS)

    Chakarova, R.; Pazsit, I.

    1997-01-01

    Fluctuation phenomena are investigated in various collision processes, i.e. ion bombardment induced sputtering and defect creation. The mean and variance of the sputter yield and the vacancies and interstitials are calculated as functions of the ion energy and the ion-target mass ratio. It is found that the relative variance of the defects in half-spaces and the relative variance of the sputter yield are not monotonic functions of the mass ratio. Two-point correlation functions in the depth variable, as well as sputtered energy, are also calculated. These functions help interpreting the behaviour of the relative variances of the integrated quantities, as well as understanding the cascade dynamics. All calculations are based on Lindhard power-law cross sections and use a binary collision Monte Carlo algorithm. 30 refs, 25 figs

  11. Fluctuations in atomic collision cascades - variance and correlations in sputtering and defect distributions

    Energy Technology Data Exchange (ETDEWEB)

    Chakarova, R.; Pazsit, I.

    1997-01-01

    Fluctuation phenomena are investigated in various collision processes, i.e. ion bombardment induced sputtering and defect creation. The mean and variance of the sputter yield and the vacancies and interstitials are calculated as functions of the ion energy and the ion-target mass ratio. It is found that the relative variance of the defects in half-spaces and the relative variance of the sputter yield are not monotonic functions of the mass ratio. Two-point correlation functions in the depth variable, as well as sputtered energy, are also calculated. These functions help interpreting the behaviour of the relative variances of the integrated quantities, as well as understanding the cascade dynamics. All calculations are based on Lindhard power-law cross sections and use a binary collision Monte Carlo algorithm. 30 refs, 25 figs.

  12. On discrete stochastic processes with long-lasting time dependence in the variance

    Science.gov (United States)

    Queirós, S. M. D.

    2008-11-01

    In this manuscript, we analytically and numerically study the statistical properties of a heteroskedastic process based on the celebrated ARCH generator of random variables, whose variance is defined by a memory of q_m-exponential form (the q_m-exponential reduces to the ordinary exponential, e^x, when q_m = 1). Specifically, we inspect the self-correlation function of squared random variables as well as the kurtosis. In addition, by numerical procedures, we infer the stationary probability density function of both the heteroskedastic random variables and the variance, the multiscaling properties, the first-passage times distribution, and the dependence degree. Finally, we introduce an asymmetric variance version of the model that enables us to reproduce the so-called leverage effect in financial markets.
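
    A minimal sketch of a standard ARCH(1) generator with ordinary exponential memory (not the paper's q_m-exponential weighting), showing the heavy tails through the excess kurtosis of the returns; all parameters are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(6)
    n, a0, a1 = 100_000, 0.2, 0.5
    x = np.zeros(n)
    sigma2 = np.full(n, a0 / (1 - a1))        # start at the stationary variance
    for t in range(1, n):
        sigma2[t] = a0 + a1 * x[t - 1]**2     # variance remembers the last shock
        x[t] = np.sqrt(sigma2[t]) * rng.normal()

    print("excess kurtosis:", kurtosis(x))    # > 0, unlike a Gaussian
    ```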

  13. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; second, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  14. On the Computation of the RMSEA and CFI from the Mean-And-Variance Corrected Test Statistic with Nonnormal Data in SEM.

    Science.gov (United States)

    Savalei, Victoria

    2018-01-01

    A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality, while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed. Follow-up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and c) to caution that the logic of the new nonnormality corrections to RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator, and cannot easily be generalized to the most commonly used categorical data estimators.
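
    For orientation, a minimal sketch of the basic (non-robust) RMSEA point estimate computed from a chi-square statistic; the note's contribution concerns which corrected statistic and scaling to feed into a formula of this shape. The inputs below are hypothetical.

    ```python
    import numpy as np

    def rmsea(chi2, df, n):
        """Point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
        return np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    print(rmsea(chi2=85.3, df=40, n=300))     # hypothetical fit statistics
    ```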

  15. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    Science.gov (United States)

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less-desirable and less-flexible analytical techniques, such as linear interpolation.

  16. Analysis of force variance for a continuous miner drum using the Design of Experiments method

    Energy Technology Data Exchange (ETDEWEB)

    S. Somanchi; V.J. Kecojevic; C.J. Bise [Pennsylvania State University, University Park, PA (United States)

    2006-06-15

    Continuous miners (CMs) are excavating machines designed to extract a variety of minerals by underground mining. The variance in force experienced by the cutting drum is a very important aspect that must be considered during drum design. A uniform variance essentially means that an equal load is applied on the individual cutting bits and this, in turn, enables better cutting action, greater efficiency, and longer bit and machine life. There are certain input parameters used in the drum design whose exact relationships with force variance are not clearly understood. This paper determines (1) the factors that have a significant effect on the force variance of the drum and (2) the values that can be assigned to these factors to minimize the force variance. A computer program, Continuous Miner Drum (CMD), was developed in collaboration with Kennametal, Inc. to facilitate the mechanical design of CM drums. CMD also facilitated data collection for determining significant factors affecting force variance. Six input parameters, including centre pitch, outer pitch, balance angle, shift angle, set angle and relative angle were tested at two levels. Trials were configured using the Design of Experiments (DoE) method, where a 2^6 full-factorial experimental design was selected to investigate the effect of these factors on force variance. Results from the analysis show that all parameters except balance angle, as well as their interactions, significantly affect the force variance.
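
    A minimal sketch of a 2^6 full-factorial design and main-effect estimates; the factor names follow the record, but the response and effect sizes are made up rather than taken from the CMD program.

    ```python
    import numpy as np
    from itertools import product

    factors = ["centre_pitch", "outer_pitch", "balance_angle",
               "shift_angle", "set_angle", "relative_angle"]
    design = np.array(list(product([-1, 1], repeat=6)))    # 64 coded runs

    rng = np.random.default_rng(7)
    beta = np.array([0.0, 3.0, 0.1, 2.0, 1.5, 0.8])        # hypothetical true effects
    y = design @ beta + rng.normal(0, 0.5, len(design))    # simulated response

    # main effect of factor j = mean(y at +1) - mean(y at -1)
    for j, name in enumerate(factors):
        effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
        print(f"{name:15s} {effect:+.2f}")
    ```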

  17. 78 FR 14122 - Revocation of Permanent Variances

    Science.gov (United States)

    2013-03-04

    ... Douglas Fir planking had to have at least a 1,900 fiber stress and 1,900,000 modulus of elasticity, while the Yellow Pine planking had to have at least 2,500 fiber stress and 2,000,000 modulus of elasticity... the permanent variances, and affected employees, to submit written data, views, and arguments...

  18. Optimal control of LQG problem with an explicit trade-off between mean and variance

    Science.gov (United States)

    Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang

    2011-12-01

    For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function on the expectation and the variance of the conventional performance index is considered. The utility function is viewed as an overall objective of the system and can perform the optimal trade-off between the mean and the variance of the performance index. The nonlinear utility function is first converted into an auxiliary parameter optimisation problem over the expectation and the variance. Then an optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is designed by nonlinear mathematical programming. Finally, simulation results are given to verify the effectiveness of the algorithm developed in this article.

  19. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  20. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.

    2009-01-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing

  1. AN ADAPTIVE OPTIMAL KALMAN FILTER FOR STOCHASTIC VIBRATION CONTROL SYSTEM WITH UNKNOWN NOISE VARIANCES

    Institute of Scientific and Technical Information of China (English)

    Li Shu; Zhuo Jiashou; Ren Qingwen

    2000-01-01

    In this paper, an optimal criterion is presented for an adaptive Kalman filter in a control system with unknown variances of stochastic vibration, by constructing a function of the noise variances and minimizing that function. We solve for the model and measurement variances by using the DFP optimization method to guarantee that the results of the Kalman filter are optimized. Finally, the control of vibration can be implemented by the LQG method.
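
    A minimal sketch of one common adaptive idea (innovation-based estimation of the measurement-noise variance), not the paper's DFP-based criterion; all signals are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n, q_true, r_true = 2000, 0.01, 0.25
    x = np.cumsum(rng.normal(0, np.sqrt(q_true), n))   # random-walk state
    z = x + rng.normal(0, np.sqrt(r_true), n)          # noisy measurements

    x_hat, p, q, r = 0.0, 1.0, q_true, 1.0             # R starts deliberately wrong
    innovations = []
    for k in range(n):
        p_pred = p + q                                 # time update
        nu = z[k] - x_hat                              # innovation
        innovations.append(nu)
        if k >= 50:                                    # adapt R from a window,
            c0 = np.mean(np.square(innovations[-50:])) # using E[nu^2] = P + R
            r = max(c0 - p_pred, 1e-6)
        gain = p_pred / (p_pred + r)                   # measurement update
        x_hat += gain * nu
        p = (1.0 - gain) * p_pred

    print("estimated R:", round(r, 3), "true R:", r_true)
    ```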

  2. Performance of penalized maximum likelihood in estimation of genetic covariances matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Full Text Available Abstract. Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
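
    A minimal sketch of regularization by shrinkage toward a structured target, loosely analogous in spirit to penalizing genetic covariances toward the phenotypic matrix; the data and the tuning grid are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    true_cov = np.array([[1.0, 0.6], [0.6, 1.0]])
    x = rng.multivariate_normal([0, 0], true_cov, size=20)   # small, noisy sample

    s = np.cov(x, rowvar=False)                  # unpenalized estimate
    target = np.diag(np.diag(s))                 # structured shrinkage target
    for lam in (0.0, 0.3, 0.6):                  # lam plays the role of the tuning factor
        shrunk = (1 - lam) * s + lam * target
        print(lam, np.round(shrunk, 3).tolist())
    ```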

  3. Decomposing variation in male reproductive success: age-specific variances and covariances through extra-pair and within-pair reproduction.

    Science.gov (United States)

    Lebigre, Christophe; Arcese, Peter; Reid, Jane M

    2013-07-01

    Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased

  4. Some asymptotic theory for variance function smoothing | Kibua ...

    African Journals Online (AJOL)

    Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22

  5. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative

  6. Novel Methods to Enhance Precision and Reliability in Muscle Synergy Identification during Walking

    Science.gov (United States)

    Kim, Yushin; Bulea, Thomas C.; Damiano, Diane L.

    2016-01-01

    Muscle synergies are hypothesized to reflect modular control of muscle groups via descending commands sent through multiple neural pathways. Recently, the number of synergies has been reported as a functionally relevant indicator of motor control complexity in individuals with neurological movement disorders. Yet the number of synergies extracted during a given activity, e.g., gait, varies within and across studies, even for unimpaired individuals. With no standardized methods for precise determination, this variability remains unexplained making comparisons across studies and cohorts difficult. Here, we utilize k-means clustering and intra-class and between-level correlation coefficients to precisely discriminate reliable from unreliable synergies. Electromyography (EMG) was recorded bilaterally from eight leg muscles during treadmill walking at self-selected speed. Muscle synergies were extracted from 20 consecutive gait cycles using non-negative matrix factorization. We demonstrate that the number of synergies is highly dependent on the threshold when using the variance accounted for by reconstructed EMG. Beyond use of threshold, our method utilized a quantitative metric to reliably identify four or five synergies underpinning walking in unimpaired adults and revealed synergies having poor reproducibility that should not be considered as true synergies. We show that robust and unreliable synergies emerge similarly, emphasizing the need for careful analysis in those with pathology. PMID:27695403
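
    A minimal sketch of the standard synergy pipeline the record builds on: non-negative matrix factorization of an EMG envelope matrix and the variance-accounted-for (VAF) curve used to pick the synergy count. It assumes scikit-learn is available and uses simulated data.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(10)
    W_true = np.abs(rng.normal(size=(8, 3)))          # 8 muscles, 3 true synergies
    H_true = np.abs(rng.normal(size=(3, 500)))        # activations over time
    emg = W_true @ H_true + 0.05 * rng.random((8, 500))

    for k in range(1, 7):
        model = NMF(n_components=k, init="nndsvda", max_iter=1000, random_state=0)
        W = model.fit_transform(emg)
        recon = W @ model.components_
        vaf = 1 - np.sum((emg - recon) ** 2) / np.sum(emg ** 2)
        print(k, round(vaf, 4))      # VAF plateaus near the true synergy count
    ```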

  7. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected...

  8. Variance analysis refines overhead cost control.

    Science.gov (United States)

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.

  9. Geometric representation of the mean-variance-skewness portfolio frontier based upon the shortage function

    OpenAIRE

    Kerstens, Kristiaan; Mounier, Amine; Van de Woestyne, Ignace

    2008-01-01

    The literature suggests that investors prefer portfolios based on mean, variance and skewness rather than portfolios based on mean-variance (MV) criteria solely. Furthermore, a small variety of methods have been proposed to determine mean-variance-skewness (MVS) optimal portfolios. Recently, the shortage function has been introduced as a measure of efficiency, allowing one to characterize MVS optimal portfolios using non-parametric mathematical programming tools. While tracing the MV portfolio fro...

  10. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure.

    Science.gov (United States)

    Bright, Molly G; Murphy, Kevin

    2015-07-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. Copyright © 2015. Published by Elsevier Inc.
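
    A minimal sketch of nuisance regression as described in this record: project motion regressors out of each time series with ordinary least squares; everything below is simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    t, n_vox = 300, 1000
    motion = rng.normal(size=(t, 6))                 # 6 head-motion parameters
    signal = rng.normal(size=(t, n_vox))
    data = signal + motion @ rng.normal(size=(6, n_vox))   # contaminated "BOLD"

    X = np.column_stack([np.ones(t), motion])        # design: intercept + motion
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    cleaned = data - X @ beta                        # residuals = "denoised" data
    print(cleaned.shape, np.corrcoef(cleaned[:, 0], motion[:, 0])[0, 1])  # ~ 0
    ```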

  11. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Also attention is paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)

  12. Improved estimation of the variance in Monte Carlo criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)

    2008-07-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Also attention is paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)
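
    A minimal sketch of a plug-in estimate of the variance of the sample variance (VoV) from cycle-wise k_eff values; the paper derives an unbiased version, whereas the moment formula below is only the standard approximation, applied to simulated cycle data.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    keff = rng.normal(1.0002, 0.0005, size=30)       # simulated cycle estimates

    n = keff.size
    s2 = keff.var(ddof=1)                            # sample variance of k_eff
    m4 = np.mean((keff - keff.mean()) ** 4)          # fourth central moment
    vov = (m4 - (n - 3) / (n - 1) * s2**2) / n       # plug-in Var(s^2)
    print(s2, vov, np.sqrt(vov))
    ```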

  13. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/Deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provide reasonable results for shielding applications. (author)
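
    A minimal sketch of one technique the General Transform subsumes, simple importance sampling (source biasing): draw from a biased density and reweight by the density ratio so the estimator stays unbiased. The rare-event example is arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(15)
    n = 100_000
    x_plain = rng.normal(0.0, 1.0, n)                 # analog sampling
    plain = (x_plain > 4).mean()                      # rare event: true prob ~ 3.2e-5

    x_bias = rng.normal(4.0, 1.0, n)                  # biased "source" at the tail
    w = np.exp(8.0 - 4.0 * x_bias)                    # density ratio f(x)/g(x)
    imp = np.mean((x_bias > 4) * w)                   # reweighted, unbiased

    print(plain, imp)      # importance sampling resolves the tail probability
    ```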

  14. Design and implementation of high-precision and low-jitter programmable delay circuitry

    International Nuclear Information System (INIS)

    Gao Yuan; Cui Ke; Zhang Hongfei; Luo Chunli; Yang Dongxu; Liang Hao; Wang Jian

    2011-01-01

    A programmable delay circuit design which has characteristics of high-precision, low-jitter, wide-programmable-range and low power is introduced. The delay circuitry uses the scheme which has two parts: the coarse delay and the fine delay that could be controlled separately. Using different coarse delay chip can reach different maximum programmable range. And the fine delay programmable chip has the minimum step which is down to 10 ps. The whole circuitry jitter will be less than 100 ps. The design has been successfully applied in Quantum Key Distribution experiment. (authors)

  15. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  16. Precision of dual energy X-ray absorptiometry for body composition measurements in cats; Precisao da tecnica de absorciometria de raios-X de dupla energia na determinacao da composicao corporal em gatos

    Energy Technology Data Exchange (ETDEWEB)

    Borges, N.C. [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Escola de Veterinaria]. E-mail: naida@vet.ufg.br; Vasconcellos, R.S.; Canola, J.C.; Carciofi, A.C.; Pereira, G.T. [UNESP, Jaboticabal, SP (Brazil). Faculdade de Ciencias Agrarias e Veterinarias; Paula, F.J.A. [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Faculdade de Medicina

    2008-07-01

    Short-term precision errors of the individual subject and of the DEXA technique, including the effect of repositioning the cat on the examination table, were established. Four neutered adult cats (BW=4342 g) and three females (BW=3459 g) were submitted to five repeated scans with and without repositioning between them. Precision was estimated from the mean of the five measurements and expressed by the individual coefficient of variation (CV). The precision error of the technique was estimated by the variance of the scan pool (n=35) and expressed as the CV for the technique (CVt). Degrees of freedom and confidence intervals were determined to avoid underestimation of precision errors. Bone mineral content (BMC), lean mass (LM), and fat mass (FM) averages were higher (P<0.05) when animals were repositioned. The CVt was significantly higher (P<0.05) for bone mineral density (BMD), LM, and FM when the animals were repositioned. For short-term precision measurements, the repositioning of the animal was important to establish the precision of the technique. The dual energy X-ray absorptiometry method provided precision for body composition measurements in adult cats. (author)

  17. Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel

    Science.gov (United States)

    Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying

    2017-01-01

    In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…
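
    The same demonstration can be run in a few lines of Python instead of Excel: averaging many sample variances shows that dividing by n-1 is unbiased while dividing by n underestimates the population variance.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    pop_var, n, reps = 4.0, 5, 200_000
    samples = rng.normal(0.0, np.sqrt(pop_var), size=(reps, n))

    dev2 = (samples - samples.mean(axis=1, keepdims=True)) ** 2
    print("divide by n-1:", dev2.sum(axis=1).mean() / (n - 1))   # ~ 4.0 (unbiased)
    print("divide by n:  ", dev2.sum(axis=1).mean() / n)         # ~ 3.2 (too small)
    ```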

  18. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    Hyung, Jin Shim; Beom, Seok Han; Chang, Hyo Kim

    2003-01-01

    The Monte Carlo (MC) power method based on a fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and the fission reaction rate estimates. Because of the biases, the apparent variances of keff and the fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of keff and the fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods instead of the apparent variances provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of individual pins of the system. (authors)

  19. Response variance in functional maps: neural darwinism revisited.

    Directory of Open Access Journals (Sweden)

    Hirokazu Takahashi

    Full Text Available The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  20. Response variance in functional maps: neural darwinism revisited.

    Science.gov (United States)

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  1. [Precision and personalized medicine].

    Science.gov (United States)

    Sipka, Sándor

    2016-10-01

    The author describes the concept of "personalized medicine" and the newly introduced "precision medicine". "Precision medicine" applies the terms "phenotype", "endotype" and "biomarker" in order to characterize the various diseases more precisely. Using "biomarkers", a homogeneous type of disease (a "phenotype") can be divided into subgroups called "endotypes" requiring different forms of treatment and financing. The good results of "precision medicine" have become especially apparent in relation to allergic and autoimmune diseases. The application of this new way of thinking will also become necessary in Hungary in the near future for the participants, controllers and financing bodies of healthcare. Orv. Hetil., 2016, 157(44), 1739-1741.

  2. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    International Nuclear Information System (INIS)

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2012-01-01

    This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can use multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory. Highlights: indoor VOCs were primarily controlled by seasonal and residence effects; outdoor VOC levels were homogeneous within neighborhoods; measurement uncertainty was high for many outdoor VOCs; variance component analysis is useful for designing effective sampling programs.
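
    The nested apportionment used above builds on the classical ANOVA decomposition of variance; a minimal method-of-moments sketch of the one-way building block (illustrative only, not the authors' four-way nested model):

    ```python
    import numpy as np

    def one_way_variance_components(groups):
        """Method-of-moments (ANOVA) split of total variance into
        between-group and within-group components; `groups` is a list of
        1-D arrays of replicate measurements, one array per group."""
        k = len(groups)
        n = np.array([g.size for g in groups])
        grand_mean = np.concatenate(groups).mean()
        # Mean squares within and between groups.
        msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n.sum() - k)
        msb = sum(ni * (g.mean() - grand_mean) ** 2
                  for ni, g in zip(n, groups)) / (k - 1)
        # Effective group size for unbalanced designs.
        n0 = (n.sum() - (n ** 2).sum() / n.sum()) / (k - 1)
        return max((msb - msw) / n0, 0.0), msw  # between, within components
    ```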

  3. Comparing maximum intercuspal contacts of virtual dental patients and mounted dental casts.

    Science.gov (United States)

    Delong, Ralph; Ko, Ching-Chang; Anderson, Gary C; Hodges, James S; Douglas, W H

    2002-12-01

    Quantitative measures of occlusal contacts are of paramount importance in the study of chewing dysfunction. A tool is needed to identify and quantify occlusal parameters without occlusal interference caused by the technique of analysis. This laboratory simulation study compared occlusal contacts constructed from 3-dimensional images of dental casts and interocclusal records with contacts found by use of conventional methods. Dental casts of 10 completely dentate adults were mounted in a semi-adjustable Denar articulator. Maximum intercuspal contacts were marked on the casts using red film. Intercuspal records made with an experimental vinyl polysiloxane impression material recorded maximum intercuspation. Three-dimensional virtual models of the casts and interocclusal records were made using custom software and an optical scanner. Contacts were calculated between virtual casts aligned manually (CM), aligned with interocclusal records scanned seated on the mandibular casts (C1) or scanned independently (C2), and directly from virtual interocclusal records (IR). Sensitivity and specificity calculations used the marked contacts as the standard. Contact parameters were compared between method pairs using analysis of variance and the Tukey-Kramer post hoc test. Agreement between method pairs ranked CM/C1 = CM/C2 > C2/IR > CM/IR > C1/IR, where ">" means "closer than." Within the limits of this study, occlusal contacts calculated from aligned virtual casts accurately reproduce articulator contacts.

  4. Within-category variance and lexical tone discrimination in native and non-native speakers

    NARCIS (Netherlands)

    Hoffmann, C.W.G.; Sadakata, M.; Chen, A.; Desain, P.W.M.; McQueen, J.M.; Gussenhove, C.; Chen, Y.; Dediu, D.

    2014-01-01

    In this paper, we show how acoustic variance within lexical tones in disyllabic Mandarin Chinese pseudowords affects discrimination abilities in both native and non-native speakers of Mandarin Chinese. Within-category acoustic variance did not hinder native speakers in discriminating between lexical tones.

  5. Development of 3-axis precise positioning seismic physical modeling system in the simulation of marine seismic exploration

    Science.gov (United States)

    Kim, D.; Shin, S.; Ha, J.; Lee, D.; Lim, Y.; Chung, W.

    2017-12-01

    Seismic physical modeling is a laboratory-scale experiment that deals with the actual physical phenomena that may occur in the field. In seismic physical modeling, field conditions are downscaled and used. For this reason, even a small positioning error may lead to a large error at field scale. Accordingly, the positions of the source and the receiver must be precisely controlled in scale modeling. In this study, we have developed a seismic physical modeling system capable of precisely controlling the 3-axis position. For automatic and precise position control of an ultrasonic transducer (source and receiver) in the directions of the three axes (x, y, and z), a motor was mounted on each of the three axes. The motor can automatically and precisely control the positions with positional precision of 2″ for the x and y axes and 0.05 mm for the z axis. As it can automatically and precisely control the positions in the directions of the three axes, it has an advantage in that simulations can be carried out using the latest exploration techniques, such as OBS and Broadband Seismic. For the signal generation section, a waveform generator that can produce a maximum of two sources was used, and for the data acquisition section, which receives and stores reflected signals, an A/D converter that can receive a maximum of four signals was used. As multiple sources and receivers could be used at the same time, the system was set up in such a way that diverse exploration methods, such as single channel, multichannel, and 3-D exploration, could be realized. A computer control program based on LabVIEW was created, so that it could control the position of the transducer, determine the data acquisition parameters, and check the exploration data and progress in real time. A marine environment was simulated using a water tank 1 m wide, 1 m long, and 0.9 m high. To evaluate the performance and applicability of the seismic physical modeling system developed in this study, single channel and

  6. The newest precision measurement

    International Nuclear Information System (INIS)

    Lee, Jing Gu; Lee, Jong Dae

    1974-05-01

    This book introduces the basics of precision measurement: measurement of length, limit gauges, measurement of angles, measurement of surface roughness, measurement of shapes and locations, measurement of outlines, measurement of external and internal threads, gear testing, accuracy inspection of machine tools, three-dimensional coordinate measuring machines, digitalisation of precision measurement, automation of precision measurement, measurement of cutting tools, measurement using lasers, and points to consider in choosing a length-measuring instrument.

  7. Global Distributions of Temperature Variances At Different Stratospheric Altitudes From Gps/met Data

    Science.gov (United States)

    Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.

    The GPS/MET measurements at altitudes 5 - 35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second order polynomial approximations in 5 - 7 km thick layers centered at 10, 20 and 30 km. Temperature deviations from the averaged values and their variances obtained for each profile are averaged for each month of the year during the GPS/MET experiment. Global distributions of temperature variances have an inhomogeneous structure. Locations and latitude distributions of the maxima and minima of the variances depend on altitude and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of the GPS/MET data analysis.
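
    A minimal sketch of the detrending step described above, assuming a single profile T(z) on a kilometre grid (the layer width and polynomial order follow the description; array names are assumptions):

    ```python
    import numpy as np

    def small_scale_variance(z, T, z_center, half_width=3.0):
        """Variance of temperature deviations about a second-order polynomial
        fit inside a layer centered at z_center (km), e.g. 10, 20 or 30 km."""
        in_layer = np.abs(z - z_center) <= half_width         # a ~6 km thick layer
        coeffs = np.polyfit(z[in_layer], T[in_layer], deg=2)  # smooth background
        deviations = T[in_layer] - np.polyval(coeffs, z[in_layer])
        return deviations.var(ddof=1)
    ```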

  8. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Full Text Available Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows one to derive variance estimates of the results of sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.

  9. VARIANCE COMPONENTS AND SELECTION FOR FEATHER PECKING BEHAVIOR IN LAYING HENS

    OpenAIRE

    Su, Guosheng; Kjaer, Jørgen B.; Sørensen, Poul

    2005-01-01

    Variance components and selection response for feather pecking behaviour were studied by analysing data from a divergent selection experiment. An investigation showed that a Box-Cox transformation with power λ = -0.2 made the data approximately normally distributed and best fitted by the given model. Variance components and selection response were estimated using Bayesian analysis with the Gibbs sampling technique. The total variation was rather large for the two traits in both the low feather pecking and the high feather pecking lines.
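
    For reference, the transformation named here is the standard Box-Cox power transform; a minimal example with the fixed power reported above (the synthetic data are an assumption, used only to illustrate the call):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    counts = rng.gamma(shape=2.0, scale=5.0, size=500) + 1.0  # skewed, strictly positive
    transformed = stats.boxcox(counts, lmbda=-0.2)            # fixed power, as in the record
    # For lmbda != 0, boxcox computes (x**lmbda - 1) / lmbda elementwise.
    ```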

  10. Asymptotics of variance of the lattice point count

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří

    2008-01-01

    Roč. 58, č. 3 (2008), s. 751-758 ISSN 0011-4642 R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : point lattice * variance Subject RIV: BA - General Mathematics Impact factor: 0.210, year: 2008

  11. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    Van Zuijlen M

    2009-01-01

    Full Text Available We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in mind, in auditing as well as in processing environmental data.

  12. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  13. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

    Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016

  14. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    Science.gov (United States)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be able to rapidly adapt to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors, using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  15. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Full Text Available Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio's performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
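
    The minimum variance strategy evaluated here corresponds to the classical Markowitz problem of minimizing wᵀΣw subject to the weights summing to one, with closed-form solution w = Σ⁻¹1 / (1ᵀΣ⁻¹1); a minimal sketch (illustrative, not the authors' backtesting pipeline):

    ```python
    import numpy as np

    def min_variance_weights(returns):
        """Global minimum variance portfolio from a (T, n_assets) matrix
        of returns: w = S^-1 1 / (1' S^-1 1), shorting allowed."""
        cov = np.cov(returns, rowvar=False)   # sample covariance matrix
        ones = np.ones(cov.shape[0])
        raw = np.linalg.solve(cov, ones)      # S^-1 1 without explicit inversion
        return raw / raw.sum()                # normalize weights to sum to one
    ```

    In practice the sample covariance is usually shrunk or otherwise regularized before inversion, especially for short estimation windows such as the 12-month case.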

  16. Motor equivalence and structure of variance: multi-muscle postural synergies in Parkinson's disease.

    Science.gov (United States)

    Falaki, Ali; Huang, Xuemei; Lewis, Mechelle M; Latash, Mark L

    2017-07-01

    We explored posture-stabilizing multi-muscle synergies with two methods of analysis of multi-element, abundant systems: (1) Analysis of inter-cycle variance; and (2) Analysis of motor equivalence, both quantified within the framework of the uncontrolled manifold (UCM) hypothesis. Data collected in two earlier studies of patients with Parkinson's disease (PD) were re-analyzed. One study compared synergies in the space of muscle modes (muscle groups with parallel scaling of activation) during tasks performed by early-stage PD patients and controls. The other study explored the effects of dopaminergic medication on multi-muscle-mode synergies. Inter-cycle variance and absolute magnitude of the center of pressure displacement across consecutive cycles were quantified during voluntary whole-body sway within the UCM and orthogonal to the UCM space. The patients showed smaller indices of variance within the UCM and motor equivalence compared to controls. The indices were also smaller in the off-drug compared to on-drug condition. There were strong across-subject correlations between the inter-cycle variance within/orthogonal to the UCM and motor equivalent/non-motor equivalent displacements. This study has shown that, at least for cyclical tasks, analysis of variance and analysis of motor equivalence lead to metrics of stability that correlate with each other and show similar effects of disease and medication. These results show, for the first time, intimate links between indices of variance and motor equivalence. They suggest that analysis of motor equivalence, which requires only a handful of trials, could be used broadly in the field of motor disorders to analyze problems with action stability.
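
    The UCM-based partition of variance used in such analyses projects inter-cycle deviations of the elemental variables onto the null space of the Jacobian that links them to the performance variable; a minimal sketch (array shapes and names are assumptions, not the authors' pipeline):

    ```python
    import numpy as np
    from scipy.linalg import null_space

    def ucm_variance_components(deviations, J):
        """Partition inter-cycle variance of elemental variables (rows of
        `deviations`, one row per cycle) into components lying within the
        UCM (the null space of the Jacobian J) and orthogonal to it,
        normalized per degree of freedom."""
        E = null_space(J)                     # orthonormal basis of the UCM
        n_cycles, n_elements = deviations.shape
        d_ucm = E.shape[1]
        within = deviations @ E               # projections onto the UCM
        ss_within = (within ** 2).sum()
        ss_total = (deviations ** 2).sum()
        v_ucm = ss_within / (n_cycles * d_ucm)
        v_ort = (ss_total - ss_within) / (n_cycles * (n_elements - d_ucm))
        return v_ucm, v_ort
    ```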

  17. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    International Nuclear Information System (INIS)

    Song Ningfang; Yuan Rui; Jin Jing

    2011-01-01

    Satellite motion included in the gyro output disturbs the estimation of Allan variance coefficients of a fiber optic gyro on board. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computational effort and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites increasingly require more autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet satellite autonomy, we present a new autonomous method for estimation of Allan variance coefficients including rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter technique for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show the method correctly estimates Allan variance coefficients, R = 2.7965×10^-4 °/h^2, K = 1.1714×10^-3 °/h^1.5, B = 1.3185×10^-3 °/h, N = 5.982×10^-4 °/h^0.5 and Q = 5.197×10^-7 °, in real time, and tracks the degradation of gyro performance from initial values, R = 0.651 °/h^2, K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10^-5 °, to final estimates, R = 9.548 °/h^2, K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10^-4 °, due to gamma radiation in space. The technique proposed here effectively isolates satellite motion, requires no data storage and needs no support from the ground.
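
    For reference, the non-overlapping Allan variance underlying the offline baseline is σ²(τ) = ½⟨(ȳ_{k+1} − ȳ_k)²⟩, taken over cluster averages ȳ_k of duration τ = m·τ₀; a minimal sketch (not the paper's online filter):

    ```python
    import numpy as np

    def allan_variance(rate, tau0, m):
        """Non-overlapping Allan variance of a gyro rate series sampled
        every tau0 seconds, at averaging time tau = m * tau0."""
        n_clusters = rate.size // m
        cluster_means = rate[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(cluster_means) ** 2)
        return m * tau0, avar
    ```

    The coefficients named in the record (Q, N, B, K, R) are then read off from the slopes of log σ(τ) versus log τ (-1, -1/2, 0, +1/2 and +1, respectively).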

  18. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  19. Prediction of breeding values and selection responses with genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2007-01-01

    There is empirical evidence that genotypes differ not only in mean, but also in environmental variance of the traits they affect. Genetic heterogeneity of environmental variance may indicate genetic differences in environmental sensitivity. The aim of this study was to develop a general framework

  20. Sex versus asex: An analysis of the role of variance conversion.

    Science.gov (United States)

    Lewis-Pye, Andrew E M; Montalbán, Antonio

    2017-04-01

    The question as to why most complex organisms reproduce sexually remains a very active research area in evolutionary biology. Theories dating back to Weismann have suggested that the key may lie in the creation of increased variability in offspring, causing enhanced response to selection. Under appropriate conditions, selection is known to result in the generation of negative linkage disequilibrium, with the effect of recombination then being to increase genetic variance by reducing these negative associations between alleles. It has therefore been a matter of significant interest to understand precisely those conditions resulting in negative linkage disequilibrium, and to recognise also the conditions in which the corresponding increase in genetic variation will be advantageous. Here, we prove rigorous results for the multi-locus case, detailing the build up of negative linkage disequilibrium, and describing the long term effect on population fitness for models with and without bounds on fitness contributions from individual alleles. Under the assumption of large but finite bounds on fitness contributions from alleles, the non-linear nature of the effect of recombination on a population presents serious obstacles in finding the genetic composition of populations at equilibrium, and in establishing convergence to those equilibria. We describe techniques for analysing the long term behaviour of sexual and asexual populations for such models, and use these techniques to establish conditions resulting in higher fitnesses for sexually reproducing populations. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

    Directory of Open Access Journals (Sweden)

    Andrey eStepanyuk

    2014-10-01

    Full Text Available Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method based on a maximum likelihood approach that allows one to estimate not only the unitary current of synaptic receptor channels but also their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of evaluation of the unitary current as compared to peak-scaled non-stationary fluctuation analysis, leading to the possibility of precisely estimating this important parameter from a few postsynaptic currents recorded in steady-state conditions. Estimation of the unitary current with this method is robust even if postsynaptic currents are generated by receptors having different kinetic parameters, the case when peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents could be used to study the properties of synaptic receptors in their native biochemical environment.
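
    The non-stationary fluctuation analysis that the new method is benchmarked against fits the variance-mean relation σ²(t) = i·I(t) − I(t)²/N across repeated postsynaptic currents; a minimal sketch of that classical baseline (not the authors' maximum likelihood estimator):

    ```python
    import numpy as np

    def nsfa_estimates(currents):
        """Fluctuation analysis over repeated PSCs stored as a
        (n_trials, n_timepoints) array: least-squares fit of
        var = i*mean - mean**2/N, returning (unitary current i, channel count N)."""
        mean_i = currents.mean(axis=0)
        var_i = currents.var(axis=0, ddof=1)
        design = np.column_stack([mean_i, mean_i ** 2])
        a, b = np.linalg.lstsq(design, var_i, rcond=None)[0]
        return a, -1.0 / b
    ```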

  2. Variance of measurements from a calibration function derived from data which exhibit run-to-run differences

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1985-01-01

    The volume of liquid in a nuclear process tank is determined from a calibration equation which expresses volume as a function of liquid level. Successive calibration runs are made to obtain data from which to estimate either the calibration function or its inverse. For tanks equipped with high-precision measurement systems to determine liquid level, it frequently happens that run-to-run differences due to uncontrolled or uncontrollable ambient conditions are large relative to within-run measurement errors. In the strict sense, a calibration function cannot be developed from data which exhibit significant run-to-run differences. In practice, run-to-run differences are ignored when they are small relative to the accuracy required for measurements of the tank's contents. The use of standard statistical techniques in this situation can result in variance estimates which severely underestimate the actual uncertainty in volume measurements. This paper gives a method whereby reasonable estimates of the calibration uncertainty in volume determinations can be obtained in the presence of statistically significant run-to-run variability. 4 references, 3 figures, 1 table

  3. Vertical velocity variances and Reynold stresses at Brookhaven

    DEFF Research Database (Denmark)

    Busch, Niels E.; Brown, R.M.; Frizzola, J.A.

    1970-01-01

    Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind velocity...

  4. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  5. Exploring the Relationship of Task Performance and Physical and Cognitive Fatigue During a Daylong Light Precision Task.

    Science.gov (United States)

    Yung, Marcus; Manji, Rahim; Wells, Richard P

    2017-11-01

    Our aim was to explore the relationship between fatigue and operation system performance during a simulated light precision task over an 8-hr period using a battery of physical (central and peripheral) and cognitive measures. Fatigue may play an important role in the relationship between poor ergonomics and deficits in quality and productivity. However, well-controlled laboratory studies in this area have several limitations, including the lack of work relevance of fatigue exposures and lack of both physical and cognitive measures. There remains a need to understand the relationship between physical and cognitive fatigue and task performance at exposure levels relevant to realistic production or light precision work. Errors and fatigue measures were tracked over the course of a micropipetting task. Fatigue responses from 10 measures and errors in pipetting technique, precision, and targeting were submitted to principal component analysis to descriptively analyze features and patterns. Fatigue responses and error rates contributed to three principal components (PCs), accounting for 50.9% of total variance. Fatigue responses grouped within the three PCs reflected central and peripheral upper extremity fatigue, postural sway, and changes in oculomotor behavior. In an 8-hr light precision task, error rates shared similar patterns to both physical and cognitive fatigue responses, and/or increases in arousal level. The findings provide insight toward the relationship between fatigue and operation system performance (e.g., errors). This study contributes to a body of literature documenting task errors and fatigue, reflecting physical (both central and peripheral) and cognitive processes.

  6. Toward a more robust variance-based global sensitivity analysis of model outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
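
    A minimal pick-and-freeze Monte Carlo estimator of Sobol' first-order indices (the standard Saltelli-style form, shown for orientation; the report's adaptive accuracy procedure is not reproduced here):

    ```python
    import numpy as np

    def sobol_first_order(model, dim, n, seed=0):
        """First-order Sobol' indices by the pick-and-freeze estimator,
        assuming inputs uniform on [0, 1]^dim and a vectorized model."""
        rng = np.random.default_rng(seed)
        A, B = rng.random((n, dim)), rng.random((n, dim))
        yA, yB = model(A), model(B)
        var_y = np.concatenate([yA, yB]).var(ddof=1)
        S = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]              # freeze every input except x_i
            S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
        return S

    # Example: for f(x) = x1 + 2*x2, the exact indices are 0.2 and 0.8.
    # print(sobol_first_order(lambda X: X[:, 0] + 2 * X[:, 1], dim=2, n=100_000))
    ```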

  7. Novel images extraction model using improved delay vector variance feature extraction and multi-kernel neural network for EEG detection and prediction.

    Science.gov (United States)

    Ge, Jing; Zhang, Guoping

    2015-01-01

    Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance, in epileptic seizure detection and prediction. This is because the diversity and evolution of epileptic seizures make it very difficult to detect and identify the undergoing disease. Fortunately, the determinism and nonlinearity in a time series can characterize state changes. A literature review indicates that Delay Vector Variance (DVV) can examine the nonlinearity to gain insight into EEG signals, but very limited work has been done to address the quantitative DVV approach. Hence, the outcomes of quantitative DVV should be evaluated to detect epileptic seizures. The objective was to develop a new epileptic seizure detection method based on quantitative DVV. This new method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. Then a multi-kernel functions strategy was proposed in the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity proved more sensitive than energy and entropy: 87.5% overall accuracy of recognition and 75.0% overall accuracy of forecasting were achieved. The proposed IDVV and multi-kernel ELM based method was feasible and effective for epileptic EEG detection. Hence, the newly proposed method has importance for practical applications.

  8. Picosecond-precision multichannel autonomous time and frequency counter

    Science.gov (United States)

    Szplet, R.; Kwiatkowski, P.; Różyc, K.; Jachna, Z.; Sondej, T.

    2017-12-01

    This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) controlling and fast data processing, (3) low-noise power supply, and (4) supplying a stable reference clock (optionally a rubidium standard). Fundamental to the counter, the time interval measurement is based on time stamping combined with period counting and in-period two-stage time interpolation, which allows a wide measurement range (above 1 h), high precision (even better than 4.5 ps), and high measurement speed (up to 91.2 × 10^6 timestamps/s) to be achieved. The frequency is measured up to 3.0 GHz with the use of the reciprocal method. The counter's wide functionality also includes the evaluation of the frequency stability of clocks and oscillators (Allan deviation) and of phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high performance ARM-Cortex core. Efficient and user-friendly control of the counter is provided either locally, through the built-in keypad or/and color touch panel, or remotely, with the aid of USB, Ethernet, RS232C, or RS485 interfaces.

  9. Precise pointing knowledge for SCIAMACHY solar occultation measurements

    Directory of Open Access Journals (Sweden)

    K. Bramstedt

    2012-11-01

    Full Text Available We present a method to precisely determine the viewing direction for solar occultation instruments from scans over the solar disk. The basic idea is to fit the maximum intensity during the scan, which corresponds to the center of the solar disk in the scanning direction. We apply this method to the solar occultation measurements of the satellite instrument SCIAMACHY, which scans the Sun in the elevation direction. The achieved mean precision is 0.46 mdeg, which corresponds to a tangent height error of about 26 m for individual occultation sequences. The deviation of the derived elevation angle from the geolocation information given along with the product has a seasonal cycle with an amplitude of 2.26 mdeg, which in tangent height is an amplitude of about 127 m. The mean elevation angle offset is −4.41 mdeg (249 m). SCIAMACHY's sun follower device controls the azimuth viewing direction during the occultation measurements. The derived mean azimuth direction has a standard error of 0.65 mdeg, which is about 36 m in the horizontal direction at the tangent point. We also observe a seasonal cycle of the azimuth mispointing with an amplitude of 2.3 mdeg, which is slightly increasing with time. The almost constant mean offset is 88 mdeg, which is about 5.0 km horizontal offset at the tangent point.
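
    The "fit of the maximum intensity during the scan" can be pictured as a local quadratic fit around the brightest sample, whose vertex gives the scan angle of the solar disk centre; a minimal sketch (array names and window size are assumptions):

    ```python
    import numpy as np

    def scan_center(angle, intensity, half_window=5):
        """Scan angle of maximum solar intensity from a local quadratic fit
        around the brightest sample (center of the solar disk in the
        scanning direction)."""
        k = int(np.argmax(intensity))
        sel = slice(max(k - half_window, 0), k + half_window + 1)
        c2, c1, _ = np.polyfit(angle[sel], intensity[sel], deg=2)
        return -c1 / (2.0 * c2)   # vertex of the fitted parabola
    ```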

  10. Is digital photography an accurate and precise method for measuring range of motion of the shoulder and elbow?

    Science.gov (United States)

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2018-03-01

    Accurate measurements of shoulder and elbow motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, shoulder flexion/abduction/internal rotation/external rotation and elbow flexion/extension were measured using visual estimation, goniometry, and digital photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard (motion capture analysis), while precision was defined by the proportion of measurements within the authors' definition of clinical significance (10° for all motions except for elbow extension where 5° was used). Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although statistically significant differences were found in measurement accuracy between the three techniques, none of these differences met the authors' definition of clinical significance. Precision of the measurements was significantly higher for both digital photography (shoulder abduction [93% vs. 74%, p < 0.001], shoulder internal rotation [97% vs. 83%, p = 0.001], and elbow flexion [93% vs. 65%, p < 0.001]) and goniometry (shoulder abduction [92% vs. 74%, p < 0.001] and shoulder internal rotation [94% vs. 83%, p = 0.008]) than visual estimation. Digital photography was more precise than goniometry for measurements of elbow flexion only [93% vs. 76%, p < 0.001]. There was no clinically significant difference in measurement accuracy between the three techniques for shoulder and elbow motion. Digital photography showed higher measurement precision compared to visual estimation for shoulder abduction, shoulder

  11. Excluded-Mean-Variance Neural Decision Analyzer for Qualitative Group Decision Making

    Directory of Open Access Journals (Sweden)

    Ki-Young Song

    2012-01-01

    Full Text Available Many qualitative group decisions in professional fields such as law, engineering, economics, psychology, and medicine that appear to be crisp and certain are in reality shrouded in fuzziness as a result of uncertain environments and the nature of human cognition within which the group decisions are made. In this paper we introduce an innovative approach to group decision making in uncertain situations by using a mean-variance neural approach. The key idea of the proposed approach is to compute the excluded mean of the individual evaluations and to weight it by applying a variance influence function (VIF); this process of weighting the excluded mean by the VIF provides an improved result in group decision making. A case study with the proposed excluded-mean-variance approach is also presented. The results of this case study indicate that the proposed approach can improve the effectiveness of qualitative decision making by providing the decision maker with a new cognitive tool to assist in the reasoning process.

  12. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
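
    A minimal example of an IS estimator together with its estimated variance, here for a Gaussian tail probability via exponential tilting (illustrative of the quantities such bounds concern, not the authors' derivation):

    ```python
    import numpy as np

    def is_tail_probability(t, theta, n, seed=1):
        """Importance-sampling estimate of P(Z > t) for standard normal Z,
        sampling from N(theta, 1) and reweighting by the likelihood ratio;
        returns the estimate and its estimated variance."""
        rng = np.random.default_rng(seed)
        z = rng.normal(theta, 1.0, n)
        weights = np.exp(-theta * z + 0.5 * theta ** 2)   # phi(z) / phi_theta(z)
        h = (z > t) * weights
        return h.mean(), h.var(ddof=1) / n
    ```

    Choosing the tilt theta near the threshold t keeps the estimator variance far below that of direct Monte Carlo, which is the improvement ratio that such bounds quantify.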

  13. Empirical single sample quantification of bias and variance in Q-ball imaging.

    Science.gov (United States)

    Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A

    2018-02-06

    The bias and variance of high angular resolution diffusion imaging methods have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques offer a way to estimate the bias and variance of high angular resolution diffusion imaging metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that the SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.

  14. On the multiplicity of option prices under CEV with positive elasticity of variance

    NARCIS (Netherlands)

    Veestraeten, D.

    2017-01-01

    The discounted stock price under the Constant Elasticity of Variance model is not a martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the price for which put-call parity holds and the price that represents the lowest cost of

  15. On the multiplicity of option prices under CEV with positive elasticity of variance

    NARCIS (Netherlands)

    Veestraeten, D.

    2014-01-01

    The discounted stock price under the Constant Elasticity of Variance (CEV) model is a strict local martingale when the elasticity of variance is positive. Two expressions for the European call price then arise, namely the risk-neutral call price and an alternative price that is linked to the unique

  16. Precision Medicine in Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Yan Liu

    2017-02-01

    Full Text Available Since President Obama announced the Precision Medicine Initiative in the United States, more and more attention has been paid to precision medicine. However, clinicians have already used it to treat conditions such as cancer. Many cardiovascular diseases have a familial presentation, and genetic variants are associated with the prevention, diagnosis, and treatment of cardiovascular diseases, which are the basis for providing precise care to patients with cardiovascular diseases. Large-scale cohorts and multiomics are critical components of precision medicine. Here we summarize the application of precision medicine to cardiovascular diseases based on cohort and omic studies, and hope to elicit discussion about future health care.

  17. The VIX, the Variance Premium, and Expected Returns

    DEFF Research Database (Denmark)

    Osterrieder, Daniela Maria; Ventosa-Santaulària, Daniel; Vera-Valdés, Eduardo

    2018-01-01

    These problems are eliminated if risk is captured by the variance premium (VP) instead; it is unobservable, however. We propose a 2SLS estimator that produces consistent estimates without observing the VP. Using this method, we find a positive risk–return trade-off and long-run return predictability. Our...

  18. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of a variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  19. Compounding approach for univariate time series with nonstationary variances

    Science.gov (United States)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
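
    A minimal sketch of the compounding idea: locally Gaussian observations whose variance is itself drawn from a parameter distribution yield heavy-tailed aggregate statistics (the χ² variance distribution below is an assumption chosen for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000
    # Each observation is locally Gaussian, but its variance fluctuates;
    # compounding a Gaussian with chi-squared distributed variances yields
    # Student-t-like heavy tails in the long-horizon sample statistics.
    local_var = rng.chisquare(df=3, size=n) / 3.0        # mean-one variance process
    x = rng.normal(0.0, np.sqrt(local_var))
    excess_kurtosis = np.mean((x - x.mean()) ** 4) / x.var() ** 2 - 3.0
    print(f"excess kurtosis: {excess_kurtosis:.2f}")     # clearly positive, non-Gaussian
    ```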

  20. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.