WorldWideScience

Sample records for variance standard deviation

  1. A Visual Model for the Variance and Standard Deviation

    Science.gov (United States)

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
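
    A minimal numerical companion to this idea (an illustrative sketch with made-up data, not the paper's materials): each squared deviation is drawn as a square whose side is the deviation, the variance is the area of the average square, and the standard deviation is that square's side length.

      import numpy as np

      # eight made-up observations; their mean is 5
      x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
      deviations = x - x.mean()
      square_areas = deviations ** 2   # one square per observation
      variance = square_areas.mean()   # area of the "average square"
      std_dev = np.sqrt(variance)      # side length of that square
      print(variance, std_dev)         # 4.0 2.0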

  2. The Distance Standard Deviation

    OpenAIRE

    Edelmann, Dominic; Richards, Donald; Vogel, Daniel

    2017-01-01

    The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...
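
    The stated bounds are easy to check empirically; the sketch below (an illustration using the usual double-centering construction of the sample distance variance, not the authors' code) compares the distance standard deviation with the classical standard deviation and Gini's mean difference on simulated data.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(size=500)
      n = len(x)

      d = np.abs(x[:, None] - x[None, :])                # pairwise distances
      A = d - d.mean(0) - d.mean(1)[:, None] + d.mean()  # double centering
      dist_sd = np.sqrt((A ** 2).mean())  # sample distance standard deviation
      classical_sd = x.std()
      gini_md = d.sum() / (n * (n - 1))   # Gini's mean difference

      # expected ordering: dist_sd <= classical_sd and dist_sd <= gini_md
      print(dist_sd, classical_sd, gini_md)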

  3. Visualizing the Sample Standard Deviation

    Science.gov (United States)

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
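
    The interpretation can be verified numerically; this sketch checks that the square root of twice the mean square of all pairwise half deviations reproduces the usual sample SD exactly.

      import numpy as np
      from itertools import combinations

      x = np.array([3.0, 7.0, 8.0, 12.0, 15.0])
      half_devs = [(a - b) / 2 for a, b in combinations(x, 2)]
      sd_pairwise = np.sqrt(2 * np.mean(np.square(half_devs)))
      sd_classic = x.std(ddof=1)       # square root of the sample variance
      print(sd_pairwise, sd_classic)   # both 4.6368... (identical)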

  4. Standard Deviation for Small Samples

    Science.gov (United States)

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…

  5. Linear Estimation of Standard Deviation of Logistic Distribution ...

    African Journals Online (AJOL)

    The paper presents a theoretical method based on order statistics and a FORTRAN program for computing the variance and relative efficiencies of the standard deviation of the logistic population with respect to the Cramer-Rao lower variance bound and the best linear unbiased estimators (BLUEs) when the mean is ...

  6. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    Science.gov (United States)

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…

  7. Semiparametric Bernstein–von Mises for the error standard deviation

    OpenAIRE

    Jonge, de, R.; Zanten, van, J.H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a hierarchical spline-based prior on the regression function and an independent prior on the error variance, can simultaneously achieve adaptive, rate-optimal estimation of a smooth, multivariate regr...

  8. The reinterpretation of standard deviation concept

    OpenAIRE

    Ye, Xiaoming

    2017-01-01

    Existing mathematical theory interprets the concept of standard deviation as a degree of dispersion. Therefore, in measurement theory, both the uncertainty concept and the precision concept, which are expressed as a standard deviation or a multiple of the standard deviation, are also defined as the dispersion of a measurement result, so that the concept logic is tangled. Through comparative analysis of the standard deviation concept and a re-interpretation of the measurement error evaluation principle, this paper points o...

  9. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
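
    A sketch of the general recipe described here (a MOVER-style combination of the two separate closed-form limits; the function below is an illustration under standard normal-theory intervals, not the authors' code), shown for the upper Bland-Altman limit of agreement mu + 1.96*sigma:

      import numpy as np
      from scipy import stats

      def loa_upper_ci(x, z=1.96, alpha=0.05):
          n, xbar, s = len(x), np.mean(x), np.std(x, ddof=1)
          # separate closed-form CIs for the mean (t) and the SD (chi-square)
          l1, u1 = stats.t.interval(1 - alpha, n - 1,
                                    loc=xbar, scale=s / np.sqrt(n))
          l2 = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
          u2 = s * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))
          theta = xbar + z * s   # point estimate of the upper limit
          lower = theta - np.sqrt((xbar - l1) ** 2 + z ** 2 * (s - l2) ** 2)
          upper = theta + np.sqrt((u1 - xbar) ** 2 + z ** 2 * (u2 - s) ** 2)
          return lower, upper

      x = np.random.default_rng(1).normal(0.5, 2.0, size=40)
      print(loa_upper_ci(x))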

  10. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    In this paper, we present how much the variances of the classical estimators, namely, the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating parameters of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this regard, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.

  11. The Standard Deviation of Launch Vehicle Environments

    Science.gov (United States)

    Yunis, Isam

    2005-01-01

    Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3 dB is a conservative and reasonable standard deviation for the source environment and the payload environment.

  12. VAR Portfolio Optimal: Perbandingan Antara Metode Markowitz dan Mean Absolute Deviation

    Directory of Open Access Journals (Sweden)

    R. Agus Sartono

    2009-05-01

    The portfolio selection method introduced by Harry Markowitz (1952) used variance or standard deviation as the measure of risk. Konno and Yamazaki (1991) introduced another method that uses mean absolute deviation as the measure of risk instead of variance. The Value-at-Risk (VaR) is a relatively new method of quantifying risk that has been used by financial institutions. The aim of this research is to compare the mean-variance and mean-absolute-deviation approaches for two portfolios. Next, we attempt to assess the VaR of the two portfolios using the delta normal method and historical simulation. We use secondary data from the Jakarta Stock Exchange – LQ45 during 2003. We find that there is a weak positive correlation between standard deviation and return in both portfolios. The delta normal VaR based on the mean absolute deviation method is eventually higher than the delta normal VaR based on the mean-variance method. However, based on historical simulation, the difference between the VaRs of the two methods is statistically insignificant. Thus, the standard deviation is a sufficient measure of portfolio risk. Keywords: portfolio optimization, mean-variance, mean absolute deviation, value-at-risk, delta normal method, historical simulation method
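
    For reference, a compact delta normal VaR computation of the kind compared above (an illustrative sketch with simulated returns and an invented position size, not the LQ45 data):

      import numpy as np

      returns = np.random.default_rng(7).normal(0.0005, 0.01, size=(250, 4))
      w = np.array([0.4, 0.3, 0.2, 0.1])       # portfolio weights
      cov = np.cov(returns, rowvar=False)
      sigma_p = np.sqrt(w @ cov @ w)           # portfolio standard deviation
      value, z = 1_000_000.0, 1.645            # position value, 95% one-sided
      print(f"1-day 95% delta normal VaR: {z * sigma_p * value:,.0f}")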

  13. Comparing Standard Deviation Effects across Contexts

    Science.gov (United States)

    Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.

    2017-01-01

    Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…

  14. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  15. Differential standard deviation of log-scale intensity based optical coherence tomography angiography.

    Science.gov (United States)

    Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D

    2017-12-01

    In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance in difference images of two consecutive log-scale intensity based structural images from the same position along the depth direction to contrast blood flow. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in an improvement in spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was verified by both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove bulk motion noise, where a 2D Fourier transform was utilized to generate new images with spatial interval equal to half of the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals of flowing particles are improved by 7.3 dB and 6.8 dB on average in the phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels is increased by 21%.
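
    The core contrast step, as read from the abstract (a sketch on a synthetic array, not the authors' implementation): difference two consecutive log-scale B-scans from the same position, then take the standard deviation over a depth range to get one en face angiography value per lateral position.

      import numpy as np

      rng = np.random.default_rng(3)
      bscan1 = rng.normal(size=(300, 512))  # (depth, lateral) log intensity
      bscan2 = bscan1 + rng.normal(scale=0.5, size=bscan1.shape)  # repeat scan

      diff = bscan2 - bscan1                # differential log-scale intensity
      z0, z1 = 50, 250                      # depth range of interest
      en_face = diff[z0:z1, :].std(axis=0)  # one value per lateral position
      print(en_face.shape)                  # (512,)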

  16. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples.
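
    For contrast, the conventional cycle-based bookkeeping that this work improves upon looks as follows (a generic sketch, not the author's estimator); note that the variance-of-variance line assumes normality, which is precisely what is unreliable at small cycle counts.

      import numpy as np

      k_cycles = np.array([1.002, 0.998, 1.004, 1.001, 0.997, 1.003])
      k_mean = k_cycles.mean()
      k_sd = k_cycles.std(ddof=1)                # spread between cycles
      k_se = k_sd / np.sqrt(len(k_cycles))       # std. dev. of the mean
      vov = 2 * k_sd ** 4 / (len(k_cycles) - 1)  # VoV under normality
      print(k_mean, k_se, vov)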

  18. FINDING STANDARD DEVIATION OF A FUZZY NUMBER

    OpenAIRE

    Fokrul Alom Mazarbhuiya

    2017-01-01

    Two probability laws can be the root of a possibility law. Considering two probability densities over two disjoint ranges, we can define the fuzzy standard deviation of a fuzzy variable with the help of the standard deviations of two random variables in two disjoint spaces.

  19. 7 CFR 400.204 - Notification of deviation from standards.

    Science.gov (United States)

    2010-01-01

    § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...

  20. A Note on Standard Deviation and Standard Error

    Science.gov (United States)

    Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth

    2010-01-01

    Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.

  1. Exploring Students' Conceptions of the Standard Deviation

    Science.gov (United States)

    delMas, Robert; Liu, Yan

    2005-01-01

    This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…

  2. VAR Portfolio Optimal: Perbandingan Antara Metode Markowitz Dan Mean Absolute Deviation

    OpenAIRE

    Sartono, R. Agus; Setiawan, Arie Andika

    2006-01-01

    The portfolio selection method introduced by Harry Markowitz (1952) used variance or standard deviation as the measure of risk. Konno and Yamazaki (1991) introduced another method that uses mean absolute deviation as the measure of risk instead of variance. The Value-at-Risk (VaR) is a relatively new method of quantifying risk that has been used by financial institutions. The aim of this research is to compare the mean-variance and mean-absolute-deviation approaches for two portfolios. Next, we attem...

  3. [Roaming through methodology. XXXVIII. Common misconceptions involving standard deviation and standard error]

    NARCIS (Netherlands)

    Mokkink, H.G.A.

    2002-01-01

    Standard deviation and standard error have a clear mutual relationship, but at the same time they differ strongly in the type of information they supply. This can lead to confusion and misunderstandings. Standard deviation describes the variability in a sample of measures of a variable, for instance

  4. Hearing protector performance and standard deviation.

    Science.gov (United States)

    Williams, W; Dillon, H

    2005-01-01

    The attenuation performance of a hearing protector is used to estimate the protected exposure level of the user. The aim is to reduce the exposed level to an acceptable value. Users should expect the attenuation to fall within a reasonable range of values around a norm. However, an analysis of extensive test data indicates that there is a negative relationship between attenuation performance and the standard deviation. This result is deduced using a variation in the method of calculating a single number rating of attenuation that is more amenable to drawing statistical inferences. As performance is typically specified as a function of the mean attenuation minus one or two standard deviations from the mean to ensure that greater than 50% of the wearer population are well protected, the implication of increasing standard deviation with decreasing attenuation found in this study means that a significant number of users are, in fact, experiencing over-protection. These users may be disinclined to use their hearing protectors because of an increased feeling of acoustic isolation. This problem is exacerbated in areas with lower noise levels.

  5. 7 CFR 400.174 - Notification of deviation from financial standards.

    Science.gov (United States)

    2010-01-01

    § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...

  6. SAMPLE STANDARD DEVIATION(s) CHART UNDER THE ASSUMPTION OF MODERATENESS AND ITS PERFORMANCE ANALYSIS

    OpenAIRE

    Kalpesh S. Tailor

    2017-01-01

    The moderate distribution, proposed by Naik V.D. and Desai J.M., is a sound alternative to the normal distribution, which has mean and mean deviation as pivotal parameters and which has properties similar to the normal distribution. Mean deviation (δ) is a very good alternative to standard deviation (σ), as mean deviation is considered to be the most intuitively and rationally defined measure of dispersion. This fact can be very useful in the field of quality control to construct the control limits of the c...

  7. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean.
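
    A balanced one-way random-effects sketch of that allocation (toy QC data, not from the study): the within-assay mean square comes from replicates inside each laboratory, and the between-laboratory component is recovered from the spread of laboratory means.

      import numpy as np

      labs = np.array([[10.1, 10.4,  9.9],   # rows: laboratories
                       [11.2, 11.0, 11.5],   # columns: repeated assays
                       [ 9.6,  9.8,  9.5],
                       [10.8, 10.6, 11.0]])
      n_labs, k = labs.shape
      msw = labs.var(axis=1, ddof=1).mean()    # within-assay mean square
      msb = k * labs.mean(axis=1).var(ddof=1)  # between-laboratory MS
      var_between = max((msb - msw) / k, 0.0)  # component estimate
      print(f"within = {msw:.4f}, between = {var_between:.4f}")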

  8. 29 CFR 1926.2 - Variances from safety and health standards.

    Science.gov (United States)

    2010-07-01

    § 1926.2 Variances from safety and health standards. (a) Variances from standards which are, or may be, published in this...

  9. A robust standard deviation control chart

    NARCIS (Netherlands)

    Schoonhoven, M.; Does, R.J.M.M.

    2012-01-01

    This article studies the robustness of Phase I estimators for the standard deviation control chart. A Phase I estimator should be efficient in the absence of contaminations and resistant to disturbances. Most of the robust estimators proposed in the literature are robust against either diffuse

  10. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    Science.gov (United States)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, σ0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free σ0 over a 0.5° × 0.5° latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25° × 0.25° grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step that cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.

  11. Some novel inequalities for fuzzy variables on the variance and its rational upper bound

    Directory of Open Access Journals (Sweden)

    Xiajie Yi

    2016-02-01

    Variance is of great significance in measuring the degree of deviation, and it has gained extensive usage in many practical fields. The definition of variance on the basis of the credibility measure was first put forward in 2002. Following this idea, the calculation of the accurate value of the variance for some special fuzzy variables, like the symmetric and asymmetric triangular fuzzy numbers and the Gaussian fuzzy numbers, is presented in this paper, which turns out to be far more complicated. Thus, in order to better implement variance in real-life projects like risk control and quality management, we suggest a rational upper bound of the variance based on an inequality, together with its calculation formula, which can largely simplify the calculation process within a reasonable range. Meanwhile, some discussions between the variance and its rational upper bound are presented to show the rationality of the latter. Furthermore, two inequalities regarding the rational upper bound of the variance and standard deviation of the sum of two fuzzy variables and their individual variances and standard deviations are proved. Subsequently, some numerical examples are illustrated to show the effectiveness and the feasibility of the proposed inequalities.

  12. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation'. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution, violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, 'Studentizing' or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean.

  13. A better norm-referenced grading using the standard deviation criterion.

    Science.gov (United States)

    Chan, Wing-shing

    2014-01-01

    The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via the standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the number of students included within a grade. Results from the foremost 12 students in a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs in allocating an appropriate grade to students more according to their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students, than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
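
    A minimal sketch of the rule (the z-score cutoffs below are illustrative choices, not the paper's):

      import numpy as np

      scores = np.array([88, 85, 84, 80, 78, 77, 75, 74, 70, 66, 62, 55])
      z = (scores - scores.mean()) / scores.std(ddof=1)
      cutoffs = [(1.0, "A"), (0.0, "B"), (-1.0, "C")]  # z >= cut -> grade
      grades = [next((g for c, g in cutoffs if zi >= c), "D") for zi in z]
      print(list(zip(scores.tolist(), grades)))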

  14. Does standard deviation matter? Using "standard deviation" to quantify security of multistage testing.

    Science.gov (United States)

    Wang, Chun; Zheng, Yi; Chang, Hua-Hua

    2014-01-01

    With the advent of web-based technology, online testing is becoming a mainstream mode in large-scale educational assessments. Most online tests are administered continuously in a testing window, which may pose test security problems because examinees who take the test earlier may share information with those who take the test later. Researchers have proposed various statistical indices to assess the test security, and one of the most often used indices is the average test-overlap rate, which was further generalized to the item pooling index (Chang & Zhang, 2002, 2003). These indices, however, are all defined as the means (that is, the expected proportion of common items among examinees) and they were originally proposed for computerized adaptive testing (CAT). Recently, multistage testing (MST) has become a popular alternative to CAT. The unique features of MST make it important to report not only the mean, but also the standard deviation (SD) of test overlap rate, as we advocate in this paper. The standard deviation of test overlap rate adds important information to the test security profile, because for the same mean, a large SD reflects that certain groups of examinees share more common items than other groups. In this study, we analytically derived the lower bounds of the SD under MST, with the results under CAT as a benchmark. It is shown that when the mean overlap rate is the same between MST and CAT, the SD of test overlap tends to be larger in MST. A simulation study was conducted to provide empirical evidence. We also compared the security of MST under the single-pool versus the multiple-pool designs; both analytical and simulation studies show that the non-overlapping multiple-pool design will slightly increase the security risk.

  15. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    Science.gov (United States)

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.

  16. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file.
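
    The flavor of such a fit can be sketched with ordinary log-log least squares (an illustrative simplification; the program itself uses a modified likelihood method), assuming the response SD is proportional to a power of the mean:

      import numpy as np

      rng = np.random.default_rng(5)
      means = np.linspace(10, 1000, 40)
      sets = [rng.normal(m, 0.05 * m, size=3) for m in means]  # SD = 5% of mean
      set_means = np.array([s.mean() for s in sets])
      set_sds = np.array([s.std(ddof=1) for s in sets])
      slope, _ = np.polyfit(np.log(set_means), np.log(set_sds), 1)
      print(f"variance-mean exponent: {2 * slope:.2f}")  # ~2 when SD ~ mean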

  17. 1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.

    Science.gov (United States)

    2010-01-01

    § 21.14 Deviations from standard organization of the Code of Federal Regulations. (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...

  18. Refined multiscale fuzzy entropy based on standard deviation for biomedical signal analysis.

    Science.gov (United States)

    Azami, Hamed; Fernández, Alberto; Escudero, Javier

    2017-11-01

    Multiscale entropy (MSE) has been a prevalent algorithm to quantify the complexity of biomedical time series. Recent developments in the field have tried to alleviate the problem of undefined MSE values for short signals. Moreover, there has been a recent interest in using other statistical moments than the mean, i.e., variance, in the coarse-graining step of the MSE. Building on these trends, here we introduce the so-called refined composite multiscale fuzzy entropy based on the standard deviation (RCMFEσ) and mean (RCMFEμ) to quantify the dynamical properties of spread and mean, respectively, over multiple time scales. We demonstrate the dependency of the RCMFEσ and RCMFEμ, in comparison with other multiscale approaches, on several straightforward signal processing concepts using a set of synthetic signals. The results evidenced that the RCMFEσ and RCMFEμ values are more stable and reliable than the classical multiscale entropy ones. We also inspect the ability of using the standard deviation as well as the mean in the coarse-graining process using magnetoencephalograms in Alzheimer's disease and publicly available electroencephalograms recorded from focal and non-focal areas in epilepsy. Our results indicated that when the RCMFEμ cannot distinguish different types of dynamics of a particular time series at some scale factors, the RCMFEσ may do so, and vice versa. The results showed that RCMFEσ-based features lead to higher classification accuracies in comparison with the RCMFEμ-based ones. We also made freely available all the Matlab codes used in this study at http://dx.doi.org/10.7488/ds/1477.
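
    The σ-based coarse-graining step, as understood from the abstract (a sketch only; the fuzzy-entropy computation that follows it is omitted):

      import numpy as np

      def coarse_grain_sd(x, tau):
          """Per-window standard deviation instead of the classical mean."""
          n = len(x) // tau
          return x[:n * tau].reshape(n, tau).std(axis=1)

      x = np.random.default_rng(11).normal(size=1000)
      print(coarse_grain_sd(x, 5)[:5])  # entropy is then computed on this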

  19. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
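
    The underlying computation is first-order error propagation over the balance terms; a hand-rolled sketch with invented numbers (MAVARIC itself is a Lotus 1-2-3 spreadsheet, and correlated transfer terms are ignored here):

      import numpy as np

      terms = [  # (concentration, bulk mass, sd_conc, sd_mass, sign)
          (0.042, 1200.0, 0.001, 5.0, +1),   # receipts
          (0.041, 1150.0, 0.001, 5.0, -1),   # shipments
          (0.040,  300.0, 0.002, 2.0, +1),   # beginning inventory
          (0.040,  310.0, 0.002, 2.0, -1),   # ending inventory
      ]
      mb = sum(sign * c * m for c, m, _, _, sign in terms)
      var_mb = sum((m * sc) ** 2 + (c * sm) ** 2
                   for c, m, sc, sm, _ in terms)  # independent terms add
      print(f"MB = {mb:.3f}, sigma_MB = {np.sqrt(var_mb):.3f}")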

  1. Design and analysis of control charts for standard deviation with estimated parameters

    NARCIS (Netherlands)

    Schoonhoven, M.; Riaz, M.; Does, R.J.M.M.

    2011-01-01

    This paper concerns the design and analysis of the standard deviation control chart with estimated limits. We consider an extensive range of statistics to estimate the in-control standard deviation (Phase I) and design the control chart for real-time process monitoring (Phase II) by determining the

  2. Standard deviation index for stimulated Brillouin scattering suppression with different homogeneities.

    Science.gov (United States)

    Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei

    2016-05-10

    We present the standard deviation as a new quantitative index to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold is negatively correlated with the standard deviation of the modulated spectrum, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment; thus, the highest SBS threshold is achieved there. This standard deviation can be a good quantitative index for evaluating the power-scaling potential of a fiber amplifier system, and a design guideline for better suppressing SBS.

  3. Wavelength selection method with standard deviation: application to pulse oximetry.

    Science.gov (United States)

    Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija

    2011-07-01

    Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue, within the therapeutic window. One significant shortcoming of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to the subject's health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of the standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains whose standard deviation is minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
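
    A toy version of the selection rule (synthetic spectra; the real method works on measured transillumination or reflection spectra): build the standard deviation map over repeated spectra and pick the wavelengths where temporal noise is smallest.

      import numpy as np

      rng = np.random.default_rng(2)
      wavelengths = np.linspace(600, 1000, 200)   # nm
      noise = 0.01 + 0.05 * np.abs(np.sin(wavelengths / 60.0))
      spectra = 1.0 + noise * rng.normal(size=(100, 200))  # repeats x lambda
      sd_map = spectra.std(axis=0)                # the standard deviation map
      best = wavelengths[np.argsort(sd_map)[:2]]  # two quietest wavelengths
      print(f"selected wavelengths: {best.round(1)} nm")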

  4. Computation of standard deviations in eigenvalue calculations

    International Nuclear Information System (INIS)

    Gelbard, E.M.; Prael, R.

    1990-01-01

    In Brissenden and Garlick (1985), the authors propose a modified Monte Carlo method for eigenvalue calculations, designed to decrease particle transport biases in the flux and eigenvalue estimates, and in corresponding estimates of standard deviations. Apparently a very similar method has been used by Soviet Monte Carlo specialists. The proposed method is based on the generation of 'superhistories', chains of histories run in sequence without intervening renormalization of the fission source. This method appears to have some disadvantages, discussed elsewhere. Earlier numerical experiments suggest that biases in fluxes and eigenvalues are negligibly small, even for very small numbers of histories per generation. Now more recent experiments, run on the CRAY-XMP, tend to confirm these earlier conclusions. The new experiments, discussed in this paper, involve the solution of one-group 1D diffusion theory eigenvalue problems, in difference form, via Monte Carlo. Experiments covered a range of dominance ratios from ∼0.75 to ∼0.985. In all cases flux and eigenvalue biases were substantially smaller than one standard deviation. The conclusion that, in practice, the eigenvalue bias is negligible has strong theoretical support.

  5. The standard deviation method: data analysis by classical means and by neural networks

    International Nuclear Information System (INIS)

    Bugmann, G.; Stockar, U. von; Lister, J.B.

    1989-08-01

    The Standard Deviation Method is a method for determining particle size which can be used, for instance, to determine air-bubble sizes in a fermentation bio-reactor. The transmission coefficient of an ultrasound beam through a gassy liquid is measured repetitively. Due to the displacements and random positions of the bubbles, the measurements show a scatter whose standard deviation depends on the bubble size. The precise relationship between the measured standard deviation, the transmission and the particle size has been obtained from a set of computer-simulated data.

  6. Using variances to comply with resource conservation and recovery act treatment standards

    International Nuclear Information System (INIS)

    Ranek, N.L.

    2002-01-01

    When a waste generated, treated, or disposed of at a site in the United States is classified as hazardous under the Resource Conservation and Recovery Act and is destined for land disposal, the waste manager responsible for that site must select an approach to comply with land disposal restrictions (LDR) treatment standards. This paper focuses on the approach of obtaining a variance from existing, applicable LDR treatment standards. It describes the types of available variances, which include (1) determination of equivalent treatment (DET); (2) treatability variance; and (3) treatment variance for contaminated soil. The process for obtaining each type of variance is also described. Data are presented showing that historically the U.S. Environmental Protection Agency (EPA) processed DET petitions within one year of their date of submission. However, a 1999 EPA policy change added public participation to the DET petition review, which may lengthen processing time in the future. Regarding site-specific treatability variances, data are presented showing an EPA processing time of between 10 and 16 months. Only one generically applicable treatability variance has been granted, which took 30 months to process. No treatment variances for contaminated soil, which were added to the federal LDR program in 1998, are identified as having been granted.

  7. Standard deviation and standard error of the mean.

    Science.gov (United States)

    Lee, Dong Kyu; In, Junyong; Lee, Sangseok

    2015-06-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
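
    The distinction in one computation (a minimal sketch): the SD describes the spread of the observations and stabilizes as n grows, while the SEM = SD / sqrt(n) describes the uncertainty of the sample mean and shrinks.

      import numpy as np

      rng = np.random.default_rng(9)
      for n in (10, 100, 1000):
          x = rng.normal(loc=120, scale=15, size=n)
          sd = x.std(ddof=1)
          sem = sd / np.sqrt(n)
          print(f"n={n:4d}  SD={sd:5.2f}  SEM={sem:5.2f}")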

  8. Standard Test Method for Measuring Optical Angular Deviation of Transparent Parts

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1996-01-01

    1.1 This test method covers measuring the angular deviation of a light ray imposed by transparent parts such as aircraft windscreens and canopies. The results are uncontaminated by the effects of lateral displacement, and the procedure may be performed in a relatively short optical path length. This is not intended as a referee standard. It is one convenient method for measuring angular deviations through transparent windows. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  9. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...

  10. Use of variance techniques to measure dry air-surface exchange rates

    Science.gov (United States)

    Wesely, M. L.

    1988-07-01

    The variances of fluctuations of scalar quantities can be measured and interpreted to yield indirect estimates of their vertical fluxes in the atmospheric surface layer. Strong correlations among scalar fluctuations indicate a similarity of transfer mechanisms, which is utilized in some of the variance techniques. The ratios of the standard deviations of two scalar quantities, for example, can be used to estimate the flux of one if the flux of the other is measured, without knowledge of atmospheric stability. This is akin to a modified Bowen ratio approach. Other methods such as the normalized standard-deviation technique and the correlation-coefficient technique can be utilized effectively if atmospheric stability is evaluated and certain semi-empirical functions are known. In these cases, iterative calculations involving measured variances of fluctuations of temperature and vertical wind velocity can be used in place of direct flux measurements. For a chemical sensor whose output is contaminated by non-atmospheric noise, covariances with fluctuations of scalar quantities measured with a very good signal-to-noise ratio can be used to extract the needed standard deviation. Field measurements have shown that many of these approaches are successful for gases such as ozone and sulfur dioxide, as well as for temperature and water vapor, and could be extended to other trace substances. In humid areas, it appears that water vapor fluctuations often have a higher degree of correlation to fluctuations of other trace gases than do temperature fluctuations; this makes water vapor a more reliable companion or “reference” scalar. These techniques provide some reliable research approaches but, for routine or operational measurement, they are limited by the need for fast-response sensors. Also, all variance approaches require some independent means to estimate the direction of the flux.
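
    A sketch of the ratio-of-standard-deviations estimate with synthetic fluctuations (an illustrative construction in which the reference scalar's flux is assumed to be measured directly and perfect similarity of transfer is built in):

      import numpy as np

      rng = np.random.default_rng(4)
      w = rng.normal(size=20000)                 # vertical wind fluctuations
      ref = 0.8 * w + 0.6 * rng.normal(size=w.size)  # reference scalar
      gas = 0.5 * ref                            # similar transfer mechanism
      flux_ref = np.mean(w * ref)                # measured eddy flux
      flux_gas = flux_ref * gas.std() / ref.std()  # variance-technique value
      print(flux_gas, np.mean(w * gas))          # agree when similarity holds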

  13. Mean-variance portfolio allocation with a value at risk constraint

    OpenAIRE

    Enrique Sentana

    2001-01-01

    In this paper, I first provide a simple unifying approach to static Mean-Variance analysis and Value at Risk, which highlights their similarities and differences. Then I use it to explain how fund managers can take investment decisions that satisfy the VaR restrictions imposed on them by regulators, within the well-known Mean-Variance allocation framework. I do so by introducing a new type of line to the usual mean-standard deviation diagram, called IsoVaR, which represents all the portfolios ...

  14. Robust Confidence Interval for a Ratio of Standard Deviations

    Science.gov (United States)

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  15. Improvement of least-squares collocation error estimates using local GOCE Tzz signal standard deviations

    DEFF Research Database (Denmark)

    Tscherning, Carl Christian

    2015-01-01

    ... outside the data area. On the other hand, a comparison of predicted quantities with observed values shows that the error also varies depending on the local data standard deviation. This quantity may be (and has been) estimated using the GOCE second-order vertical derivative, Tzz, in the area covered by the satellite. The ratio between the nearly constant standard deviations of a predicted quantity (e.g. in a 25° × 25° area) and the standard deviations of Tzz in smaller cells (e.g., 1° × 1°) has been used as a scale factor in order to obtain more realistic error estimates. This procedure has been applied...

  16. Effects of central nervous system drugs on driving: speed variability versus standard deviation of lateral position as outcome measure of the on-the-road driving test.

    Science.gov (United States)

    Verster, Joris C; Roth, Thomas

    2014-01-01

    The on-the-road driving test in normal traffic is used to examine the impact of drugs on driving performance. This paper compares the sensitivity of standard deviation of lateral position (SDLP) and SD speed in detecting driving impairment. A literature search was conducted to identify studies applying the on-the-road driving test, examining the effects of anxiolytics, antidepressants, antihistamines, and hypnotics. The proportion of comparisons (treatment versus placebo) where a significant impairment was detected with SDLP and SD speed was compared. About 40% of 53 relevant papers did not report data on SD speed and/or SDLP. After placebo administration, the correlation between SDLP and SD speed was significant but did not explain much variance (r = 0.253, p = 0.0001). A significant correlation was found between ΔSDLP and ΔSD speed (treatment-placebo), explaining 48% of variance. When using SDLP as outcome measure, 67 significant treatment-placebo comparisons were found. Only 17 (25.4%) were significant when SD speed was used as outcome measure. Alternatively, for five treatment-placebo comparisons, a significant difference was found for SD speed but not for SDLP. Standard deviation of lateral position is a more sensitive outcome measure to detect driving impairment than speed variability.

  17. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…

  18. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decision-maker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decision-maker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann-Morgenstern utility theory.
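
    The argument can be seen in a two-outcome example (toy numbers of my own choosing): raising the probability of a fixed gain of 100 from 0.1 to 0.5, with no possibility of loss, is unambiguously better, yet a sufficiently variance-averse mean-variance score ranks it worse, because the variance 100^2 * p * (1 - p) rises with p over this range.

      def mv_score(p, gain=100.0, lam=0.03):
          """Mean-variance score mu - lam * var for a gain w.p. p, else 0."""
          mean = p * gain
          var = gain ** 2 * p * (1 - p)
          return mean - lam * var

      print(mv_score(0.1), mv_score(0.5))  # -17.0 -25.0: dominated choice wins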

  19. 40 CFR 268.44 - Variance from a treatment standard.

    Science.gov (United States)

    2010-07-01

    ... complete petition may be requested as needed to send to affected states and Regional Offices. (e) The... provide an opportunity for public comment. The final decision on a variance from a treatment standard will... than) the concentrations necessary to minimize short- and long-term threats to human health and the...

  20. 7 CFR 1724.52 - Permitted deviations from RUS construction standards.

    Science.gov (United States)

    2010-01-01

    ... neutrals to provide the required electric service to a consumer, the RUS standard transformer secondary... UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES Electric System Design § 1724.52 Permitted deviations from RUS construction...

  1. DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN VARIANCE MODEL

    Directory of Open Access Journals (Sweden)

    I GEDE ERY NISCAHYANA

    2016-08-01

    Full Text Available When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean variance model on autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution, with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) FMII stock, 0.0473 (5%) BNLI stock, 0% SMDM stock, and 1% SMGR stock.
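
    A sketch of the first modelling step described above, assuming the Python arch package and a synthetic series in place of the four listed return series (all inputs here are placeholders):

    ```python
    import numpy as np
    from arch import arch_model

    # Toy daily returns in percent; real inputs would be the FMII.JK, BNLI.JK,
    # SMDM.JK and SMGR.JK return series (one fit per stock).
    rng = np.random.default_rng(8)
    returns = rng.standard_t(df=5, size=1000) * 1.3

    # GARCH(1,1) with Student-t standardized innovations, as in the abstract.
    am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
    res = am.fit(disp="off")

    print(res.params)                     # mu, omega, alpha[1], beta[1], nu
    cond_sd = res.conditional_volatility  # feeds the conditional mean-variance step
    ```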

  2. Standard deviation of scatterometer measurements from space.

    Science.gov (United States)

    Fischer, R. E.

    1972-01-01

    The standard deviation of scatterometer measurements has been derived under assumptions applicable to spaceborne scatterometers. Numerical results are presented which show that, with sufficiently long integration times, input signal-to-noise ratios below unity do not cause excessive degradation of measurement accuracy. The effects on measurement accuracy due to varying integration times and changing the ratio of signal bandwidth to IF filter-noise bandwidth are also plotted. The results of the analysis may resolve a controversy by showing that in fact statistically useful scatterometer measurements can be made from space using a 20-W transmitter, such as will be used on the S-193 experiment for Skylab-A.

  3. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β ≈ 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β ≈ -0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.

  4. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    Science.gov (United States)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β ≈ 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β ≈ -0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
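
    The scaling law σ(R) ~ S^(-β) reported here can be recovered from data by a log-log regression of the binned standard deviation of growth rates on average size; a self-contained simulation sketch (parameter values hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a, beta = 0.3, 0.14                        # hypothetical scaling parameters
    S = 10 ** rng.uniform(2, 8, 50_000)        # sizes spanning six decades
    # Laplace-distributed growth rates; sd of Laplace = sqrt(2) * scale
    R = rng.laplace(0.0, (a * S**-beta) / np.sqrt(2))

    # Bin by size, then regress log sd(R) on log mean size
    bins = np.logspace(2, 8, 13)
    idx = np.digitize(S, bins)
    sizes = np.array([S[idx == i].mean() for i in range(1, len(bins))])
    sds = np.array([R[idx == i].std() for i in range(1, len(bins))])
    slope, intercept = np.polyfit(np.log(sizes), np.log(sds), 1)
    print(f"estimated beta ~ {-slope:.3f}")    # should recover roughly 0.14
    ```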

  5. 75 FR 383 - Canned Pacific Salmon Deviating From Identity Standard; Extension of Temporary Permit for Market...

    Science.gov (United States)

    2010-01-05

    ...] Canned Pacific Salmon Deviating From Identity Standard; Extension of Temporary Permit for Market Testing... test products designated as ``skinless and boneless sockeye salmon'' that deviate from the U.S. standard of identity for canned Pacific salmon. The extension will allow the permit holder to continue to...

  6. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, report the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method that incorporates the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods for the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
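
    For orientation, estimators of the type the paper studies can be coded in a few lines; the constants below follow the Wan et al. formulas as commonly cited and should be verified against the published paper before use:

    ```python
    from scipy.stats import norm

    def mean_sd_from_median_range(a, m, b, n):
        """From {min a, median m, max b, sample size n} (Wan et al. scenario 1)."""
        mean = (a + 2 * m + b) / 4.0
        sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
        return mean, sd

    def mean_sd_from_median_iqr(q1, m, q3, n):
        """From {first quartile q1, median m, third quartile q3, size n}
        (Wan et al. scenario 3)."""
        mean = (q1 + m + q3) / 3.0
        sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
        return mean, sd

    print(mean_sd_from_median_range(10, 30, 70, 50))
    print(mean_sd_from_median_iqr(22, 30, 40, 50))
    ```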

  7. Dynamics of the standard deviations of three wind velocity components from the data of acoustic sounding

    Science.gov (United States)

    Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.

    2017-11-01

    The spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer is analyzed. During the day on September 16 and at night on September 12, the standard deviation varied from 0.5 to 4 m/s for the x- and y-components and from 0.2 to 1.2 m/s for the z-component. An analysis of the vertical profiles of the standard deviations of the three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power-law dependence with an exponent changing from 0.22 to 1.3 depending on the time of day, while σz depends linearly on altitude. The approximation constants have been found and their errors estimated. The established physical regularities and the approximation constants describe the spatiotemporal dynamics of the standard deviations of the three wind velocity components in the atmospheric boundary layer and can be recommended for application in ABL models.

  8. Deviating from the standard: effects on labor continuity and career patterns

    NARCIS (Netherlands)

    Roman, A.A.

    2006-01-01

    Deviating from a standard career path is increasingly becoming an option for individuals to combine paid labor with other important life domains. These career detours emerge in diverse labor forms such as part-time jobs, temporary working hour reductions, and labor force time-outs, used to alleviate

  9. What to use to express the variability of data: Standard deviation or standard error of mean?

    OpenAIRE

    Barde, Mohini P.; Barde, Prajakt J.

    2012-01-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data from the mean. As reade...

  10. Large deviations and portfolio optimization

    Science.gov (United States)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A major point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use Cramér's theory of large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
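
    The distinction between the average and the typical return in multiplicative processes is easy to reproduce numerically; a minimal sketch with hypothetical volatility:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 100, 100_000                  # horizon (periods), number of scenarios
    r = rng.normal(0.0, 0.1, (N, T))     # zero-drift log-returns, sd 0.1 per period

    wealth = np.exp(r.sum(axis=1))       # multiplicative growth of 1 unit

    # The mean is pulled up by rare large outcomes (theory: exp(T*sigma^2/2) ~ 1.65),
    # while the typical (median) outcome stays near 1.
    print(wealth.mean(), np.median(wealth))
    ```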

  11. Phase-I monitoring of standard deviations in multistage linear profiles

    Science.gov (United States)

    Kalaei, Mahdiyeh; Soleimani, Paria; Niaki, Seyed Taghi Akhavan; Atashgar, Karim

    2018-03-01

    In most modern manufacturing systems, products are often the output of multistage processes. In these processes, the stages are dependent on each other: the output quality of each stage depends on the output quality of the previous stages. This property is called the cascade property. Although there are many studies on multistage process monitoring, there are fewer works on profile monitoring in multistage processes, especially on variability monitoring of a multistage profile in Phase I, for which no research is found in the literature. In this paper, a new methodology is proposed to monitor, in Phase I, the standard deviation involved in a simple linear profile in multistage processes with the cascade property. To this aim, an autoregressive correlation model between the stages is considered first. Then, the effect of the cascade property on the performances of three types of T² control charts in Phase I with shifts in standard deviation is investigated. As we show that this effect is significant, a U statistic is next used to remove the cascade effect, based on which the investigated control charts are modified. Simulation studies reveal good performances of the modified control charts.

  12. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and the bias that arise when Gelbard's batch method is applied are derived analytically. The real variance estimated from this bias is then compared with the real variance calculated from replicas. When the batch method is applied to calculate the sample variance, covariance terms between tallies within a batch are eliminated from the bias. With the 2 by 2 fission matrix problem, the real variance could be calculated regardless of whether or not the batch method was applied. However, as the batch size became larger, the standard deviation of the real variance increased. When we perform a Monte Carlo estimation, we obtain a sample variance as its statistical uncertainty. However, this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised the method now called Gelbard's batch method. It has been verified that the sample variance gets closer to the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the Monte Carlo field. However, so far, no one has given an analytical interpretation of it.
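
    The idea behind batching (not the MCNP implementation itself) can be illustrated on serially correlated samples, where the naive sample variance of the mean is biased low and batch means largely remove the bias:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, rho = 100_000, 0.9
    x = np.empty(N)
    x[0] = rng.normal()
    for i in range(1, N):                      # AR(1): successive tallies correlated
        x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

    naive_var_of_mean = x.var(ddof=1) / N      # ignores correlation -> biased low

    B = 1000                                   # batch size
    means = x.reshape(-1, B).mean(axis=1)      # batch means are nearly independent
    batch_var_of_mean = means.var(ddof=1) / len(means)

    # For AR(1) the true inflation factor is (1+rho)/(1-rho) = 19 here.
    print(naive_var_of_mean, batch_var_of_mean)
    ```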

  13. Is standard deviation of daily PM2.5 concentration associated with respiratory mortality?

    Science.gov (United States)

    Lin, Hualiang; Ma, Wenjun; Qiu, Hong; Vaughn, Michael G; Nelson, Erik J; Qian, Zhengmin; Tian, Linwei

    2016-09-01

    Studies on health effects of air pollution often use daily mean concentration to estimate exposure while ignoring daily variations. This study examined the health effects of daily variation of PM2.5. We calculated daily means and standard deviations of PM2.5 in Hong Kong between 1998 and 2011. We used a generalized additive model to estimate the association between respiratory mortality and daily mean and variation of PM2.5, as well as their interaction. We controlled for potential confounders, including temporal trends, day of the week, meteorological factors, and gaseous air pollutants. Both daily mean and standard deviation of PM2.5 were significantly associated with mortality from overall respiratory diseases and pneumonia. Each 10 μg/m³ increment in daily mean concentration at lag 2 days was associated with a 0.61% (95% CI: 0.19%, 1.03%) increase in overall respiratory mortality and a 0.67% (95% CI: 0.14%, 1.21%) increase in pneumonia mortality. A 10 μg/m³ increase in standard deviation at lag 1 day corresponded to a 1.40% (95% CI: 0.35%, 2.46%) increase in overall respiratory mortality and a 1.80% (95% CI: 0.46%, 3.16%) increase in pneumonia mortality. We also observed a positive but non-significant synergistic interaction between daily mean and variation on respiratory and pneumonia mortality. However, we did not find any significant association with mortality from chronic obstructive pulmonary diseases. Our study suggests that, besides mean concentration, the standard deviation of PM2.5 might be one potential predictor of respiratory mortality in Hong Kong, and should be considered when assessing the respiratory effects of PM2.5. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As a task of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the compound system of SS316 and water has been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and the variance reduction by the Weight Window method of the MCNP code proved limited and complicated in this application. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are shown. As results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance assignment: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)

  15. Segmentation Using Symmetry Deviation

    DEFF Research Database (Denmark)

    Hollensen, Christian; Højgaard, L.; Specht, L.

    2011-01-01

    ... of the CT-scans into a single atlas. Afterwards the standard deviation of anatomical symmetry for the 20 normal patients was evaluated using non-rigid registration and registered onto the atlas to create an atlas of normal anatomical symmetry deviation. The same non-rigid registration was used on the 10 hypopharyngeal cancer patients to find anatomical symmetry and evaluate it against the standard deviation of the normal patients to locate pathologic volumes. Combining this information with an absolute PET threshold of 3 standard uptake value (SUV), a volume was automatically delineated. The overlap of automated ... The standard deviation of the anatomical symmetry, seen in the figure for one patient along CT and PET, was extracted for normal patients and compared with the deviation for cancer patients, giving a new way of determining cancer pathology location. Using the novel method an overlap concordance index ...

  16. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    Science.gov (United States)

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
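
    A rejection-ABC sketch of the general approach (priors, tolerance, and summary statistics are illustrative, not the authors' settings):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Observed summaries from a hypothetical study: size n, [median, min, max]
    n, obs = 50, np.array([29.0, 8.0, 71.0])

    draws, eps, kept = 100_000, 3.0, []
    mu = rng.uniform(0, 60, draws)            # flat priors over a plausible range
    sigma = rng.uniform(0.1, 40, draws)
    for m, s in zip(mu, sigma):
        sample = rng.normal(m, s, n)          # simulate a study under (mu, sigma)
        stats = np.array([np.median(sample), sample.min(), sample.max()])
        if np.abs(stats - obs).max() < eps:   # accept if summaries are close
            kept.append((m, s))

    kept = np.array(kept)
    print(len(kept), kept.mean(axis=0))       # ABC posterior means for (mu, sigma)
    ```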

  17. Determination of the relations governing the evolution of the standard deviations of the distribution of pollution

    International Nuclear Information System (INIS)

    Crabol, B.

    1985-04-01

    An original concept concerning the different behaviour of high-frequency (small-scale) and low-frequency (large-scale) atmospheric turbulence relative to the mean wind speed has been introduced. Through a dimensional analysis based on TAYLOR's formulation, it has been shown that the governing parameter of the atmospheric dispersion standard deviations is the travel distance near the source and the travel time far from the source. Using hypotheses on the energy spectrum in the atmosphere, a numerical application has made it possible to quantify the evolution of the horizontal standard deviation for different mean wind speeds between 0.2 and 10 m/s. The areas of validity of each parameter (travel distance or travel time) are clearly shown. The first is confined to the near field and is all the smaller as the wind speed decreases. For t > 5000 s, the dependence on the wind speed of the horizontal standard deviation, expressed as a function of travel time, becomes insignificant: the horizontal standard deviation is a function of travel time only. Results are compared with experimental data obtained in the atmosphere. The similar evolution of the calculated and experimental curves confirms the validity of the hypotheses and input data of the calculation. This study can be applied to the transport of radioactive effluents in the atmosphere
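
    As background, TAYLOR's classical relation, on which such dimensional analyses rest, links the dispersion standard deviation to the Lagrangian velocity autocorrelation R(τ) and yields exactly the two regimes discussed; this is the textbook statement, included for reference rather than reproduced from the report:

    ```latex
    \sigma_y^2(t) = 2\,\sigma_v^2 \int_0^t (t-\tau)\, R(\tau)\, d\tau,
    \qquad
    \sigma_y(t) \approx
    \begin{cases}
    \sigma_v\, t, & t \ll T_L,\\
    \sigma_v \sqrt{2\, T_L\, t}, & t \gg T_L,
    \end{cases}
    ```

    where T_L is the Lagrangian integral time scale.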

  18. Standard Practice for Optical Distortion and Deviation of Transparent Parts Using the Double-Exposure Method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This photographic practice determines the optical distortion and deviation of a line of sight through a simple transparent part, such as a commercial aircraft windshield or a cabin window. This practice applies to essentially flat or nearly flat parts and may not be suitable for highly curved materials. 1.2 Test Method F 801 addresses optical deviation (angular deviation) and Test Method F 2156 addresses optical distortion using grid line slope. These test methods should be used instead of Practice F 733 whenever practical. 1.3 This standard does not purport to address the safety concerns associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  19. Quantitative evaluation of standard deviations of group velocity dispersion in optical fibre using parametric amplification

    DEFF Research Database (Denmark)

    Rishøj, Lars Søgaard; Svane, Ask Sebastian; Lund-Hansen, Toke

    2014-01-01

    A numerical model for parametric amplifiers which includes stochastic variations of the group velocity dispersion (GVD) is presented. The impact on the gain is investigated, both with respect to the magnitude of the variations and with respect to the effect caused by changing the wavelength of the pump. It is demonstrated that the described model is able to predict the experimental results and thereby provide a quantitative evaluation of the standard deviation of the GVD. For the investigated fibre, a standard deviation of 0.01 ps/(nm km) was found.

  20. Clustering Indian Ocean Tropical Cyclone Tracks by the Standard Deviational Ellipse

    Directory of Open Access Journals (Sweden)

    Md. Shahinoor Rahman

    2018-05-01

    Full Text Available The standard deviational ellipse is useful to analyze the shape and the length of a tropical cyclone (TC) track. Cyclone intensity at each six-hour position is used as the weight at that location. Only named cyclones in the Indian Ocean since 1981 are considered for this study. The K-means clustering algorithm is used to cluster Indian Ocean cyclones based on the five parameters: x-y coordinates of the mean center, variances along zonal and meridional directions, and covariance between zonal and meridional locations of the cyclone track. Four clusters are identified across the Indian Ocean; among them, only one cluster is in the North Indian Ocean (NIO) and the rest of them are in the South Indian Ocean (SIO). Other characteristics associated with each cluster, such as wind speed, lifespan, track length, track orientation, seasonality, landfall, category during landfall, total accumulated cyclone energy (ACE), and cyclone trend, are analyzed and discussed. Cyclone frequency and energy of Cluster 4 (in the NIO) have been following a linear increasing trend. Cluster 4 also has a higher number of landfall cyclones compared to other clusters. Cluster 2, located in the middle of the SIO, is characterized by the long track, high intensity, long lifespan, and high accumulated energy. Sea surface temperature (SST) and outgoing longwave radiation (OLR) associated with genesis of TCs are also examined in each cluster. Cyclone genesis is co-located with the negative OLR anomaly and the positive SST anomaly. Localized SST anomalies are associated with clusters in the SIO; however, TC geneses of Cluster 4 are associated with SST anomalies all over the Indian Ocean (IO).
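
    The five clustering features listed above (weighted mean center, zonal and meridional variances, and their covariance) and the K-means step can be sketched as follows, with synthetic tracks standing in for the best-track data:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def sde_features(lons, lats, w):
        """Weighted mean center, zonal/meridional variances and covariance of one
        cyclone track; weights w are the six-hourly intensities."""
        mx, my = np.average(lons, weights=w), np.average(lats, weights=w)
        dx, dy = lons - mx, lats - my
        vx = np.average(dx**2, weights=w)
        vy = np.average(dy**2, weights=w)
        cxy = np.average(dx * dy, weights=w)
        return np.array([mx, my, vx, vy, cxy])

    # Hypothetical tracks: tuples of (lons, lats, intensities)
    rng = np.random.default_rng(5)
    tracks = [(rng.uniform(50, 100, 20), rng.uniform(-30, 20, 20),
               rng.uniform(20, 120, 20)) for _ in range(100)]

    X = np.array([sde_features(*t) for t in tracks])
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(labels))                 # cluster sizes
    ```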

  1. More recent robust methods for the estimation of mean and standard deviation of data

    International Nuclear Information System (INIS)

    Kanisch, G.

    2003-01-01

    Outliers in a data set result in biased values of the mean and standard deviation. One way to improve the estimation of a mean is to apply tests to identify outliers and to exclude them from the calculations. Tests according to Grubbs or to Dixon, which are frequently used in practice, especially within laboratory intercomparisons, are not very efficient in identifying outliers. For more than ten years now, so-called robust methods have been used more and more; these determine mean and standard deviation by iteration, down-weighting values far from the mean and thereby diminishing the impact of outliers. In 1989 the Analytical Methods Committee of the British Royal Society of Chemistry published such a robust method. Since 1993 the US Environmental Protection Agency has published a more efficient and quite versatile method, in which mean and standard deviation are calculated by iteration and application of a special weight function for down-weighting outlier candidates. In 2000, W. Cofino et al. published a very efficient robust method which works quite differently from the others. It applies methods taken from the basics of quantum mechanics, such as "wave functions" associated with each laboratory mean value, and matrix algebra (solving eigenvalue problems). In contrast to the other methods, it includes the individual measurement uncertainties. (orig.)
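
    A compact sketch of the iterative down-weighting idea; the constants follow ISO 13528's "Algorithm A", which is in the same family as the AMC 1989 robust method mentioned above (verify the constants before production use):

    ```python
    import numpy as np

    def robust_mean_sd(x, tol=1e-6):
        """Huber-type iteration: winsorize at m +/- 1.5 s, then update m and s."""
        x = np.asarray(x, dtype=float)
        m = np.median(x)
        s = 1.483 * np.median(np.abs(x - m))     # robust start: scaled MAD
        for _ in range(100):
            clipped = np.clip(x, m - 1.5 * s, m + 1.5 * s)
            m_new = clipped.mean()
            s_new = 1.134 * clipped.std(ddof=1)  # 1.134 restores consistency
            if abs(m_new - m) < tol and abs(s_new - s) < tol:
                break
            m, s = m_new, s_new
        return m, s

    data = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 14.7]   # one gross outlier
    print(robust_mean_sd(data))   # the outlier is winsorized, not simply removed
    ```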

  2. The variance of length of stay and the optimal DRG outlier payments.

    Science.gov (United States)

    Felder, Stefan

    2009-09-01

    Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related groups (DRGs), are complemented by outlier payments for long-stay patients. The outlier scheme fixes the length of stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates, for normally distributed truncated LOS, that the optimal outlier threshold indeed decreases with an increase in the standard deviation.

  3. Improving IQ measurement in intellectual disabilities using true deviation from population norms.

    Science.gov (United States)

    Sansone, Stephanie M; Schneider, Andrea; Bickel, Erika; Berry-Kravis, Elizabeth; Prescott, Christina; Hessl, David

    2014-01-01

    Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID, using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots of the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Traditional methods for generating IQ scores in lower-functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment.
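
    The core transformation is simple: raw scores are referenced to general-population norms rather than converted to floored scaled scores. A minimal sketch with hypothetical norm values:

    ```python
    import numpy as np

    def deviation_z(raw, norm_mean, norm_sd):
        """Deviation z-score of a raw subtest score relative to the general-
        population standardization sample (norm values here are hypothetical)."""
        return (np.asarray(raw, dtype=float) - norm_mean) / norm_sd

    # Two individuals who both floor out at the same scaled score may still
    # differ meaningfully in raw performance:
    print(deviation_z([4, 11], norm_mean=30.0, norm_sd=7.0))  # about [-3.71, -2.71]
    ```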

  4. Distribution of Standard deviation of an observable among superposed states

    OpenAIRE

    Yu, Chang-shui; Shao, Ting-ting; Li, Dong-mo

    2016-01-01

    The standard deviation (SD) quantifies the spread of the observed values on a measurement of an observable. In this paper, we study the distribution of SD among the different components of a superposition state. It is found that the SD of an observable on a superposition state can be well bounded by the SDs of the superposed states. We also show that the bounds also serve as good bounds on coherence of a superposition state. As a further generalization, we give an alternative definition of in...

  5. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    Science.gov (United States)

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by running iteratively expectation maximization-REML algorithm by the implementation of double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10⁻³ and 4.17×10⁻³ for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for herd × test-day effect and between 0.55 and 0.97 for permanent environmental effect. Therefore, nongenetic effects also

  6. Estimating maize water stress by standard deviation of canopy temperature in thermal imagery

    Science.gov (United States)

    A new crop water stress index using standard deviation of canopy temperature as an input was developed to monitor crop water status. In this study, thermal imagery was taken from maize under various levels of deficit irrigation treatments in different crop growing stages. The Expectation-Maximizatio...

  7. WASP (Write a Scientific Paper) using Excel -5: Quartiles and standard deviation.

    Science.gov (United States)

    Grech, Victor

    2018-03-01

    The almost inevitable descriptive statistics exercise that is undergone once data collection is complete, prior to inferential statistics, requires the acquisition of basic descriptors which may include standard deviation and quartiles. This paper provides pointers as to how to do this in Microsoft Excel™ and explains the relationship between the two. Copyright © 2018 Elsevier B.V. All rights reserved.
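
    The relationship between the two descriptors alluded to above can be checked numerically: for normal data the interquartile range is about 1.35 standard deviations (the corresponding Excel function names are shown in the comments):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    x = rng.normal(100, 15, 1_000_000)

    sd = x.std(ddof=1)                      # Excel: =STDEV.S(range)
    q1, q3 = np.percentile(x, [25, 75])     # Excel: =QUARTILE.INC(range, 1) and (range, 3)
    iqr = q3 - q1

    print(sd, iqr, iqr / sd)                # for normal data IQR ~ 1.349 * SD
    ```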

  8. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...

  9. An estimator for the standard deviation of a natural frequency. I.

    Science.gov (United States)

    Schiff, A. J.; Bogdanoff, J. L.

    1971-01-01

    A brief review of mean-square approximate systems is given. The case in which the masses are deterministic is considered first in the derivation of an estimator for the upper bound of the standard deviation of a natural frequency. Two examples presented include a two-degree-of-freedom system and a case in which the disorder in the springs is perfectly correlated. For purposes of comparison, a Monte Carlo simulation was done on a digital computer.

  10. Variance reduction techniques for 14 MeV neutron streaming problem in rectangular annular bent duct

    Energy Technology Data Exchange (ETDEWEB)

    Ueki, Kotaro [Ship Research Inst., Mitaka, Tokyo (Japan)

    1998-03-01

    The Monte Carlo method is a powerful technique for solving a wide range of radiation transport problems. Its strengths are that it can solve the Boltzmann transport equation almost without approximation, and that the complexity of the systems to be treated rarely becomes a problem. However, Monte Carlo calculations are always accompanied by statistical errors called variance. In shielding calculations, the standard deviation or fractional standard deviation (FSD) is used frequently; the expression for the FSD is shown. Radiation shielding problems are roughly divided into transmission through deep layers and streaming problems. In streaming problems, the large differences in weight depending on the history of particles worsen the FSD of a Monte Carlo calculation. The streaming experiment in a 14 MeV neutron rectangular annular bent duct, a typical streaming benchmark experiment carried out at the OKTAVIAN facility of Osaka University, was analyzed with MCNP 4B, and reduction of the variance, or FSD, was attempted. The experimental system is shown. The analysis model for MCNP 4B, the input data and the results of the analysis are reported, and the comparison with the experimental results is examined. (K.I.)
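
    The FSD referred to above is, in MCNP-type codes, conventionally the estimated standard deviation of the mean divided by the mean, computed from the tally moments; a small sketch on a toy tally:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy Monte Carlo tally: estimate E[f(U)] for f(u) = u^2, U ~ Uniform(0, 1)
    x = rng.random(10_000) ** 2

    n = x.size
    mean = x.mean()
    # FSD (relative error) as conventionally defined in MC shielding codes:
    # FSD = sqrt( sum(x_i^2) / (sum(x_i))^2 - 1/n )
    fsd = np.sqrt(x @ x / x.sum() ** 2 - 1.0 / n)

    print(mean, fsd)   # FSD shrinks roughly as 1/sqrt(n)
    ```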

  11. The gait standard deviation, a single measure of kinematic variability.

    Science.gov (United States)

    Sangeux, Morgan; Passmore, Elyse; Graham, H Kerr; Tirosh, Oren

    2016-05-01

    Measurement of gait kinematic variability provides relevant clinical information in certain conditions affecting the neuromotor control of movement. In this article, we present a measure of overall gait kinematic variability, GaitSD, based on combination of waveforms' standard deviation. The waveform standard deviation is the common numerator in established indices of variability such as Kadaba's coefficient of multiple correlation or Winter's waveform coefficient of variation. Gait data were collected on typically developing children aged 6-17 years. Large number of strides was captured for each child, average 45 (SD: 11) for kinematics and 19 (SD: 5) for kinetics. We used a bootstrap procedure to determine the precision of GaitSD as a function of the number of strides processed. We compared the within-subject, stride-to-stride, variability with the, between-subject, variability of the normative pattern. Finally, we investigated the correlation between age and gait kinematic, kinetic and spatio-temporal variability. In typically developing children, the relative precision of GaitSD was 10% as soon as 6 strides were captured. As a comparison, spatio-temporal parameters required 30 strides to reach the same relative precision. The ratio stride-to-stride divided by normative pattern variability was smaller in kinematic variables (the smallest for pelvic tilt, 28%) than in kinetic and spatio-temporal variables (the largest for normalised stride length, 95%). GaitSD had a strong, negative correlation with age. We show that gait consistency may stabilise only at, or after, skeletal maturity. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. A standard deviation selection in evolutionary algorithm for grouper fish feed formulation

    Science.gov (United States)

    Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul

    2016-10-01

    Malaysia is one of the major producer countries of fishery products due to its location in the equatorial environment. Grouper fish is one of the potential markets contributing to the income of the country due to its desirable taste, high demand and high price. However, the supply of grouper fish from the wild catch is still insufficient to meet demand. Therefore, there is a need to farm grouper fish to cater to the market demand. In order to farm grouper fish, prior knowledge of the proper nutrients needed is required, because no exact data are available. Therefore, in this study, primary and secondary data are collected, even though related papers are limited, and 30 samples are investigated using standard deviation selection in an evolutionary algorithm. Thus, this study unlocks frontiers for extensive research on grouper fish feed formulation. Results show that standard deviation selection in an evolutionary algorithm is applicable: feasible, low-fitness solutions can be obtained quickly. These fitness values can be used further to minimize the cost of farming grouper fish.

  13. What to use to express the variability of data: Standard deviation or standard error of mean?

    Science.gov (United States)

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing confidence intervals (CIs), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
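
    A minimal illustration of the distinction, with hypothetical data:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    sample = rng.normal(120, 15, 40)     # hypothetical measurements, n = 40

    sd = sample.std(ddof=1)              # dispersion of individual values
    sem = sd / np.sqrt(sample.size)      # uncertainty of the estimated mean
    ci = (sample.mean() - 1.96 * sem, sample.mean() + 1.96 * sem)

    print(f"SD  = {sd:.1f}  (describe the sample with this)")
    print(f"SEM = {sem:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
    ```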

  14. A stochastic model for the derivation of economic values and their standard deviations for production and functional traits in dairy cattle

    DEFF Research Database (Denmark)

    Nielsen, Hanne-Marie; Groen, A F; Østergaard, Søren

    2006-01-01

    The objective of this paper was to present a model of a dairy cattle production system for the derivation of economic values and their standard deviations for both production and functional traits under Danish production circumstances. The stochastic model used is dynamic, and simulates production ... was -0.94 €/day per cow-year. Standard deviations of economic values expressing variation in realised profit of a farm before and after a genetic change were computed using a linear Taylor series expansion. Expressed as coefficient of variation, standard deviations of economic values based on 1000

  15. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned, as compared to the mean-variance approach.

  16. 14 CFR 21.609 - Approval for deviation.

    Science.gov (United States)

    2010-01-01

    ... deviation. (a) Each manufacturer who requests approval to deviate from any performance standard of a TSO shall show that the standards from which a deviation is requested are compensated for by factors or... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Approval for deviation. 21.609 Section 21...

  17. Distribution of standard deviation of an observable among superposed states

    International Nuclear Information System (INIS)

    Yu, Chang-shui; Shao, Ting-ting; Li, Dong-mo

    2016-01-01

    The standard deviation (SD) quantifies the spread of the observed values on a measurement of an observable. In this paper, we study the distribution of SD among the different components of a superposition state. It is found that the SD of an observable on a superposition state can be well bounded by the SDs of the superposed states. We also show that the bounds also serve as good bounds on coherence of a superposition state. As a further generalization, we give an alternative definition of incompatibility of two observables subject to a given state and show how the incompatibility subject to a superposition state is distributed.

  18. Distribution of standard deviation of an observable among superposed states

    Science.gov (United States)

    Yu, Chang-shui; Shao, Ting-ting; Li, Dong-mo

    2016-10-01

    The standard deviation (SD) quantifies the spread of the observed values on a measurement of an observable. In this paper, we study the distribution of SD among the different components of a superposition state. It is found that the SD of an observable on a superposition state can be well bounded by the SDs of the superposed states. We also show that the bounds also serve as good bounds on coherence of a superposition state. As a further generalization, we give an alternative definition of incompatibility of two observables subject to a given state and show how the incompatibility subject to a superposition state is distributed.

  19. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  20. Depth (Standard Deviation) Layer used to identify, delineate and classify moderate-depth benthic habitats around St. John, USVI

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Standard deviation of depth was calculated from the bathymetry surface for each cell using the ArcGIS Spatial Analyst Focal Statistics "STD" parameter. Standard...

  1. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic

  2. Variance of indoor radon concentration: Major influencing factors

    Energy Technology Data Exchange (ETDEWEB)

    Yarmoshenko, I., E-mail: ivy@ecko.uran.ru [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Vasilyev, A.; Malinovsky, G. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Bossew, P. [German Federal Office for Radiation Protection (BfS), Berlin (Germany); Žunić, Z.S. [Institute of Nuclear Sciences “Vinca”, University of Belgrade (Serbia); Onischenko, A.; Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation)

    2016-01-15

    Variance of radon concentration in dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. Analysis includes review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements and detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that main factors influencing the dispersion of indoor radon concentration over the territory are as follows: area of territory, sample size, characteristics of measurements technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and Niška Banja town the dispersion as quantified by GSD is reduced by restricting to certain levels of control factors. Application of the developed approach to characterization of the world population radon exposure is discussed. - Highlights: • Influence of lithosphere and anthroposphere on variance of indoor radon is found. • Level-by-level analysis reduces GSD by a factor of 1.9. • Worldwide GSD is underestimated.

  3. Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets

    Science.gov (United States)

    Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad

    2017-01-01

    Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…

  4. The two errors of using the within-subject standard deviation (WSD) as the standard error of a reliable change index.

    Science.gov (United States)

    Maassen, Gerard H

    2010-08-01

    In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, being the standard error of measurement of only a single assessment makes WSD too small when practice effects are absent. Then, too many individuals will be designated reliably changed. Second, WSD can grow unlimitedly to the extent that differential practice effects occur. This can even make RCI(WSD) unable to detect any reliable change.
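
    For contrast, the classical Jacobson-Truax RCI uses a standard error of the difference derived from the baseline SD and the test-retest reliability, rather than the WSD; a sketch with hypothetical numbers:

    ```python
    import numpy as np

    def rci_jacobson_truax(x1, x2, sd_baseline, r_xx):
        """Classical RCI: change scaled by the standard error of the difference,
        S_diff = sqrt(2) * SD * sqrt(1 - r_xx) (Jacobson & Truax)."""
        s_e = sd_baseline * np.sqrt(1.0 - r_xx)   # SEM of a single assessment
        s_diff = np.sqrt(2.0) * s_e
        return (x2 - x1) / s_diff

    # Hypothetical: baseline SD 10, reliability .85, change of 12 points
    print(rci_jacobson_truax(50, 62, 10.0, 0.85))  # ~2.19 > 1.96 -> reliable change
    ```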

  5. Standard deviation of wind direction as a function of time; three hours to five hundred seventy-six hours

    International Nuclear Information System (INIS)

    Culkowski, W.M.

    1976-01-01

    The standard deviation of horizontal wind direction σθ increases with time of averaging up to a maximum value of 104°. The average standard deviations of horizontal wind direction averaged over periods of 3, 5, 10, 16, 24, 36, 48, 72, 144, 288, and 576 hours were calculated from wind data obtained from a 100 meter tower in the Oak Ridge area. For periods up to 100 hours, σθ varies as t^0.28; after 100 hours σθ varies as 6.5 ln t

  6. Muon’s (g-2): the obstinate deviation from the Standard Model

    CERN Multimedia

    Antonella Del Rosso

    2011-01-01

    It’s been 50 years since a small group at CERN measured the muon (g-2) for the first time. Several other experiments have followed over the years. The latest measurement at Brookhaven (2004) gave a value that obstinately remains about 3 standard deviations away from the prediction of the Standard Model. Francis Farley, one of the fathers of the (g-2) experiments, argues that a statement such as “everything we observe is accounted for by the Standard Model” is not acceptable.   Francis J. M. Farley. Francis J. M. Farley, Fellow of the Royal Society since 1972 and the 1980 winner of the Hughes Medal "for his ultra-precise measurements of the muon magnetic moment, a severe test of quantum electrodynamics and of the nature of the muon", is among the scientists who still look at the (g-2) anomaly as one of the first proofs of the existence of new physics. “Although it seems to be generally believed that all experiments agree with the Stan...

  7. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have lower means and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  8. Multiplicative surrogate standard deviation: a group metric for the glycemic variability of individual hospitalized patients.

    Science.gov (United States)

    Braithwaite, Susan S; Umpierrez, Guillermo E; Chase, J Geoffrey

    2013-09-01

    Group metrics are described to quantify blood glucose (BG) variability of hospitalized patients. The "multiplicative surrogate standard deviation" (MSSD) is the reverse-transformed group mean of the standard deviations (SDs) of the logarithmically transformed BG data set of each patient. The "geometric group mean" (GGM) is the reverse-transformed group mean of the means of the logarithmically transformed BG data set of each patient. Before reverse transformation is performed, the mean of means and mean of SDs each has its own SD, which becomes a multiplicative standard deviation (MSD) after reverse transformation. Statistical predictions and comparisons of parametric or nonparametric tests remain valid after reverse transformation. A subset of a previously published BG data set of 20 critically ill patients from the first 72 h of treatment under the SPRINT protocol was transformed logarithmically. After rank ordering according to the SD of the logarithmically transformed BG data of each patient, the cohort was divided into two equal groups, those having lower or higher variability. For the entire cohort, the GGM was 106 (÷/× 1.07) mg/dl, and MSSD was 1.24 (÷/× 1.07). For the subgroups having lower and higher variability, respectively, the GGM did not differ, 104 (÷/× 1.07) versus 109 (÷/× 1.07) mg/dl, but the MSSD differed, 1.17 (÷/× 1.03) versus 1.31 (÷/× 1.05), p = .00004. By using the MSSD with its MSD, groups can be characterized and compared according to glycemic variability of individual patient members. © 2013 Diabetes Technology Society.
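
    The construction described above is straightforward to reproduce; a minimal Python sketch with synthetic blood glucose data (patient count and values are illustrative only, not the SPRINT data set):

        import numpy as np

        rng = np.random.default_rng(0)
        patients = [rng.lognormal(np.log(105), 0.2, size=30) for _ in range(20)]

        log_means = np.array([np.log(p).mean() for p in patients])      # per-patient mean of log BG
        log_sds = np.array([np.log(p).std(ddof=1) for p in patients])   # per-patient SD of log BG

        ggm = np.exp(log_means.mean())    # geometric group mean (mg/dl)
        mssd = np.exp(log_sds.mean())     # multiplicative surrogate SD (unitless factor)

        # each group mean has its own SD, which reverse-transforms to the
        # multiplicative SD (MSD), the "(÷/×)" factor quoted in the abstract
        print(f"GGM  = {ggm:.0f} (÷/× {np.exp(log_means.std(ddof=1)):.2f}) mg/dl")
        print(f"MSSD = {mssd:.2f} (÷/× {np.exp(log_sds.std(ddof=1)):.2f})")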

  9. Age-independent anti-Müllerian hormone (AMH) standard deviation scores to estimate ovarian function.

    Science.gov (United States)

    Helden, Josef van; Weiskirchen, Ralf

    2017-06-01

    To determine single-year age-specific anti-Müllerian hormone (AMH) standard deviation scores (SDS) for women associated with normal ovarian function and different ovarian disorders resulting in sub- or infertility. Determination of single-year median and mean AMH values with standard deviations (SD), and calculation of age-independent cut-off SDS for the discrimination between normal ovarian function and ovarian disorders. Single-year-specific median, mean, and SD values have been evaluated for the Beckman Access AMH immunoassay. While the decrease of both median and mean AMH values is strongly correlated with increasing age, the calculated SDS values have been shown to be age independent, differentiating normal ovarian function (occurred ovulation with sufficient luteal activity) from hyperandrogenemic cycle disorders or anovulation associated with high AMH values, and from reduced ovarian activity or insufficiency associated with low AMH, respectively. These results will be helpful for the treatment of patients and the evaluation of the different reproductive options. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Testing of Software Routine to Determine Deviate and Cumulative Probability: ModStandardNormal Version 1.0

    International Nuclear Information System (INIS)

    A.H. Monib

    1999-01-01

    The purpose of this calculation is to document that the software routine ModStandardNormal Version 1.0, which is a Visual Fortran 5.0 module, provides correct results for a normal distribution up to five significant figures (three significant figures at the function tails) for a specified range of input parameters. The software routine may be used for quality affecting work. Two types of output are generated in ModStandardNormal: a deviate, x, given a cumulative probability, p, between 0 and 1; and a cumulative probability, p, given a deviate, x, between -8 and 8. This calculation supports Performance Assessment, under Technical Product Development Plan, TDP-EBS-MD-000006 (Attachment I, DIRS 3) and is written in accordance with the AP-3.12Q Calculations procedure (Attachment I, DIRS 4)
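
    The two mappings the routine provides correspond to the quantile and cumulative distribution functions of the standard normal; a quick illustration using SciPy (not the original Visual Fortran module):

        from scipy.stats import norm

        p = 0.975
        x = norm.ppf(p)               # deviate x for a given cumulative probability p
        print(round(x, 5))            # 1.95996

        print(round(norm.cdf(x), 5))  # cumulative probability for a given deviate: 0.975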

  11. Image contrast enhancement based on a local standard deviation model

    International Nuclear Information System (INIS)

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-01-01

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust high frequency components of an image. In the literature, the gain is usually inversely proportional to the local standard deviation (LSD) or is a constant. But these cause two problems in practical applications, i.e., noise overenhancement and ringing artifact. In this paper a new gain is developed based on Hunt's Gaussian image model to prevent the two defects. The new gain is a nonlinear function of LSD and has the desired characteristic emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm
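
    A minimal sketch of the generic ACE structure described above (Python; the paper's specific nonlinear gain is not reproduced here, so the bounded gain below is only an assumed stand-in that grows with the local standard deviation and saturates instead of diverging in flat, noisy regions):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def ace(image, win=7, g_max=3.0, lsd0=10.0):
            """Generic ACE: enhanced = local mean + gain * (pixel - local mean)."""
            x = image.astype(float)
            mean = uniform_filter(x, win)
            lsd = np.sqrt(np.maximum(uniform_filter(x**2, win) - mean**2, 0.0))
            gain = g_max * lsd / (lsd + lsd0)    # assumed bounded, nonlinear gain
            return mean + gain * (x - mean)

        img = np.random.default_rng(1).uniform(0.0, 255.0, (64, 64))
        enhanced = ace(img)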

  12. MUSiC - Model-independent search for deviations from Standard Model predictions in CMS

    Science.gov (United States)

    Pieta, Holger

    2010-02-01

    We present an approach for a model-independent search in CMS. Systematically scanning the data for deviations from the standard model Monte Carlo expectations, such an analysis can help to understand the detector and tune event generators. By minimizing the theoretical bias, the analysis is furthermore sensitive to a wide range of models for new physics, including the uncounted number of models not yet thought of. After sorting the events into classes defined by their particle content (leptons, photons, jets and missing transverse energy), a minimally prejudiced scan is performed on a number of distributions. Advanced statistical methods are used to determine the significance of the deviating regions, rigorously taking systematic uncertainties into account. A number of benchmark scenarios, including common models of new physics and possible detector effects, have been used to gauge the power of such a method.

  13. Prognostic implications of mutation-specific QTc standard deviation in congenital long QT syndrome.

    Science.gov (United States)

    Mathias, Andrew; Moss, Arthur J; Lopes, Coeli M; Barsheshet, Alon; McNitt, Scott; Zareba, Wojciech; Robinson, Jennifer L; Locati, Emanuela H; Ackerman, Michael J; Benhorin, Jesaia; Kaufman, Elizabeth S; Platonov, Pyotr G; Qi, Ming; Shimizu, Wataru; Towbin, Jeffrey A; Michael Vincent, G; Wilde, Arthur A M; Zhang, Li; Goldenberg, Ilan

    2013-05-01

    Individual corrected QT interval (QTc) may vary widely among carriers of the same long QT syndrome (LQTS) mutation. Currently, neither the mechanism nor the implications of this variable penetrance are well understood. We hypothesized that the assessment of QTc variance in patients with congenital LQTS who carry the same mutation provides incremental prognostic information beyond the patient-specific QTc. The study population comprised 1206 patients with LQTS with 95 different mutations, each carried by ≥5 individuals. Multivariate Cox proportional hazards regression analysis was used to assess the effect of mutation-specific standard deviation of QTc (QTcSD) on the risk of cardiac events (comprising syncope, aborted cardiac arrest, and sudden cardiac death) from birth through age 40 years in the total population and by genotype. Assessment of mutation-specific QTcSD showed large differences among carriers of the same mutations (median QTcSD 45 ms). Multivariate analysis showed that each 20 ms increment in QTcSD was associated with a significant 33% (P = .002) increase in the risk of cardiac events after adjustment for the patient-specific QTc duration and the family effect on QTc. The risk associated with QTcSD was pronounced among patients with long QT syndrome type 1 (hazard ratio 1.55 per 20 ms increment; P < .001), whereas among patients with long QT syndrome type 2, the risk associated with QTcSD was not statistically significant (hazard ratio 0.99; P = .95; P value for QTcSD-by-genotype interaction = .002). Our findings suggest that mutations with a wider variation in QTc duration are associated with increased risk of cardiac events. These findings appear to be genotype-specific, with a pronounced effect among patients with the long QT syndrome type 1 genotype. Copyright © 2013. Published by Elsevier Inc.

  14. 40 CFR 260.31 - Standards and criteria for variances from classification as a solid waste.

    Science.gov (United States)

    2010-07-01

    ... from classification as a solid waste. 260.31 Section 260.31 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.31 Standards and criteria for variances from classification as a solid waste. (a) The...

  15. Final height in survivors of childhood cancer compared with Height Standard Deviation Scores at diagnosis

    NARCIS (Netherlands)

    Knijnenburg, S. L.; Raemaekers, S.; van den Berg, H.; van Dijk, I. W. E. M.; Lieverst, J. A.; van der Pal, H. J.; Jaspers, M. W. M.; Caron, H. N.; Kremer, L. C.; van Santen, H. M.

    2013-01-01

    Our study aimed to evaluate final height in a cohort of Dutch childhood cancer survivors (CCS) and assess possible determinants of final height, including height at diagnosis. We calculated standard deviation scores (SDS) for height at initial cancer diagnosis and height in adulthood in a cohort of

  16. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time

  17. Standard deviation analysis of the mastoid fossa temperature differential reading: a potential model for objective chiropractic assessment.

    Science.gov (United States)

    Hart, John

    2011-03-01

    This study describes a model for statistically analyzing follow-up numeric-based chiropractic spinal assessments for an individual patient based on his or her own baseline. Ten mastoid fossa temperature differential readings (MFTD) obtained from a chiropractic patient were used in the study. The first eight readings served as baseline and were compared to post-adjustment readings. One of the two post-adjustment MFTD readings fell outside two standard deviations of the baseline mean and therefore theoretically represents improvement according to pattern analysis theory. This study showed how standard deviation analysis may be used to identify future outliers for an individual patient based on his or her own baseline data. Copyright © 2011 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
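
    The rule is simple to state concretely; a Python sketch with made-up MFTD readings (not the patient data from the study):

        import numpy as np

        baseline = np.array([0.6, 0.5, 0.7, 0.6, 0.5, 0.6, 0.7, 0.6])  # first 8 readings
        post = np.array([0.6, 1.1])                                     # post-adjustment

        m, s = baseline.mean(), baseline.std(ddof=1)
        print(np.abs(post - m) > 2 * s)   # [False  True]: second reading leaves the 2-SD band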

  18. First among Others? Cohen's "d" vs. Alternative Standardized Mean Group Difference Measures

    Science.gov (United States)

    Cahan, Sorel; Gamliel, Eyal

    2011-01-01

    Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., [eta][superscript 2], f[superscript 2]) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation--that is, measures of dispersion about the mean. In…

  19. Standard deviation of luminance distribution affects lightness and pupillary response.

    Science.gov (United States)

    Kanari, Kei; Kaneko, Hirohiko

    2014-12-01

    We examined whether the standard deviation (SD) of luminance distribution serves as information of illumination. We measured the lightness of a patch presented in the center of a scrambled-dot pattern while manipulating the SD of the luminance distribution. Results showed that lightness decreased as the SD of the surround stimulus increased. We also measured pupil diameter while viewing a similar stimulus. The pupil diameter decreased as the SD of luminance distribution of the stimuli increased. We confirmed that these results were not obtained because of the increase of the highest luminance in the stimulus. Furthermore, results of field measurements revealed a correlation between the SD of luminance distribution and illuminance in natural scenes. These results indicated that the visual system refers to the SD of the luminance distribution in the visual stimulus to estimate the scene illumination.

  20. Quantitative angle-insensitive flow measurement using relative standard deviation OCT.

    Science.gov (United States)

    Zhu, Jiang; Zhang, Buyun; Qi, Li; Wang, Ling; Yang, Qiang; Zhu, Zhuqing; Huo, Tiancheng; Chen, Zhongping

    2017-10-30

    Incorporating different data processing methods, optical coherence tomography (OCT) has the ability to perform high-resolution angiography and quantitative flow velocity measurements. However, OCT angiography cannot provide quantitative information on flow velocities, and the velocity measurement based on Doppler OCT requires the determination of Doppler angles, which is a challenge in a complex vascular network. In this study, we report on a relative standard deviation OCT (RSD-OCT) method which provides both vascular network mapping and quantitative information for flow velocities within a wide range of Doppler angles. The RSD values are angle-insensitive within a wide range of angles, and a nearly linear relationship was found between the RSD values and the flow velocities. The RSD-OCT measurement in a rat cortex shows that it can quantify the blood flow velocities as well as map the vascular network in vivo.
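
    As a rough sketch of the quantity involved (synthetic intensities; the authors' actual OCT processing chain is more involved than this), the RSD at each depth pixel is the standard deviation of repeated measurements divided by their mean:

        import numpy as np

        scans = np.random.default_rng(2).normal(100.0, 15.0, (8, 512))  # 8 repeats x 512 depths
        rsd = scans.std(axis=0, ddof=1) / scans.mean(axis=0)            # per-pixel RSD
        print(rsd.mean())   # ~0.15 for this synthetic signal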

  1. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  2. Heterogeneidade de variâncias na avaliação genética de búfalas no Brasil Heterogeneity of variances on genetic evaluation of buffaloes in Brazil

    Directory of Open Access Journals (Sweden)

    Antonia Kécya França Moita

    2010-07-01

    The restricted maximum likelihood method was used to estimate the (co)variance components using four bi-trait models, considering season and herd-year of birth as fixed effects and age of the cow as a covariable (linear and quadratic effects). The following models were used: additive; repeatability; additive with sire x herd-year interaction; and repeatability with sire x herd-year interaction. The herds were classified in two classes of phenotypic standard deviation for milk production, and bi-trait analyses were carried out considering each class of standard deviation as a different characteristic. A single-trait analysis was also carried out, disregarding phenotypic standard deviation classes, including the sire x herd-year interaction effect. The estimates of additive genetic variance components were higher in the high standard deviation class than in the low standard deviation class. Most of the animals selected from files without stratification were selected for high standard deviation. Despite the increase in additive variances and the error in high standard deviation classes, their heritabilities were lower, except for model 2, whose heritability was higher for the class with high standard deviation. When herds are classified into high and low phenotypic standard deviation and milk production in the different classes is evaluated as a different trait, genetic evaluation takes into account the heterogeneity of variances among herds.

  3. Adjoint-based global variance reduction approach for reactor analysis problems

    International Nuclear Information System (INIS)

    Zhang, Qiong; Abdel-Khalik, Hany S.

    2011-01-01

    A new variant of a hybrid Monte Carlo-Deterministic approach for simulating particle transport problems is presented and compared to the SCALE FW-CADIS approach. The new approach, denoted by the Subspace approach, optimizes the selection of the weight windows for reactor analysis problems where detailed properties of all fuel assemblies are required everywhere in the reactor core. Like the FW-CADIS approach, the Subspace approach utilizes importance maps obtained from deterministic adjoint models to derive automatic weight-window biasing. In contrast to FW-CADIS, the Subspace approach identifies the correlations between weight window maps to minimize the computational time required for global variance reduction, i.e., when the solution is required everywhere in the phase space. The correlations are employed to reduce the number of maps required to achieve the same level of variance reduction that would be obtained with single-response maps. Numerical experiments, serving as proof of principle, are presented to compare the Subspace and FW-CADIS approaches in terms of the global reduction in standard deviation. (author)

  4. Analyzing Vegetation Change in an Elephant-Impacted Landscape Using the Moving Standard Deviation Index

    Directory of Open Access Journals (Sweden)

    Timothy J. Fullman

    2014-01-01

    Full Text Available Northern Botswana is influenced by various socio-ecological drivers of landscape change. The African elephant (Loxodonta africana is one of the leading sources of landscape shifts in this region. Developing the ability to assess elephant impacts on savanna vegetation is important to promote effective management strategies. The Moving Standard Deviation Index (MSDI applies a standard deviation calculation to remote sensing imagery to assess degradation of vegetation. Used previously for assessing impacts of livestock on rangelands, we evaluate the ability of the MSDI to detect elephant-modified vegetation along the Chobe riverfront in Botswana, a heavily elephant-impacted landscape. At broad scales, MSDI values are positively related to elephant utilization. At finer scales, using data from 257 sites along the riverfront, MSDI values show a consistent negative relationship with intensity of elephant utilization. We suggest that these differences are due to varying effects of elephants across scales. Elephant utilization of vegetation may increase heterogeneity across the landscape, but decrease it within heavily used patches, resulting in the observed MSDI pattern of divergent trends at different scales. While significant, the low explanatory power of the relationship between the MSDI and elephant utilization suggests the MSDI may have limited use for regional monitoring of elephant impacts.
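
    In essence the MSDI replaces each pixel with the standard deviation of its neighbourhood; a minimal Python sketch (the band values and the 3 x 3 window are assumptions, not taken from the study):

        import numpy as np
        from scipy.ndimage import generic_filter

        def msdi(band, win=3):
            """Moving standard deviation of a single raster band."""
            return generic_filter(band.astype(float), np.std, size=win)

        band = np.random.default_rng(3).uniform(0.0, 1.0, (50, 50))
        index = msdi(band)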

  5. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  6. Fidelity deviation in quantum teleportation

    OpenAIRE

    Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir

    2018-01-01

    We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states and the fidelity deviation is their standard deviation, which is referred to as a concept of fluctuation or universality. In the analysis, we find the condition to optimize both measures under a noisy quantum channel---we here consider the so-called Werner channel. To characterize our resu...

  7. On the Linear Relation between the Mean and the Standard Deviation of a Response Time Distribution

    Science.gov (United States)

    Wagenmakers, Eric-Jan; Brown, Scott

    2007-01-01

    Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different…

  8. Quantum uncertainty relation based on the mean deviation

    OpenAIRE

    Sharma, Gautam; Mukhopadhyay, Chiranjib; Sazim, Sk; Pati, Arun Kumar

    2018-01-01

    Traditional forms of quantum uncertainty relations are invariably based on the standard deviation. This can be understood in the historical context of simultaneous development of quantum theory and mathematical statistics. Here, we present alternative forms of uncertainty relations, in both state dependent and state independent forms, based on the mean deviation. We illustrate the robustness of this formulation in situations where the standard deviation based uncertainty relation is inapplica...

  9. Limitations of the relative standard deviation of win percentages for measuring competitive balance in sports leagues

    OpenAIRE

    P. Dorian Owen

    2009-01-01

    The relative standard deviation of win percentages, the most widely used measure of within-season competitive balance, has an upper bound which is very sensitive to variation in the numbers of teams and games played. Taking into account this upper bound provides additional insight into comparisons of competitive balance across leagues or over time.

  10. SU-E-I-59: Investigation of the Usefulness of a Standard Deviation and Mammary Gland Density as Indexes for Mammogram Classification.

    Science.gov (United States)

    Takarabe, S; Yabuuchi, H; Morishita, J

    2012-06-01

    To investigate the usefulness of the standard deviation of pixel values in a whole mammary glands region and the percentage of a high-density mammary glands region to a whole mammary glands region as features for classification of mammograms into four categories based on the ACR BI-RADS breast composition. We used 36 digital mediolateral oblique view mammograms (18 patients) approved by our IRB. These images were classified into the four categories of breast composition by an experienced breast radiologist, and the results of the classification were regarded as a gold standard. First, the whole mammary region in a breast was divided into two regions, a high-density mammary glands region and a low/iso-density mammary glands region, by using a threshold value that was obtained from the pixel values corresponding to a pectoral muscle region. Then the percentage of the high-density mammary glands region to the whole mammary glands region was calculated. In addition, as a new method, the standard deviation of pixel values in the whole mammary glands region was calculated as an index based on the intermingling of mammary glands and fats. Finally, all mammograms were classified by using the combination of the percentage of the high-density mammary glands region and the standard deviation of each image. The agreement rate of the classification between our proposed method and the gold standard was 86% (31/36). This result signified that our method has the potential to classify mammograms. The combination of the standard deviation of pixel values in a whole mammary glands region and the percentage of a high-density mammary glands region to a whole mammary glands region was available as features to classify mammograms based on the ACR BI-RADS breast composition. © 2012 American Association of Physicists in Medicine.

  11. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    Science.gov (United States)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as wavelet and Contourlet transforms, are usually used for image fusion. This work presents a new image fusion framework utilizing area-based standard deviation in the dual tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual tree Contourlet transform; low-pass and high-pass coefficients are obtained. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and it performs better in both subjective and objective evaluation.

  12. Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel

    Science.gov (United States)

    Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying

    2017-01-01

    In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…
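
    The same demonstration the article builds in Excel VBA can be sketched in a few lines of Python: averaging many sample variances shows that dividing by n-1 recovers the true variance, while dividing by n underestimates it by the factor (n-1)/n.

        import numpy as np

        rng = np.random.default_rng(4)
        sigma2, n, reps = 4.0, 5, 100_000
        samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

        print(samples.var(axis=1, ddof=0).mean())   # divide by n   -> ~3.2, biased low
        print(samples.var(axis=1, ddof=1).mean())   # divide by n-1 -> ~4.0, unbiased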

  13. Inverse correlation between the standard deviation of R-R intervals in supine position and the simplified menopausal index in women with climacteric symptoms.

    Science.gov (United States)

    Yanagihara, Nobuyuki; Seki, Meikan; Nakano, Masahiro; Hachisuga, Toru; Goto, Yukio

    2014-06-01

    Disturbance of autonomic nervous activity has been thought to play a role in the climacteric symptoms of postmenopausal women. This study was therefore designed to investigate the relationship between autonomic nervous activity and climacteric symptoms in postmenopausal Japanese women. The autonomic nervous activity of 40 Japanese women with climacteric symptoms and 40 Japanese women without climacteric symptoms was measured by power spectral analysis of heart rate variability using a standard hexagonal radar chart. The scores for climacteric symptoms were determined using the simplified menopausal index. Sympathetic excitability and irritability, as well as the standard deviation of mean R-R intervals in supine position, differed significantly between the two groups, and there was a significant inverse correlation between the standard deviation of mean R-R intervals in supine position and the simplified menopausal index score. The lack of control for potential confounding variables was a limitation of this study. In climacteric women, the standard deviation of mean R-R intervals in supine position is negatively correlated with the simplified menopausal index score.

  14. Standard operation procedures for conducting the on-the-road driving test, and measurement of the standard deviation of lateral position (SDLP).

    Science.gov (United States)

    Verster, Joris C; Roth, Thomas

    2011-01-01

    This review discusses the methodology of the standardized on-the-road driving test and standard operation procedures to conduct the test and analyze the data. The on-the-road driving test has proven to be a sensitive and reliable method to examine driving ability after administration of central nervous system (CNS) drugs. The test is performed on a public highway in normal traffic. Subjects are instructed to drive with a steady lateral position and constant speed. Its primary parameter, the standard deviation of lateral position (SDLP), ie, an index of 'weaving', is a stable measure of driving performance with high test-retest reliability. SDLP differences from placebo are dose-dependent, and do not depend on the subject's baseline driving skills (placebo SDLP). It is important that standard operation procedures are applied to conduct the test and analyze the data in order to allow comparisons between studies from different sites.
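
    At its core the primary parameter is a plain standard deviation of the sampled lateral positions over the ride; a minimal sketch with synthetic data (the sampling rate and units are assumptions):

        import numpy as np

        # lateral position samples in cm, e.g. 1 h of driving sampled at 1 Hz
        lateral_position = np.random.default_rng(5).normal(0.0, 20.0, 3600)
        sdlp = lateral_position.std(ddof=1)
        print(f"SDLP = {sdlp:.1f} cm")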

  15. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples for its implementation, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones by using the Ishigami function. Finally, they are applied to a planar 10-bar structure

  16. Odds per adjusted standard deviation: comparing strengths of associations for risk factors measured on different scales and across diseases and populations.

    Science.gov (United States)

    Hopper, John L

    2015-11-15

    How can the "strengths" of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Given that risk estimates take into account other fitted and design-related factors-and that is how risk gradients are interpreted-so should the presentation of risk gradients. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, … , Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, … , Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR(s). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Introducing the Mean Absolute Deviation "Effect" Size

    Science.gov (United States)

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
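
    A quick numerical comparison of the two spread measures on data containing one extreme value (made-up numbers) illustrates the tolerance the paper refers to:

        import numpy as np

        x = np.array([10.0, 11.0, 9.0, 10.0, 30.0])
        mad = np.abs(x - x.mean()).mean()    # mean absolute deviation: 6.4
        sd = x.std(ddof=1)                   # standard deviation: ~9.0
        print(mad, sd)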

  18. New reference charts for testicular volume in Dutch children and adolescents allow the calculation of standard deviation scores

    NARCIS (Netherlands)

    Joustra, S.D.; Plas, E.M. van der; Goede, J.; Oostdijk, W.; Delemarre-van de Waal, H.A.; Hack, W.W.M.; Buuren, S. van; Wit, J.M.

    2015-01-01

    Aim Accurate calculations of testicular volume standard deviation (SD) scores are not currently available. We constructed LMS-smoothed age-reference charts for testicular volume in healthy boys. Methods The LMS method was used to calculate reference data, based on testicular volumes from

  19. Development and operation of a quality assurance system for deviations from standard operating procedures in a clinical cell therapy laboratory.

    Science.gov (United States)

    McKenna, D; Kadidlo, D; Sumstad, D; McCullough, J

    2003-01-01

    Errors and accidents, or deviations from standard operating procedures, other policy, or regulations must be documented and reviewed, with corrective actions taken to assure quality performance in a cellular therapy laboratory. Though expectations and guidance for deviation management exist, a description of the framework for the development of such a program is lacking in the literature. Here we describe our deviation management program, which uses a Microsoft Access database and Microsoft Excel to analyze deviations and notable events, facilitating quality assurance (QA) functions and ongoing process improvement. Data is stored in a Microsoft Access database with an assignment to one of six deviation type categories. Deviation events are evaluated for potential impact on patient and product, and impact scores for each are determined using a 0-4 grading scale. An immediate investigation occurs, and corrective actions are taken to prevent future similar events from taking place. Additionally, deviation data is collectively analyzed on a quarterly basis using Microsoft Excel, to identify recurring events or developing trends. Between January 1, 2001 and December 31, 2001 over 2500 products were processed at our laboratory. During this time period, 335 deviations and notable events occurred, affecting 385 products and/or patients. Deviations within the 'technical error' category were most common (37%). Thirteen percent of deviations had a patient and/or a product impact score ≥2, a score indicating, at a minimum, potentially affected patient outcome or moderate effect upon product quality. Real-time analysis and quarterly review of deviations using our deviation management program allows for identification and correction of deviations. Monitoring of deviation trends allows for process improvement and overall successful functioning of the QA program in the cell therapy laboratory. Our deviation management program could serve as a model for other laboratories in

  20. U.S. Navy Marine Climatic Atlas of the World. Volume IX. World-Wide Means and Standard Deviations

    Science.gov (United States)

    1981-10-01

    [Only OCR residue of the report documentation page and a chart axis survives in this record.] The recoverable fragments indicate that the volume tabulates world-wide means and standard deviations, with the means computed so as to give the best estimate of the population standard deviations; that the mean ice limit approximates the minus-two-degree temperature isopleth; and that wave height statistics are included.

  1. A Study of the Causes of Man-Hour Variance of Naval Shipyard Work Standards (The National Shipbuilding Research Program)

    National Research Council Canada - National Science Library

    Bunch, Howard M

    1989-01-01

    This paper is a presentation of the results of a study conducted at a U.S. Navy shipyard during 1987 concerning the relationship between engineering standards and the variances that were occurring in production budget and charged manhours...

  2. Standard Deviation of Spatially-Averaged Surface Cross Section Data from the TRMM Precipitation Radar

    Science.gov (United States)

    Meneghini, Robert; Jones, Jeffrey A.

    2010-01-01

    We investigate the spatial variability of the normalized radar cross section of the surface (NRCS, or σ^0) derived from measurements of the TRMM Precipitation Radar (PR) for the period from 1998 to 2009. The purpose of the study is to understand the way in which the sample standard deviation of the σ^0 data changes as a function of spatial resolution, incidence angle, and surface type (land/ocean). The results have implications regarding the accuracy with which the path-integrated attenuation from precipitation can be inferred by the use of surface scattering properties.

  3. New g-2 measurement deviates further from Standard Model

    CERN Multimedia

    2004-01-01

    "The latest result from an international collaboration of scientists investigating how the spin of a muon is affected as this type of subatomic particle moves through a magnetic field deviates further than previous measurements from theoretical predictions" (1 page).

  4. Longitudinal Analysis of Residual Feed Intake in Mink using Random Regression with Heterogeneous Residual Variance

    DEFF Research Database (Denmark)

    Shirali, Mahmoud; Nielsen, Vivi Hunnicke; Møller, Steen Henrik

    Heritability of residual feed intake (RFI) increased from low to high over the growing period in male and female mink. The lowest heritability for RFI (male: 0.04 ± 0.01 standard deviation (SD); female: 0.05 ± 0.01 SD) was in early growth and the highest heritability (male: 0.33 ± 0.02 SD; female: 0.34 ± 0.02 SD) was achieved at the late growth stages. The genetic correlation between different growth stages for RFI showed a high association (0.91 to 0.98) between early and late growing periods. However, phenotypic correlations were lower, from 0.29 to 0.50. The residual variances were substantially higher

  5. Small-Volume Injections: Evaluation of Volume Administration Deviation From Intended Injection Volumes.

    Science.gov (United States)

    Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B

    2017-10-01

    regression model. Analysis of variance was used to determine whether the absolute log proportional error differed by the intended injection volume. Interindividual and intraindividual deviation from the intended injection volume was also characterized. As the intended injection volumes decreased, the absolute log proportional injection volume error increased (analysis of variance). There was no significant difference in the standard deviations of the log proportional errors for injection volumes between physicians and pediatric PACU nurses; however, the difference in absolute bias was significantly higher for nurses, with a 2-sided significance of P = .03. Clinically significant dose variation occurs when injecting volumes ≤0.5 mL. Administering small volumes of medications may result in unintended medication administration errors.

  6. Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Edward W. Larsen

    2008-06-01

    The "criticality" or k-eigenvalue of a nuclear system determines whether the system is critical (k=1), or the extent to which it is subcritical (k<1) or supercritical (k>1). Calculations of k are frequently performed at nuclear facilities to determine the criticality of nuclear reactor cores, spent nuclear fuel storage casks, and other fissile systems. These calculations can be expensive, and current Monte Carlo methods have certain well-known deficiencies. In this project, we have developed and tested a new "functional Monte Carlo" (FMC) method that overcomes several of these deficiencies. The current state-of-the-art Monte Carlo k-eigenvalue method estimates the fission source for a sequence of fission generations (cycles), during each of which M particles per cycle are processed. After a series of "inactive" cycles during which the fission source "converges," a series of "active" cycles are performed. For each active cycle, the eigenvalue and eigenfunction are estimated; after N >> 1 active cycles are performed, the results are averaged to obtain estimates of the eigenvalue and eigenfunction and their standard deviations. This method has several disadvantages: (i) the estimate of k depends on the number M of particles per cycle, (iii) for optically thick systems, the eigenfunction estimate may not converge due to undersampling of the fission source, and (iii) since the fission source in any cycle depends on the estimated fission source from the previous cycle (the fission sources in different cycles are correlated), the estimated variance in k is smaller than the real variance. For an acceptably large number M of particles per cycle, the estimate of k is nearly independent of M; this essentially takes care of item (i). Item (ii) can be addressed by taking M sufficiently large, but for optically thick systems a sufficiently large M can easily be unrealistic. Item (iii) cannot be accounted for by taking M or N sufficiently large; it is an inherent deficiency due

  7. Study of Railway Track Irregularity Standard Deviation Time Series Based on Data Mining and Linear Model

    Directory of Open Access Journals (Sweden)

    Jia Chaolong

    2013-01-01

    Full Text Available Good track geometry state ensures the safe operation of railway passenger and freight services. Railway transportation plays an important role in Chinese economic and social development. This paper studies track irregularity standard deviation time series data and focuses on the characteristics and trend changes of track state by applying clustering analysis. A linear recursive model and a linear-ARMA model based on wavelet decomposition and reconstruction are proposed, and both offer support for the safe management of railway transportation.

  8. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  9. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    Science.gov (United States)

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative for measuring depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.

  10. Reducing the standard deviation in multiple-assay experiments where the variation matters but the absolute value does not.

    Science.gov (United States)

    Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A

    2013-01-01

    When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured and the aim is to replicate the whole set again for different trials or assays, scientists might often obtain quite different measurements despite their efforts at a near-equal design. As a consequence, some systems' averages present standard deviations that are too large to render statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions are to be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (or, in other words, the results corresponding to different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can definitely be applied to any problem that conforms to the described structure and requirements and in any quantitative scientific field that deals with data subject to uncertainty.

  11. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    Directory of Open Access Journals (Sweden)

    Felipe Espinoza

    2012-05-01

    Full Text Available In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative for measuring depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.

  12. Top Yukawa deviation in extra dimension

    International Nuclear Information System (INIS)

    Haba, Naoyuki; Oda, Kin-ya; Takahashi, Ryo

    2009-01-01

    We suggest a simple one-Higgs-doublet model living in the bulk of five-dimensional spacetime compactified on S^1/Z_2, in which the top Yukawa coupling can be smaller than the naive standard-model expectation, i.e. the top quark mass divided by the Higgs vacuum expectation value. If we find only a single Higgs particle at the LHC and also observe the top Yukawa deviation, our scenario becomes a realistic candidate beyond the standard model. The Yukawa deviation comes from the fact that the wave function profile of the free physical Higgs field can become different from that of the vacuum expectation value, due to the presence of the brane-localized Higgs potentials. In the Brane-Localized Fermion scenario, we find sizable top Yukawa deviation, which could be checked at the LHC experiment, with a dominant Higgs production channel being the WW fusion. We also study the Bulk Fermion scenario with brane-localized Higgs potential, which resembles the Universal Extra Dimension model with a stable dark matter candidate. We show that both scenarios are consistent with the current electroweak precision measurements.

  13. Excursions out-of-lane versus standard deviation of lateral position as outcome measure of the on-the-road driving test

    NARCIS (Netherlands)

    Verster, Joris C; Roth, Thomas

    BACKGROUND: The traditional outcome measure of the Dutch on-the-road driving test is the standard deviation of lateral position (SDLP), the weaving of the car. This paper explores whether excursions out-of-lane are a suitable additional outcome measure to index driving impairment. METHODS: A

  14. Determination of the relations governing trends in the standard deviations of the distribution of pollution based on observations on the atmospheric turbulence spectrum and the possibility of laboratory simulation

    International Nuclear Information System (INIS)

    Crabol, B.

    1980-01-01

    Using Taylor's calculation, which takes account of the low-pass filter effect of the transfer time on the value of the standard deviation of particle dispersion, we have introduced a high-pass filter which translates the effect of the time of observation, by definition finite, onto the true atmospheric scale. It is then possible to identify the conditions under which the relations governing variation of the standard deviations of pollution distribution depend upon the distance of transfer alone, or upon the time of transfer alone. Then, making certain simplifying assumptions, practical quantitative relationships are deduced for the variation of the horizontal standard deviation of pollution dispersion as a function of wind speed and time of transfer

  15. Hybrid biasing approaches for global variance reduction

    International Nuclear Information System (INIS)

    Wu, Zeyun; Abdel-Khalik, Hany S.

    2013-01-01

    A new variant of Monte Carlo—deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purpose. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo Deterministic Method based on Gaussian Process Model is introduced. ► Method employs deterministic model to calculate responses correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed up is achieved for a PWR core model.

  16. Fidelity deviation in quantum teleportation

    Science.gov (United States)

    Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir

    2018-04-01

    We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states and the fidelity deviation is their standard deviation, which is referred to as a concept of fluctuation or universality. In the analysis, we find the condition to optimize both measures under a noisy quantum channel—we here consider the so-called Werner channel. To characterize our results, we introduce a 2D space defined by the aforementioned measures, in which the performance of the teleportation is represented as a point with the channel noise parameter. Through further analysis, we specify some regions drawn for different channel conditions, establishing the connection to the dissimilar contributions of the entanglement to the teleportation and the Bell inequality violation.

  17. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients

    DEFF Research Database (Denmark)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte

    2017-01-01

    -derived food waste amounted to 2.21 ± 3.12% with a confidence interval of (−4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients.
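
    One standard choice of such a transformation, shown here as an assumption rather than as the authors' specific procedure, is Aitchison's centered log-ratio (clr), which takes compositions out of the closed simplex before means, standard deviations or correlations are computed:

        import numpy as np

        def clr(composition):
            """Centered log-ratio transform; parts must be strictly positive."""
            x = np.asarray(composition, dtype=float)
            g = np.exp(np.log(x).mean())   # geometric mean of the parts
            return np.log(x / g)

        waste_fractions = np.array([0.55, 0.25, 0.15, 0.05])
        print(clr(waste_fractions))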

  18. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  19. A stochastic model for the derivation of economic values and their standard deviations for production and functional traits in dairy cattle

    NARCIS (Netherlands)

    Nielsen, H.M.; Groen, A.F.; Ostergaard, S.; Berg, P.

    2006-01-01

    The objective of this paper was to present a model of a dairy cattle production system for the derivation of economic values and their standard deviations for both production and functional traits under Danish production circumstances. The stochastic model used is dynamic, and simulates production

  20. Text localization using standard deviation analysis of structure elements and support vector machines

    Directory of Open Access Journals (Sweden)

    Zagoris Konstantinos

    2011-01-01

    Full Text Available Abstract A text localization technique is required to successfully exploit document images such as technical articles and letters. The proposed method detects and extracts text areas from document images. Initially, a connected components analysis technique detects blocks of foreground objects. Then, a descriptor that consists of a set of suitable document structure elements is extracted from the blocks. This is achieved by incorporating an algorithm called Standard Deviation Analysis of Structure Elements (SDASE), which maximizes the separability between the blocks. Another feature of the SDASE is that its length adapts according to the requirements of the application. Finally, the descriptor of each block is used as input to a trained support vector machine that classifies the block as text or not. The proposed technique is also capable of adjusting to the text structure of the documents. Experimental results on benchmarking databases demonstrate the effectiveness of the proposed method.
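
    The SDASE descriptor itself is not specified in this summary, so the sketch below stands in generic standard-deviation features computed per block together with a scikit-learn support vector machine; block_features and the random training data are placeholders, not the published descriptor.

```python
import numpy as np
from sklearn.svm import SVC

def block_features(block):
    """Stand-in descriptor: standard deviations of simple structure
    measures over a binary block image (text strokes tend to produce
    regular, high-variance projection profiles). Not the SDASE descriptor."""
    rows = block.sum(axis=1)            # horizontal projection profile
    cols = block.sum(axis=0)            # vertical projection profile
    return np.array([rows.std(), cols.std(), block.std(), block.mean()])

# Hypothetical training data: binary connected-component blocks and labels.
rng = np.random.default_rng(2)
blocks = [rng.integers(0, 2, size=(32, 32)) for _ in range(100)]
labels = rng.integers(0, 2, size=100)   # 1 = text, 0 = non-text

X = np.array([block_features(b) for b in blocks])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```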

  1. Influence of asymmetrical drawing radius deviation in micro deep drawing

    Science.gov (United States)

    Heinrich, L.; Kobayashi, H.; Shimizu, T.; Yang, M.; Vollertsen, F.

    2017-09-01

    Nowadays, an increasing demand for small metal parts in the electronics and automotive industries can be observed. Deep drawing is a well-suited technology for the production of such parts due to its excellent qualities for mass production. However, the downscaling of the forming process leads to new challenges in tooling and process design, such as high relative deviation of tool geometry or blank displacement compared to the macro scale. FEM simulation has been a widely used tool to investigate the influence of symmetrical process deviations, for instance a global variance of the drawing radius. This study shows a different approach that makes it possible to determine the impact of asymmetrical process deviations on micro deep drawing. In this particular case, the impact of an asymmetrical drawing radius deviation and of blank displacement on cup geometry deviation was investigated for different drawing ratios by experiments and FEM simulation. It was found that both variations result in an increasing cup height deviation. Nevertheless, with increasing drawing ratio a constant drawing radius deviation has an increasing impact, while blank displacement results in a decreasing offset of the cup geometry. This is explained by the different mechanisms that produce an uneven cup geometry: while blank displacement leads to material surplus on one side of the cup, an asymmetrical radius deviation generates uneven stretching of the cup's wall, and this is intensified for higher drawing ratios. It can be concluded that the effect of uneven radius geometry is of major importance for the production of accurately shaped micro cups and cannot be compensated for by intentional blank displacement.

  2. Lack of sensitivity of staffing for 8-hour sessions to standard deviation in daily actual hours of operating room time used for surgeons with long queues.

    Science.gov (United States)

    Pandit, Jaideep J; Dexter, Franklin

    2009-06-01

    At multiple facilities, including some in the United Kingdom's National Health Service, the following are features of many surgical-anesthetic teams: (i) there is sufficient workload for each operating room (OR) list to almost always be fully scheduled; (ii) the workdays are organized such that a single surgeon is assigned to each block of time (usually 8 h); (iii) one team is assigned per block; and (iv) hardly ever would a team "split" to do cases in more than one OR simultaneously. We used Monte Carlo simulation with normal and Weibull distributions to estimate the times to complete lists of cases scheduled into such 8 h sessions. For each combination of mean and standard deviation, the inefficiency of use of OR time was determined for 10 h versus 8 h of staffing. When the mean actual hours of OR time used is sufficiently short, 8 h staffing has higher OR efficiency regardless of the standard deviation and the relative cost of over-run to under-run; when the mean is ≥ 8 h 50 min, 10 h staffing has higher OR efficiency. Between these limits the break-even mean ranges from (a) 8 h 25 min, for a standard deviation of 60 min and a relative cost of over-run to under-run of 2.0, to (b) 8 h 48 min, for a normal distribution with a standard deviation of 0 min and a relative cost ratio of 1.50. Although the simplest decision rule would be to staff for 8 h whenever the mean workload is below threshold, for a standard deviation of 60 min and a relative cost ratio of 2.00 the inefficiency of use of OR time would be 34% larger if staffing were planned for 8 h instead of 10 h. For surgical teams with 8 h sessions, use the following decision rule for anesthesiology and OR nurse staffing: if the actual hours of OR time used average sufficiently less than 8 h, plan 8 h staffing; if they average ≥ 8 h 50 min, plan 10 h staffing; for averages in between, perform the full analysis of McIntosh et al. (Anesth Analg 2006;103:1499-516).
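
    The staffing comparison can be reproduced in outline by Monte Carlo: simulate daily actual hours of OR time used, then cost under-run hours at 1.0 and over-run hours at the relative cost ratio. A minimal sketch, with an illustrative normal workload distribution (the paper also uses Weibull):

```python
import numpy as np

def inefficiency(staffed_h, mean_h, sd_h, cost_ratio=2.0, n=100_000, seed=3):
    """Expected inefficiency of use of OR time (in hours) when staffing
    `staffed_h` hours: under-run hours count at 1.0, over-run hours at
    `cost_ratio` (the relative cost of over-run to under-run)."""
    rng = np.random.default_rng(seed)
    actual = rng.normal(mean_h, sd_h, size=n)   # daily actual hours used
    under = np.clip(staffed_h - actual, 0, None)
    over = np.clip(actual - staffed_h, 0, None)
    return (under + cost_ratio * over).mean()

mean_h, sd_h = 8 + 25 / 60, 1.0   # e.g. mean 8 h 25 min, SD 60 min
for staffed in (8, 10):
    print(staffed, "h staffing:", round(inefficiency(staffed, mean_h, sd_h), 3))
```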

  3. Influence of heterogeneity of variances on genetic evaluation of Tabapuã beef cattle

    Directory of Open Access Journals (Sweden)

    J.E.G. Campelo

    2003-12-01

    Full Text Available Data from Tabapuã beef cattle were used to study the influence of variance heterogeneity on genetic evaluation. Adjusted weights at 120, 240 and 420 days of age were stratified into three classes of standard deviation (low, medium and high, the high class above 18.9 kg), based on the phenotypic standard deviation of the weight at 120 days of age of the contemporary groups. Multiple-trait analyses, considering the weight in each class of phenotypic standard deviation as a distinct trait, showed that the genetic and residual variances increased as the phenotypic standard deviation of the class increased. Heritabilities for the low, medium and high phenotypic standard deviation classes were 0.26, 0.32 and 0.37 (weight at 120 days), 0.28, 0.35 and 0.35 (weight at 240 days) and 0.14, 0.18 and 0.18 (weight at 420 days), respectively. Genetic correlations between the same weight in the low and high standard deviation classes were below 0.80. Correlations between breeding values obtained from the multiple-trait analyses and from an overall analysis (without the classes) were above 0.93. Sires would be ranked similarly whether or not heterogeneous variances were accounted for in the analyses.

  4. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  5. An absolute deviation approach to assessing correlation.

    OpenAIRE

    Gorard, S.

    2015-01-01

    This paper describes two possible alternatives to the more traditional Pearson’s R correlation coefficient, both based on using the mean absolute deviation, rather than the standard deviation, as a measure of dispersion. Pearson’s R is well-established and has many advantages. However, these newer variants also have several advantages, including greater simplicity and ease of computation, and perhaps greater tolerance of underlying assumptions (such as the need for linearity). The first alter...
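
    One natural reading of the first variant is to standardize each variable by its mean absolute deviation instead of its standard deviation before averaging the cross-products; the sketch below implements that reading, which may differ in detail from Gorard's exact definition.

```python
import numpy as np

def mad_correlation(x, y):
    """Correlation variant: standardize by the mean absolute deviation
    (about the mean) instead of the standard deviation, then average
    the products of the standardized scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    zx = (x - x.mean()) / np.abs(x - x.mean()).mean()
    zy = (y - y.mean()) / np.abs(y - y.mean()).mean()
    return (zx * zy).mean()

rng = np.random.default_rng(4)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(scale=0.8, size=200)
print(mad_correlation(x, y), np.corrcoef(x, y)[0, 1])
```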

  6. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
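
    The bound mentioned above follows from the variance decomposition for a mixed Poisson process, Var(X) = E[rate] + Var(rate), so the share of total variance attributable to individual differences is at most (Var − mean)/Var. A small sketch with simulated counts:

```python
import numpy as np

def poisson_check(counts):
    """Dispersion of completed-fertility counts relative to Poisson.

    Under a Poisson process with individually varying rates,
    Var(X) = E[rate] + Var(rate), so the share of total variance
    attributable to individual differences is at most (v - m) / v."""
    counts = np.asarray(counts, float)
    m, v = counts.mean(), counts.var(ddof=1)
    fano = v / m                                  # 1.0 for pure Poisson
    share = max(0.0, (v - m) / v)                 # bound on individual share
    return fano, share

rng = np.random.default_rng(5)
sample = rng.poisson(lam=rng.gamma(20, 0.25, size=1000))  # mixed Poisson
print(poisson_check(sample))
```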

  7. Development of a treatability variance guidance document for US DOE mixed-waste streams

    International Nuclear Information System (INIS)

    Scheuer, N.; Spikula, R.; Harms, T.

    1990-03-01

    In response to the US Department of Energy's (DOE's) anticipated need for variances from the Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs), a treatability variance guidance document was prepared. The guidance manual is for use by DOE facilities and operations offices. The manual was prepared as a part of an ongoing effort by DOE-EH to provide guidance for the operations offices and facilities to comply with the RCRA (LDRs). A treatability variance is an alternative treatment standard granted by EPA for a restricted waste. Such a variance is not an exemption from the requirements of the LDRs, but rather is an alternative treatment standard that must be met before land disposal. The manual, Guidance For Obtaining Variance From the Treatment Standards of the RCRA Land Disposal Restrictions (1), leads the reader through the process of evaluating whether a variance from the treatment standard is a viable approach and through the data-gathering and data-evaluation processes required to develop a petition requesting a variance. The DOE review and coordination process is also described and model language for use in petitions for DOE radioactive mixed waste (RMW) is provided. The guidance manual focuses on RMW streams, however the manual also is applicable to nonmixed, hazardous waste streams. 4 refs

  8. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using the surrogate goal of Mean Monthly Standard Deviation (MMS), the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a pre-filter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
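
    The surrogate objective is straightforward to state in code: group retrieved CO2 by calendar month, take each month's standard deviation, and average. A candidate filter is then scored by how much it lowers this number on the retained soundings. A minimal sketch with hypothetical soundings and a hypothetical quality flag:

```python
import numpy as np

def mean_monthly_std(co2, month):
    """Mean Monthly Standard Deviation (MMS): the retrieved-CO2 scatter
    within each calendar month, averaged over months."""
    co2, month = np.asarray(co2), np.asarray(month)
    return np.mean([co2[month == m].std(ddof=1) for m in np.unique(month)])

rng = np.random.default_rng(6)
month = rng.integers(1, 13, size=5000)
co2 = 400 + rng.normal(0, 1.0, size=5000)        # retrieved XCO2, ppm
noisy = rng.random(5000) < 0.1                   # problematic soundings
co2[noisy] += rng.normal(0, 5.0, size=noisy.sum())

# Score a candidate filter (here: a hypothetical quality flag) by the
# reduction in MMS it achieves on the retained soundings.
keep = ~noisy
print(mean_monthly_std(co2, month), mean_monthly_std(co2[keep], month[keep]))
```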

  9. Delay and Standard Deviation Beamforming to Enhance Specular Reflections in Ultrasound Imaging.

    Science.gov (United States)

    Bandaru, Raja Sekhar; Sornes, Anders Rasmus; Hermans, Jeroen; Samset, Eigil; D'hooge, Jan

    2016-12-01

    Although interventional devices, such as needles, guide wires, and catheters, are best visualized by X-ray, real-time volumetric echography could offer an attractive alternative as it avoids ionizing radiation; it provides good soft tissue contrast, and it is mobile and relatively cheap. Unfortunately, as echography is traditionally used to image soft tissue and blood flow, the appearance of interventional devices in conventional ultrasound images remains relatively poor, which is a major obstacle toward ultrasound-guided interventions. The objective of this paper was therefore to enhance the appearance of interventional devices in ultrasound images. Thereto, a modified ultrasound beamforming process using conventional-focused transmit beams is proposed that exploits the properties of received signals containing specular reflections (as arising from these devices). This new beamforming approach referred to as delay and standard deviation beamforming (DASD) was quantitatively tested using simulated as well as experimental data using a linear array transducer. Furthermore, the influence of different imaging settings (i.e., transmit focus, imaging depth, and scan angle) on the obtained image contrast was evaluated. The study showed that the image contrast of specular regions improved by 5-30 dB using DASD beamforming compared with traditional delay and sum (DAS) beamforming. The highest gain in contrast was observed when the interventional device was tilted away from being orthogonal to the transmit beam, which is a major limitation in standard DAS imaging. As such, the proposed beamforming methodology can offer an improved visualization of interventional devices in the ultrasound image with potential implications for ultrasound-guided interventions.
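
    The name describes the core operation: after focusing delays are applied, the sum across receive channels is replaced by their standard deviation. A minimal sketch on synthetic delay-aligned channel data (array sizes and the specular model are illustrative, not those of the paper):

```python
import numpy as np

def das(aligned):
    """Delay-and-sum: mean across delay-aligned receive channels.
    aligned: (n_channels, n_samples) after applying focusing delays."""
    return aligned.mean(axis=0)

def dasd(aligned):
    """Delay-and-standard-deviation: channel spread instead of the sum.
    Specular echoes vary strongly across the aperture, so their
    across-channel standard deviation is large relative to diffuse speckle."""
    return aligned.std(axis=0, ddof=1)

rng = np.random.default_rng(7)
n_ch, n_s = 64, 1024
diffuse = rng.normal(0, 1, size=(n_ch, n_s))              # soft-tissue speckle
specular = np.zeros((n_ch, n_s))
specular[:, 500:520] = np.linspace(0, 4, n_ch)[:, None]   # angle-dependent echo
aligned = diffuse + specular

b_das, b_dasd = das(aligned), dasd(aligned)
print(b_das[500:520].mean(), b_dasd[500:520].mean())
```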

  10. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  11. Obtaining variances from the treatment standards of the RCRA Land Disposal Restrictions

    International Nuclear Information System (INIS)

    1990-05-01

    The Resource Conservation and Recovery Act (RCRA) Land Disposal Restrictions (LDRs) [40 CFR 268] impose specific requirements for treatment of RCRA hazardous wastes prior to disposal. Before the LDRs, many hazardous wastes could be land disposed at an appropriately designed and permitted facility without undergoing treatment. Thus, the LDRs constitute a major change in the regulations governing hazardous waste. EPA does not regulate the radioactive component of radioactive mixed waste (RMW). However, the hazardous waste component of an RMW is subject to RCRA LDR regulations. DOE facilities that manage hazardous wastes (including radioactive mixed wastes) may have to alter their waste-management practices to comply with the regulations. The purpose of this document is to aid DOE facilities and operations offices in determining (1) whether a variance from the treatment standard should be sought and (2) which type (treatability or equivalency) of petition is appropriate. The document also guides the user in preparing the petition. It shall be noted that the primary responsibility for the development of the treatability petition lies with the generator of the waste. 2 figs., 1 tab

  12. Hidden temporal order unveiled in stock market volatility variance

    Directory of Open Access Journals (Sweden)

    Y. Shapira

    2011-06-01

    Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behaviors. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it contains some temporal order. Next we show that the temporal order does not appear in the series of daily returns itself, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions of three different slopes.
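
    The shuffling argument can be sketched directly: compute a volatility-clustering statistic on the original series and on many shuffled copies, and measure the deviation in standard-deviation units. The statistic below (the variance of segment variances) is a simple stand-in for the paper's segment-based measures, not its exact definition:

```python
import numpy as np

def vol_of_vol(x, seg_len=20):
    """Variance of the segment variances of a series: large when
    volatile and calm segments cluster in time."""
    n = len(x) // seg_len
    segs = np.asarray(x[: n * seg_len]).reshape(n, seg_len)
    return segs.var(axis=1, ddof=1).var(ddof=1)

def shuffle_test(x, n_shuffles=500, seed=8):
    """z-score of the observed statistic against its distribution under
    random shuffling; shuffling destroys temporal order, so a large z
    indicates temporal structure."""
    rng = np.random.default_rng(seed)
    observed = vol_of_vol(x)
    null = np.array([vol_of_vol(rng.permutation(x)) for _ in range(n_shuffles)])
    return (observed - null.mean()) / null.std(ddof=1)

rng = np.random.default_rng(9)
vol = np.repeat(rng.uniform(0.5, 2.0, size=50), 20)   # volatility clustering
returns = rng.normal(size=1000) * vol
print(shuffle_test(returns), shuffle_test(rng.normal(size=1000)))
```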

  13. Variance computations for functionals of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  14. 40 CFR 60.2220 - What must I include in the deviation report?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What must I include in the deviation... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for... Recordkeeping and Reporting § 60.2220 What must I include in the deviation report? In each report required under...

  15. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as a test statistic that can be applied whether noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points in this case study. The influence of gross errors on LSC prediction results is also investigated via lower cutoff CVEs. It is shown that after elimination of outliers, the RMS of this type of error is also reduced, by 19.5% for a 5 km vicinity radius. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the
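
    The paper's closed-form CVE vector for LSC is not reproduced in this summary, but the analogous identity for ordinary least squares shows the flavor: all leave-one-out errors follow from the ordinary residuals and the hat-matrix diagonal, with no refitting. A sketch:

```python
import numpy as np

def loo_cv_errors(A, y):
    """All leave-one-out cross-validation errors of ordinary least squares
    in one pass, via the hat-matrix identity e_cv = e / (1 - h_ii).
    (Least squares collocation admits an analogous closed form.)"""
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    H = A @ np.linalg.solve(A.T @ A, A.T)     # hat matrix
    return resid / (1 - np.diag(H))

rng = np.random.default_rng(10)
A = rng.normal(size=(100, 5))
y = A @ rng.normal(size=5) + rng.normal(scale=0.3, size=100)
cve = loo_cv_errors(A, y)

# Standardized CVEs can serve as a blunder-detection test statistic.
z = (cve - cve.mean()) / cve.std(ddof=1)
print(np.where(np.abs(z) > 3)[0])
```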

  16. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  17. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs, which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept, as a measure of the gain that can be obtained from a specific cross, accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion rather than on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, such as selection based on mean genomic estimated breeding values and on optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviations compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
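
    The usefulness of a cross is commonly written U = mu + i * sigma_g, the progeny mean plus the selection intensity times the progeny genetic standard deviation. A minimal sketch, assuming hypothetical progeny genotypes and MCMC draws of marker effects (the paper derives sigma_g analytically; here it is simply taken from the sampled values):

```python
import numpy as np

def usefulness(progeny_genotypes, mcmc_effects, intensity=2.665):
    """Usefulness criterion U = mu + i * sigma_g of a cross.

    progeny_genotypes : (n_progeny, n_markers) simulated/derived genotypes
    mcmc_effects      : (n_samples, n_markers) MCMC draws of marker effects
    intensity         : selection intensity i (2.665 ~ selecting the top 1%)"""
    g = progeny_genotypes @ mcmc_effects.T      # (n_progeny, n_samples) values
    mu = g.mean()
    sigma_g = g.std(axis=0, ddof=1).mean()      # progeny SD, averaged over draws
    return mu + intensity * sigma_g

rng = np.random.default_rng(11)
Z = rng.integers(0, 3, size=(200, 500)).astype(float)   # hypothetical progeny
B = rng.normal(0, 0.05, size=(1000, 500))               # hypothetical MCMC draws
print(usefulness(Z, B))
```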

  18. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  19. Quantification of intravoxel velocity standard deviation and turbulence intensity by generalizing phase-contrast MRI.

    Science.gov (United States)

    Dyverfeldt, Petter; Sigfridsson, Andreas; Kvitting, John-Peder Escobar; Ebbers, Tino

    2006-10-01

    Turbulent flow, characterized by velocity fluctuations, is a contributing factor to the pathogenesis of several cardiovascular diseases. A clinical noninvasive tool for assessing turbulence is lacking, however. It is well known that the occurrence of multiple spin velocities within a voxel during the influence of a magnetic gradient moment causes signal loss in phase-contrast magnetic resonance imaging (PC-MRI). In this paper a mathematical derivation of an expression for computing the standard deviation (SD) of the blood flow velocity distribution within a voxel is presented. The SD is obtained from the magnitude of PC-MRI signals acquired with different first gradient moments. By exploiting the relation between the SD and turbulence intensity (TI), this method allows for quantitative studies of turbulence. For validation, the TI in an in vitro flow phantom was quantified, and the results compared favorably with previously published laser Doppler anemometry (LDA) results. This method has the potential to become an important tool for the noninvasive assessment of turbulence in the arterial tree.
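
    A commonly quoted form of the relation, under a Gaussian intravoxel velocity distribution, is |S(kv)| = |S(0)| exp(−kv² σ²/2), which inverts to the estimator sketched below; the kv value and magnitudes are illustrative, and the paper's derivation is more general than this assumption.

```python
import numpy as np

def intravoxel_sd(s0_mag, skv_mag, kv):
    """Intravoxel velocity standard deviation (m/s) from PC-MRI signal
    magnitudes, assuming a Gaussian intravoxel velocity distribution:
        |S(kv)| = |S(0)| * exp(-kv**2 * sigma**2 / 2)
    =>  sigma  = sqrt(2 * ln(|S(0)| / |S(kv)|)) / kv"""
    return np.sqrt(2.0 * np.log(s0_mag / skv_mag)) / kv

venc = 1.0                 # velocity encoding range (m/s), illustrative
kv = np.pi / venc          # first-moment sensitivity giving phase pi at venc
print(intravoxel_sd(1.0, 0.8, kv))   # SD implied by a 20% magnitude drop
```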

  20. Variance analysis refines overhead cost control.

    Science.gov (United States)

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.

  1. Female scarcity reduces women's marital ages and increases variance in men's marital ages.

    Science.gov (United States)

    Kruger, Daniel J; Fitzgerald, Carey J; Peterson, Tom

    2010-08-05

    When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio and the means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S. Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex-specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.

  2. Female Scarcity Reduces Women's Marital Ages and Increases Variance in Men's Marital Ages

    Directory of Open Access Journals (Sweden)

    Daniel J. Kruger

    2010-07-01

    Full Text Available When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio and the means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S. Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex-specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.

  3. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  4. "First among Others? Cohen's ""d"" vs. Alternative Standardized Mean Group Difference Measures"

    Directory of Open Access Journals (Sweden)

    Sorel Cahan

    2011-06-01

    Full Text Available Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., η2, f2) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation, that is, measures of dispersion about the mean. In contrast, the definition of Cohen's d, the effect size measure typically computed in the two-group case, is incongruent due to a conceptual difference between the numerator, which measures between-group variability by the intuitive and straightforward raw difference between the two group means, and the denominator, which measures within-group variability in terms of the difference between all observations and the group mean (i.e., the pooled within-groups standard deviation, SW). Two congruent alternatives to d, in which the root mean square difference or the mean absolute difference between all observation pairs is substituted for SW as the variability measure in the denominator of d, are suggested, and their conceptual and statistical advantages and disadvantages are discussed.
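
    A minimal sketch of the contrast, assuming the variant uses the mean absolute difference between all within-group observation pairs as the denominator (the exact definitions may differ in detail from the paper's):

```python
import numpy as np

def cohens_d(x1, x2):
    """Classical d: raw mean difference over the pooled within-group SD."""
    n1, n2 = len(x1), len(x2)
    s_w = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) +
                   (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(x1) - np.mean(x2)) / s_w

def d_pairwise(x1, x2):
    """Congruent variant: the denominator is the mean absolute difference
    between all within-group observation pairs instead of s_w."""
    mad_pairs = np.mean([np.abs(np.subtract.outer(g, g)).mean()
                         for g in (np.asarray(x1), np.asarray(x2))])
    return (np.mean(x1) - np.mean(x2)) / mad_pairs

rng = np.random.default_rng(12)
a, b = rng.normal(0.5, 1, 100), rng.normal(0.0, 1, 100)
print(cohens_d(a, b), d_pairwise(a, b))
```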

  5. Evolutionary implications of genetic code deviations

    International Nuclear Information System (INIS)

    Chela Flores, J.

    1986-07-01

    By extending the standard genetic code into a temperature dependent regime, we propose a train of molecular events leading to alternative coding. The first few examples of these deviations have already been reported in some ciliated protozoans and Gram positive bacteria. A possible range of further alternative coding, still within the context of universality, is pointed out. (author)

  6. Excursions out-of-lane versus standard deviation of lateral position as outcome measure of the on-the-road driving test.

    Science.gov (United States)

    Verster, Joris C; Roth, Thomas

    2014-07-01

    The traditional outcome measure of the Dutch on-the-road driving test is the standard deviation of lateral position (SDLP), the weaving of the car. This paper explores whether excursions out-of-lane are a suitable additional outcome measure to index driving impairment. A literature search was conducted for driving tests that used both SDLP and excursions out-of-lane as outcome measures. The analyses were limited to studies examining hypnotic drugs, because several of these drugs have been shown to produce next-morning sedation. Standard deviation of lateral position was more sensitive in demonstrating driving impairment. In fact, relying solely on excursions out-of-lane as the outcome measure incorrectly classifies approximately half of impaired drives as unimpaired. The frequency of excursions out-of-lane is determined by the mean lateral position within the right traffic lane. Defining driving impairment as having a ΔSDLP > 2.4 cm, half of the impaired driving tests (51.2%, 43/84) failed to produce excursions out-of-lane. Conversely, 20.9% of driving tests with ΔSDLP < 2.4 cm (27/129) had at least one excursion out-of-lane. Excursions out-of-lane are thus neither a suitable measure for demonstrating driving impairment, nor sufficiently sensitive to differentiate adequately between magnitudes of driving impairment. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Standard deviation of local tallies in global Monte Carlo calculation of nuclear reactor core

    International Nuclear Information System (INIS)

    Ueki, Taro

    2010-01-01

    Time series methodology has been studied to assess the feasibility of statistical error estimation in continuous space and energy Monte Carlo calculations of a three-dimensional whole reactor core. The noise propagation was examined, and the fluctuation of track length tallies for local fission rate and power has been formally shown to be represented by an autoregressive moving average process of orders p and p-1 [ARMA(p,p-1)], where p is an integer larger than or equal to two. Therefore, ARMA(p,p-1) fitting was applied to estimate the real standard deviation of the power of fuel assemblies at particular heights. Numerical results indicate that straightforward ARMA(3,2) fitting is promising, but a stability issue must be resolved before incorporation into distributed versions of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method with a batch size larger than 100 and smaller than 200 cycles for a 1,100 MWe pressurized water reactor. (author)

  8. Genomic selection of crossing partners on basis of the expected mean and variance of their derived lines.

    Science.gov (United States)

    Osthushenrich, Tanja; Frisch, Matthias; Herzog, Eva

    2017-01-01

    In a line or a hybrid breeding program superior lines are selected from a breeding pool as parental lines for the next breeding cycle. From a cross of two parental lines, new lines are derived by single-seed descent (SSD) or doubled haploid (DH) technology. However, not all possible crosses between the parental lines can be carried out due to limited resources. Our objectives were to present formulas to characterize a cross by the mean and variance of the genotypic values of the lines derived from the cross, and to apply the formulas to predict means and variances of flowering time traits in recombinant inbred line families of a publicly available data set in maize. We derived formulas which are based on the expected linkage disequilibrium (LD) between two loci and which can be used for arbitrary mating systems. Results were worked out for SSD and DH lines derived from a cross after an arbitrary number of intermating generations. The means and variances were highly correlated with results obtained by the simulation software PopVar. Compared with these simulations, computation time for our closed formulas was about ten times faster. The means and variances for flowering time traits observed in the recombinant inbred line families of the investigated data set showed correlations of around 0.9 for the means and of 0.46 and 0.65 for the standard deviations with the estimated values. We conclude that our results provide a framework that can be exploited to increase the efficiency of hybrid and line breeding programs by extending genomic selection approaches to the selection of crossing partners.

  9. Genomic selection of crossing partners on basis of the expected mean and variance of their derived lines

    Science.gov (United States)

    Osthushenrich, Tanja; Frisch, Matthias

    2017-01-01

    In a line or a hybrid breeding program superior lines are selected from a breeding pool as parental lines for the next breeding cycle. From a cross of two parental lines, new lines are derived by single-seed descent (SSD) or doubled haploid (DH) technology. However, not all possible crosses between the parental lines can be carried out due to limited resources. Our objectives were to present formulas to characterize a cross by the mean and variance of the genotypic values of the lines derived from the cross, and to apply the formulas to predict means and variances of flowering time traits in recombinant inbred line families of a publicly available data set in maize. We derived formulas which are based on the expected linkage disequilibrium (LD) between two loci and which can be used for arbitrary mating systems. Results were worked out for SSD and DH lines derived from a cross after an arbitrary number of intermating generations. The means and variances were highly correlated with results obtained by the simulation software PopVar. Compared with these simulations, computation time for our closed formulas was about ten times faster. The means and variances for flowering time traits observed in the recombinant inbred line families of the investigated data set showed correlations of around 0.9 for the means and of 0.46 and 0.65 for the standard deviations with the estimated values. We conclude that our results provide a framework that can be exploited to increase the efficiency of hybrid and line breeding programs by extending genomic selection approaches to the selection of crossing partners. PMID:29200436
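
    The closed formulas themselves are not reproduced in these summaries, but the quantity they predict is easy to define by simulation, which is what PopVar-style tools do: generate gametes of the F1 under a recombination model and evaluate the derived DH lines. A sketch with a no-interference recombination model and illustrative inputs:

```python
import numpy as np

def simulate_dh_progeny(p1, p2, rec_frac, effects, n=1000, seed=13):
    """Mean and variance of genotypic values of DH lines from a cross.

    p1, p2   : parental haplotypes, arrays of 0/1 alleles at m loci
    rec_frac : recombination fractions between adjacent loci (length m-1)
    effects  : additive allele-substitution effects (length m)"""
    rng = np.random.default_rng(seed)
    m = len(p1)
    # F1 gamete: start on a random parental haplotype and switch between
    # haplotypes at each interval with the recombination fraction.
    cross = rng.random((n, m - 1)) < rec_frac          # crossover indicators
    phase = np.cumsum(np.column_stack([rng.integers(0, 2, n), cross]),
                      axis=1) % 2
    gametes = np.where(phase == 0, p1, p2)             # (n, m) DH genotypes
    values = gametes @ effects
    return values.mean(), values.var(ddof=1)

p1, p2 = np.zeros(50), np.ones(50)                     # contrasting parents
mu, var = simulate_dh_progeny(p1, p2, np.full(49, 0.1), np.full(50, 0.2))
print(mu, var)
```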

  10. A deviation display method for visualising data in mobile gamma-ray spectrometry.

    OpenAIRE

    Kock, Peder; Finck, Robert R; Nilsson, Jonas M C; Östlund, Karl; Samuelsson, Christer

    2010-01-01

    A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded (137)Cs and (241)Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPG...

  11. A deviation display method for visualising data in mobile gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Kock, Peder, E-mail: Peder.Kock@med.lu.s [Department of Medical Radiation Physics, Clinical Sciences, Lund University, University Hospital, SE-221 85 Lund (Sweden); Finck, Robert R. [Swedish Radiation Protection Authority, SE-171 16 Stockholm (Sweden); Nilsson, Jonas M.C.; Ostlund, Karl; Samuelsson, Christer [Department of Medical Radiation Physics, Clinical Sciences, Lund University, University Hospital, SE-221 85 Lund (Sweden)

    2010-09-15

    A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded 137Cs and 241Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing the natural background fluctuations. After an initialisation time of about 10 min this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels and with both tested detector systems.

  12. A deviation display method for visualising data in mobile gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Kock, Peder; Finck, Robert R.; Nilsson, Jonas M.C.; Ostlund, Karl; Samuelsson, Christer

    2010-01-01

    A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded 137Cs and 241Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing the natural background fluctuations. After an initialisation time of about 10 min this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels and with both tested detector systems.
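
    The described behaviour can be sketched as a running z-score waterfall: each incoming spectrum is compared channel-wise with a background mean and standard deviation that update slowly, so positive significant changes stand out while natural background fluctuations stay flat. All parameters below are illustrative, not those of the published system:

```python
import numpy as np

def deviation_display(spectra, init=60):
    """Rows of z-like deviations of each spectrum from the running
    background: positive significant changes are enhanced, background
    fluctuations are suppressed.

    spectra: (n_times, n_channels) counts, one spectrum per time step
    init   : number of initial spectra used to seed the background"""
    bg_mean = spectra[:init].mean(axis=0)
    bg_std = spectra[:init].std(axis=0, ddof=1) + 1e-9
    rows = []
    for s in spectra[init:]:
        z = (s - bg_mean) / bg_std
        rows.append(np.clip(z, 0, None))       # keep positive deviations only
        bg_mean = 0.99 * bg_mean + 0.01 * s    # slow background update
    return np.array(rows)                      # waterfall image, time x energy

rng = np.random.default_rng(14)
counts = rng.poisson(50, size=(600, 256)).astype(float)
counts[300:320, 100:110] += 40                 # passing point source
img = deviation_display(counts)
print(img.shape, img[240:260, 100:110].mean(), img[:, :50].mean())
```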

  13. Assessment of ulnar variance: a radiological investigation in a Dutch population

    Energy Technology Data Exchange (ETDEWEB)

    Schuurman, A.H. [Dept. of Plastic, Reconstructive and Hand Surgery, University Medical Centre, Utrecht (Netherlands); Dept. of Plastic Surgery, University Medical Centre, Utrecht (Netherlands); Maas, M.; Dijkstra, P.F. [Dept. of Radiology, Univ. of Amsterdam (Netherlands); Kauer, J.M.G. [Dept. of Anatomy and Embryology, Univ. of Nijmegen (Netherlands)

    2001-11-01

    Objective: A radiological study was performed to evaluate ulnar variance in 68 Dutch patients using an electronic digitizer compared with Palmer's concentric circle method. Using the digitizer method only, the effect of different wrist positions and grip on ulnar variance was then investigated. Finally the distribution of ulnar variance in the selected patients was investigated also using the digitizer method. Design and patients: All radiographs were performed with the wrist in a standard zero-rotation position (posteroanterior) and in supination (anteroposterior). Palmer's concentric circle method and an electronic digitizer connected to a personal computer were used to measure ulnar variance. The digitizer consists of a Plexiglas plate with an electronically activated grid beneath it. A radiograph is placed on the plate and a cursor activates a point on the grid. Three plots are marked on the radius and one plot on the most distal part of the ulnar head. The digitizer then determines the difference between a radius passing through the radius plots and the ulnar plot. Results and conclusions: Using the concentric circle method we found an ulna plus predominance, but an ulna minus predominance when using the digitizer method. Overall the ulnar variance distribution for Palmer's method was 41.9% ulna plus, 25.7% neutral and 32.4% ulna minus variance, and for the digitizer method was 40.4% ulna plus, 1.5% neutral and 58.1% ulna minus. The percentage ulnar variance greater than 1 mm on standard radiographs increased from 23% to 58% using the digitizer, with maximum grip, clearly demonstrating the (dynamic) effect of grip on ulnar variance. This almost threefold increase was found to be a significant difference. Significant differences were found between ulnar variance when different wrist positions were compared. (orig.)

  14. 14 CFR 121.360 - Ground proximity warning-glide slope deviation alerting system.

    Science.gov (United States)

    2010-01-01

    ... deviation alerting system. 121.360 Section 121.360 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION... Equipment Requirements § 121.360 Ground proximity warning-glide slope deviation alerting system. (a) No... system that meets the performance and environmental standards of TSO-C92 (available from the FAA, 800...

  15. Genetic selection for increased mean and reduced variance of twinning rate in Belclare ewes.

    Science.gov (United States)

    Cottle, D J; Gilmour, A R; Pabiou, T; Amer, P R; Fahey, A G

    2016-04-01

    It is sometimes possible to breed for more uniform individuals by selecting animals with a greater tendency to be less variable, that is, those with a smaller environmental variance. This approach has been applied to reproduction traits in various animal species. We have evaluated fecundity in the Irish Belclare sheep breed by analysing flocks with differing average litter size (number of lambs per ewe per year, NLB) and have estimated the genetic variance in environmental variance of lambing traits using double hierarchical generalized linear models (DHGLM). The data set comprised 9470 litter size records from 4407 ewes collected in 56 flocks. The percentage of pedigreed lambing ewes with singles, twins and triplets was 30, 54 and 14%, respectively, in 2013 and has been relatively constant for the last 15 years. The variance of NLB increases with the mean in these data; the correlation of mean and standard deviation across sires is 0.50. The breeding goal is to increase the mean NLB without unduly increasing the incidence of triplets and higher litter sizes. The heritability estimates for lambing traits were NLB, 0.09; triplet occurrence (TRI), 0.07; and twin occurrence (TWN), 0.02. The highest and lowest twinning flocks differed by 23% (75% versus 52%) in the proportion of ewes lambing twins. Fitting bivariate sire models to NLB and the residual from the NLB model using a double hierarchical generalized linear model (DHGLM) found a strong genetic correlation (0.88 ± 0.07) between the sire effect for the magnitude of the residual (VE) and the sire effects for NLB, confirming the general observation that increased average litter size is associated with increased variability in litter size. We propose a threshold model that may help breeders with low litter size increase the percentage of twin bearers without unduly increasing the percentage of ewes bearing triplets in Belclare sheep. © 2015 Blackwell Verlag GmbH.

  16. 40 CFR 60.2780 - What must I include in the deviation report?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What must I include in the deviation... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emissions Guidelines and... the deviation report? In each report required under § 60.2775, for any pollutant or parameter that...

  17. 40 CFR 60.2958 - What must I include in the deviation report?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What must I include in the deviation... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Operator Training and Qualification Recordkeeping and Reporting § 60.2958 What must I include in the deviation report? In each report...

  18. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions about treatment variability and the pattern of cholesterol reduction over time. The endpoints considered are the last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution. Specific validation criteria, based on a standardized distance in means and variances of plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
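
    A minimal sketch of a standardized-distance validity check of the kind described; the exact standardization used in the paper may differ in detail:

```python
import numpy as np

def validity_criterion(real, simulated, tol=0.10):
    """Standardized-distance validation of a simulated endpoint against
    real data: both the means and the variances must agree within
    plus or minus `tol` (e.g. 10%) in standardized terms."""
    d_mean = abs(simulated.mean() - real.mean()) / real.std(ddof=1)
    d_var = abs(simulated.var(ddof=1) - real.var(ddof=1)) / real.var(ddof=1)
    return d_mean <= tol and d_var <= tol, d_mean, d_var

rng = np.random.default_rng(15)
real = rng.normal(190, 30, size=400)           # observed cholesterol levels
sim = rng.normal(192, 31, size=400)            # model output
print(validity_criterion(real, sim))
```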

  19. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory(CPT) based on two different methods: Maximizing CPT along the mean–variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  20. 40 CFR 60.3053 - What must I include in the deviation report?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What must I include in the deviation... PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance... Model Rule-Recordkeeping and Reporting § 60.3053 What must I include in the deviation report? In each...

  1. Prognostic implications of mutation-specific QTc standard deviation in congenital long QT syndrome

    NARCIS (Netherlands)

    Mathias, Andrew; Moss, Arthur J.; Lopes, Coeli M.; Barsheshet, Alon; McNitt, Scott; Zareba, Wojciech; Robinson, Jennifer L.; Locati, Emanuela H.; Ackerman, Michael J.; Benhorin, Jesaia; Kaufman, Elizabeth S.; Platonov, Pyotr G.; Qi, Ming; Shimizu, Wataru; Towbin, Jeffrey A.; Michael Vincent, G.; Wilde, Arthur A. M.; Zhang, Li; Goldenberg, Ilan

    2013-01-01

    Individual corrected QT interval (QTc) may vary widely among carriers of the same long QT syndrome (LQTS) mutation. Currently, neither the mechanism nor the implications of this variable penetrance are well understood. The objective was to test the hypothesis that the assessment of QTc variance in patients with congenital

  2. YOUTH VANDALISM IN THE ENVIRONMENT OF MEGALOPOLIS: BORDERS OF STANDARD AND DEVIATION

    Directory of Open Access Journals (Sweden)

    D. V. Rudenkin

    2018-01-01

    people more or less regularly commit vandal actions without perceiving them as deviations from the accepted standard of behaviour, and they likewise fail to notice the vandal behaviour of those around them. The data obtained point to considerable flexibility and inconsistency in young people's ideas about vandalism in megalopolises: at the level of abstract stereotypes, vandalism is regarded as deviance and categorically condemned; in everyday life, it is treated as an unrecognized norm in relation to specific situations. A tendency toward the gradual erosion of the taboo and deviant status of vandalism in the consciousness of youth is noted. Practical significance. The materials of the research could be applied to optimize educational work in educational institutions and to increase the effectiveness of vandalism prevention among young people.

  3. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    Science.gov (United States)

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics of geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
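
    For reference, the classical (non-overlapping) AVAR at averaging factor m, plus one plausible weighted variant for unevenly weighted data; the WAVAR weighting shown is an assumption patterned on combining the weights of consecutive points, not necessarily the exact published definition:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance at averaging factor m: half the
    mean squared difference of consecutive m-point averages."""
    n = len(y) // m
    means = np.asarray(y[: n * m]).reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

def weighted_avar(y, w):
    """Weighted AVAR sketch (m = 1) for unevenly weighted data: each
    consecutive difference is weighted by the combined point weights."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    wp = w[1:] * w[:-1] / (w[1:] + w[:-1])       # weight of each difference
    return 0.5 * np.sum(wp * np.diff(y) ** 2) / np.sum(wp)

rng = np.random.default_rng(16)
series = np.cumsum(rng.normal(0, 0.1, 1000))     # random-walk-like noise
print(allan_variance(series, m=1), allan_variance(series, m=10))
print(weighted_avar(series, w=1 / rng.uniform(0.5, 2.0, 1000) ** 2))
```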

  4. Research on the preparation, uniformity and stability of mixed standard substance for rapid detection of goat milk composition.

    Science.gov (United States)

    Zhu, Yuying; Wang, Jianmin; Wang, Cunfang

    2018-05-01

    Taking fresh goat milk as the raw material, after filtering, centrifuging, hollow-fiber ultrafiltration, formula allocation, value determination and preparation processing, a set of 10 goat milk mixed standard substances was prepared on the basis of one-factor-at-a-time experiments using a uniform design method, and its accuracy, uniformity and stability were evaluated by the paired t-test and the F-test of one-way analysis of variance. The results showed that the three milk composition contents of these standard substances were independent of each other, and that preparation using the quasi-level design method, without emulsifier, was the best program. Compared with calibrating the rapid analyzer against cow milk standards, calibration with the goat milk mixed standard was more applicable to rapid detection of goat milk composition: detection values were more accurate and showed smaller deviations. Single-factor analysis of variance showed that the uniformity and stability of the mixed standard substance were good; it could be stored for 15 days at 4°C. The uniformity and stability within and between units could meet the requirements for the preparation of national standard substances. © 2018 Japanese Society of Animal Science.
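
    For readers unfamiliar with the two tests named above, the following SciPy sketch runs them on made-up readings; the study's actual data and acceptance thresholds are not given in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical fat-content readings (g/100 g) of one standard substance
# measured in three packaging units (uniformity check).
unit_a = np.array([3.51, 3.49, 3.52, 3.50])
unit_b = np.array([3.50, 3.48, 3.53, 3.51])
unit_c = np.array([3.52, 3.50, 3.49, 3.51])
F, p_uniformity = stats.f_oneway(unit_a, unit_b, unit_c)  # one-way ANOVA F-test

# Hypothetical paired comparison: reference values vs. analyzer readings.
reference = np.array([3.50, 3.60, 3.40, 3.55])
analyzer = np.array([3.52, 3.58, 3.41, 3.56])
t, p_accuracy = stats.ttest_rel(reference, analyzer)      # paired t-test

print(F, p_uniformity, t, p_accuracy)
```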

  5. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
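
    The channel-wise decomposition can be mimicked with a crude pick-freeze experiment on a birth-death model: freeze the random stream that drives one reaction channel, redraw the stream of the other, and compare the variance of the conditional means with the total variance. This is only a sketch of the idea (first-reaction simulation, integer-seeded streams, a biased two-loop estimator), not the authors' algorithm.

```python
import numpy as np

def birth_death(T, x0, b, d, rng_birth, rng_death):
    """First-reaction Gillespie run of births (rate b) and deaths (rate d*x),
    each channel drawing its exponential clocks from its own RNG stream."""
    t, x = 0.0, x0
    while True:
        tb = rng_birth.exponential(1.0 / b) if b > 0 else np.inf
        td = rng_death.exponential(1.0 / (d * x)) if d * x > 0 else np.inf
        dt = min(tb, td)
        if t + dt > T:
            return x
        t += dt
        x += 1 if tb < td else -1

def first_order_index(channel, n_outer=50, n_inner=50,
                      T=2.0, x0=20, b=10.0, d=1.0):
    """Crude estimate of Var(E[X_T | stream of channel]) / Var(X_T)."""
    out = np.empty((n_outer, n_inner))
    for i in range(n_outer):
        for j in range(n_inner):
            # freeze the studied channel's seed across the inner loop,
            # redraw the other channel's seed every replicate
            sb = i if channel == "birth" else 10_000 + i * n_inner + j
            sd = i if channel == "death" else 20_000 + i * n_inner + j
            out[i, j] = birth_death(T, x0, b, d,
                                    np.random.default_rng(sb),
                                    np.random.default_rng(sd))
    return out.mean(axis=1).var() / out.var()

print(first_order_index("birth"), first_order_index("death"))
```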

  7. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  8. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  9. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maî tre, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  10. Diurnal Dynamics of Standard Deviations of Three Wind Velocity Components in the Atmospheric Boundary Layer

    Science.gov (United States)

    Shamanaeva, L. G.; Krasnenko, N. P.; Kapegesheva, O. F.

    2018-04-01

    Diurnal dynamics of the standard deviation (SD) of three wind velocity components measured with a minisodar in the atmospheric boundary layer is analyzed. Statistical analysis of the measurement data demonstrates that the SDs of the x- and y-components, σx and σy, lie in the range from 0.2 to 4 m/s, and σz = 0.1-1.2 m/s. The increase of σx and σy with altitude is described sufficiently well by a power law with an exponent changing from 0.22 to 1.3 depending on the time of day, while σz increases linearly. Approximation constants are determined and the errors of their application are estimated. The maximal diurnal spread of SD values is found to be 56% for σx and σy and 94% for σz. The established physical laws and the obtained approximation constants allow the diurnal dynamics of the SDs of the three wind velocity components in the atmospheric boundary layer to be determined and can be recommended for use in models of the atmospheric boundary layer.
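
    A power-law profile of the kind reported above is commonly fitted by least squares in log-log coordinates; a tiny sketch with made-up values (the paper's data are not reproduced in the abstract):

```python
import numpy as np

z = np.array([50.0, 100.0, 150.0, 200.0])      # altitude, m (hypothetical)
sigma_x = np.array([0.8, 1.1, 1.4, 1.6])       # SD of x-wind, m/s (hypothetical)

# sigma(z) = c * z**alpha  <=>  log(sigma) = alpha * log(z) + log(c)
alpha, log_c = np.polyfit(np.log(z), np.log(sigma_x), 1)
print(f"alpha = {alpha:.2f}, c = {np.exp(log_c):.3f}")
```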

  11. De-trending of wind speed variance based on first-order and second-order statistical moments only

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2014-01-01

    The lack of efficient methods for de-trending of wind speed resource data may lead to erroneous wind turbine fatigue and ultimate load predictions. The present paper presents two models, which quantify the effect of an assumed linear trend on wind speed standard deviations as based on available statistical data only. The first model is a pure time series analysis approach, which quantifies the effect of non-stationary characteristics of ensemble mean wind speeds on the estimated wind speed standard deviations as based on mean wind speed statistics only. This model is applicable to statistics of arbitrary types of time series. The second model uses the full set of information and thus additionally includes observed wind speed standard deviations to estimate the effect of ensemble mean non-stationarities on wind speed standard deviations. This model takes advantage of a simple physical relationship...
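
    One elementary version of the idea can be written down directly: a pure linear ramp a·t over a block [0, T] contributes exactly a²T²/12 to the block variance, so an estimated trend contribution can be subtracted from a measured 10-min variance. This is an illustration under that assumption, not either of the paper's two models.

```python
import numpy as np

def detrended_std(sigma_raw, u_prev, u_next, T=600.0):
    """Remove an assumed linear trend's contribution from a 10-min wind SD.
    The slope is crudely estimated from the neighbouring block means (block
    centres are 2*T apart); a ramp a*t over [0, T] adds a^2*T^2/12 to the
    block variance."""
    a = (u_next - u_prev) / (2.0 * T)
    var_trend = a ** 2 * T ** 2 / 12.0
    return np.sqrt(max(sigma_raw ** 2 - var_trend, 0.0))

# e.g. 10-min SD of 1.2 m/s, neighbouring 10-min means of 7.0 and 9.0 m/s
print(detrended_std(1.2, 7.0, 9.0))
```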

  12. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic is given by the standard normal cumulative distribution function evaluated at a quantity that depends on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic is given by the standard normal cumulative distribution function evaluated at the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
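
    The binormal relationships described above are straightforward to evaluate. A small standard-library sketch; the equal-variance identity c = Φ(σβ/√2) follows because the per-unit log odds ratio is β = (μ1 - μ0)/σ².

```python
from math import sqrt
from statistics import NormalDist

def c_binormal(mu0, mu1, sd0, sd1):
    """Predicted c-statistic for a marker normal in both groups:
    c = Phi((mu1 - mu0) / sqrt(sd0**2 + sd1**2))."""
    return NormalDist().cdf((mu1 - mu0) / sqrt(sd0 ** 2 + sd1 ** 2))

def c_from_log_or(beta, sd):
    """Equal-variance case via the log odds ratio beta = (mu1 - mu0) / sd**2."""
    return NormalDist().cdf(sd * beta / sqrt(2.0))

print(c_binormal(0.0, 1.0, 1.0, 1.0), c_from_log_or(1.0, 1.0))  # both ~0.760
```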

  13. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem

  14. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...

  15. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

    Full Text Available Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; these intrinsically assume that the ionosphere field is stochastically stationary but do not take random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of the ionosphere and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the
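
    For orientation, ordinary kriging weights solve a small linear system built from the semivariogram. A minimal sketch of the classical method, without the unknown-variance-component extension this paper proposes:

```python
import numpy as np

def ordinary_kriging_weights(Gamma, gamma0):
    """Solve [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma0; 1] for the kriging
    weights w; Gamma holds semivariogram values between stations, gamma0
    those between each station and the target point."""
    n = len(gamma0)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = Gamma
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(gamma0, 1.0)
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]          # weights and Lagrange multiplier

# predicted TEC at the target point: weights @ observed_tec
```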

  16. An explicit local uniform large deviation bound for Brownian bridges

    NARCIS (Netherlands)

    Wittich, O.

    2005-01-01

    By comparing curve length in a manifold and a standard sphere, we prove a local uniform bound for the exponent in the Large Deviation formula that describes the concentration of Brownian bridges to geodesics.

  17. Effect of nasal deviation on quality of life.

    Science.gov (United States)

    de Lima Ramos, Sueli; Hochman, Bernardo; Gomes, Heitor Carvalho; Abla, Luiz Eduardo Felipe; Veiga, Daniela Francescato; Juliano, Yara; Dini, Gal Moreira; Ferreira, Lydia Masako

    2011-07-01

    Nasal deviation is a common complaint in otorhinolaryngology and plastic surgery. This condition not only causes impairment of nasal function but also affects quality of life, leading to psychological distress. The subjective assessment of quality of life, as an important aspect of outcomes research, has received increasing attention in recent decades. Quality of life is measured using standardized questionnaires that have been tested for reliability, validity, and sensitivity. The aim of this study was to evaluate health-related quality of life, self-esteem, and depression in patients with nasal deviation. Sixty patients were selected for the study. Patients with nasal deviation (n = 32) were assigned to the study group, and patients without nasal deviation (n = 28) were assigned to the control group. The diagnosis of nasal deviation was made by digital photogrammetry. Quality of life was assessed using the Medical Outcomes Study 36-Item Short Form Health Survey questionnaire; the Rosenberg Self-Esteem/Federal University of São Paulo, Escola Paulista de Medicina Scale; and the 20-item Self-Report Questionnaire. There were significant differences between groups in the physical functioning and general health subscales of the Medical Outcomes Study 36-Item Short Form Health Survey (p < 0.05). Depression was detected in 11 patients (34.4 percent) in the study group and in two patients in the control group, with a significant difference between groups (p < 0.05). Nasal deviation is an aspect of rhinoplasty of which the surgeon should be aware so that proper psychological diagnosis can be made and suitable treatment can be planned, because psychologically the patients with nasal deviation have significantly worse quality of life and are more prone to depression. Level of evidence: Risk, II.

  18. Autoregressive moving average fitting for real standard deviation in Monte Carlo power distribution calculation

    International Nuclear Information System (INIS)

    Ueki, Taro

    2010-01-01

    The noise propagation of tallies in the Monte Carlo power method can be represented by an autoregressive moving average process of orders p and p-1 [ARMA(p,p-1)], where p is an integer larger than or equal to two. The formula for the autocorrelation of ARMA(p,q), p≥q+1, indicates that ARMA(3,2) fitting is equivalent to lumping the eigenmodes of fluctuation propagation into three modes, namely the slow, intermediate and fast attenuation modes. Therefore, ARMA(3,2) fitting was applied to the real standard deviation estimation of fuel assemblies at particular heights. The numerical results show that straightforward ARMA(3,2) fitting is promising, but a stability issue must be resolved before incorporation into the distributed version of production Monte Carlo codes. The same numerical results reveal that the average performance of ARMA(3,2) fitting is equivalent to that of the batch method in MCNP with a batch size larger than one hundred and smaller than two hundred cycles for a 1100 MWe pressurized water reactor. The bias correction of low-lag autocovariances in MVP/GMVP is demonstrated to have the potential to improve the average performance of ARMA(3,2) fitting. (author)
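
    A fit of this kind can be reproduced with statsmodels, where ARMA(3,2) is specified as ARIMA(3,0,2); made-up AR(1)-like noise stands in for real tally data below, and the final line is just the textbook ARMA long-run-variance correction to the naive standard deviation of a mean, not the paper's exact estimator.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
x = np.zeros(500)
for i in range(1, 500):                 # hypothetical correlated tally series
    x[i] = 0.8 * x[i - 1] + rng.normal()

res = ARIMA(x, order=(3, 0, 2)).fit()   # ARMA(3,2)
phi1 = 1.0 - np.sum(res.arparams)       # AR polynomial evaluated at 1
theta1 = 1.0 + np.sum(res.maparams)     # MA polynomial evaluated at 1
sigma2 = res.params[-1]                 # innovation variance
real_sd_of_mean = np.sqrt(sigma2 * (theta1 / phi1) ** 2 / len(x))
print(real_sd_of_mean)
```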

  19. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients.

    Science.gov (United States)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2017-11-01

    Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
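
    One standard "adequate transformation" for such closed data is Aitchison's centred log-ratio (clr); a minimal sketch (zero fractions must be handled before taking logarithms):

```python
import numpy as np

def clr(parts):
    """Centred log-ratio transform of compositions (last axis sums to a
    constant such as 1 or 100); returns unconstrained coordinates on which
    means, SDs and correlations can be computed meaningfully."""
    p = np.asarray(parts, dtype=float)
    logp = np.log(p)
    return logp - logp.mean(axis=-1, keepdims=True)

# e.g. three waste fractions (%) from two samples (hypothetical values)
print(clr([[2.2, 37.8, 60.0], [1.5, 40.5, 58.0]]))
```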

  20. Note onset deviations as musical piece signatures.

    Science.gov (United States)

    Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis

    2013-01-01

    A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.

  1. Note onset deviations as musical piece signatures.

    Directory of Open Access Journals (Sweden)

    Joan Serrà

    Full Text Available A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.

  2. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation using C interfacing, with an option for parallel computing, and (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  3. Analysis of ulnar variance as a risk factor for developing scaphoid nonunion.

    Science.gov (United States)

    Lirola-Palmero, S; Salvà-Coll, G; Terrades-Cladera, F J

    2015-01-01

    Ulnar variance may be a risk factor for developing scaphoid nonunion. A review was made of the posteroanterior wrist radiographs of 95 patients who were diagnosed with scaphoid fracture. All fractures with displacement of less than 1 mm treated conservatively were included. Ulnar variance was measured in standard posteroanterior wrist radiographs of all 95 patients. Eighteen patients (19%) developed scaphoid nonunion, with a mean ulnar variance of -1.34 (±0.85) mm (CI -2.25 to 0.41). Seventy-seven patients (81%) healed correctly, with a mean ulnar variance of -0.04 (±1.85) mm (CI -0.46 to 0.38). A significant difference was observed in the distribution of ulnar variance between patients with ulnar variance of less than -1 mm and those with ulnar variance greater than -1 mm: patients with ulnar variance of less than -1 mm had a greater risk of developing scaphoid nonunion, OR 4.58 (CI 1.51 to 13.89), p<.007. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.

  4. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form, all variance components are estimated from observations recorded under conventional milking systems, and the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...

  5. Volumetric segmentation of ADC maps and utility of standard deviation as measure of tumor heterogeneity in soft tissue tumors.

    Science.gov (United States)

    Singer, Adam D; Pattany, Pradip M; Fayad, Laura M; Tresley, Jonathan; Subhawong, Ty K

    2016-01-01

    Determine interobserver concordance of semiautomated three-dimensional volumetric and two-dimensional manual measurements of apparent diffusion coefficient (ADC) values in soft tissue masses (STMs) and explore standard deviation (SD) as a measure of tumor ADC heterogeneity. Concordance correlation coefficients for mean ADC increased with more extensive sampling. Agreement on the SD of tumor ADC values was better for large regions of interest and multislice methods. Correlation between mean and SD ADC was low, suggesting that these parameters are relatively independent. Mean ADC of STMs can be determined by volumetric quantification with high interobserver agreement. STM heterogeneity merits further investigation as a potential imaging biomarker that complements other functional magnetic resonance imaging parameters. Copyright © 2016 Elsevier Inc. All rights reserved.
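
    The concordance correlation coefficient used for such interobserver comparisons is conventionally Lin's CCC; a sketch of the textbook estimator (the paper's exact variant is not specified in the abstract):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()        # population (ddof=0) moments
    return 2.0 * sxy / (x.var() + y.var() + (mx - my) ** 2)

# e.g. mean tumor ADC (x10^-3 mm^2/s) from two observers (hypothetical)
print(lin_ccc([1.10, 1.42, 0.95, 1.30], [1.12, 1.38, 0.99, 1.27]))
```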

  6. Cumulative Prospect Theory, Option Returns, and the Variance Premium

    NARCIS (Netherlands)

    Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver

    The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the

  7. Assessing the stock market volatility for different sectors in Malaysia by using standard deviation and EWMA methods

    Science.gov (United States)

    Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd

    2017-11-01

    Nowadays, the concept of volatility, especially in stock markets, has gained considerable attention from those engaged in the financial and economic sectors. Applications of volatility in financial economics include the valuation of option pricing, the estimation of financial derivatives, the hedging of investment risk, and so on. There are various ways to measure volatility. In this study, two methods are used: the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure the volatility of three different business sectors in Malaysia, called primary, secondary and tertiary, using both methods. The daily and annual volatilities of the different business sectors, based on stock prices for the period 1 January 2014 to December 2014, have been calculated. Results show that different patterns of closing stock prices and returns give different volatility values when calculated using the simple method and the EWMA method.
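
    The EWMA recursion itself is one line; a sketch in the common RiskMetrics form, with λ = 0.94 as the usual daily choice (the paper's exact parameterization is not stated in the abstract):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                 # initialize with the sample variance
    for t in range(1, len(r)):
        sigma2[t] = lam * sigma2[t - 1] + (1.0 - lam) * r[t - 1] ** 2
    return np.sqrt(sigma2)

# daily-to-annual scaling, assuming ~252 trading days:
# sigma_annual = sigma_daily * np.sqrt(252)
```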

  8. Bus Travel Time Deviation Analysis Using Automatic Vehicle Location Data and Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Xiaolin Gong

    2015-01-01

    Full Text Available To investigate the influences of causes of unreliability and the bus schedule recovery phenomenon on microscopic segment-level travel time variance, this study adopts Structural Equation Modeling (SEM) to specify, estimate, and measure the theoretically proposed models. The SEM model establishes and verifies hypotheses for the interrelationships among travel time deviations, departure delays, segment lengths, dwell times, and the number of traffic signals and access connections. The finally accepted model demonstrates excellent fit. Most of the hypotheses are supported by the sample dataset from a bus Automatic Vehicle Location system. The SEM model confirms the bus schedule recovery phenomenon. The departure delays at bus terminals and upstream travel time deviations indeed have negative impacts on the travel time fluctuation of buses en route. Meanwhile, the segment length directly and negatively impacts travel time variability and conversely contributes positively to the schedule recovery process; this exogenous variable also indirectly and positively influences travel times through the existence of signalized intersections and access connections. This study offers a rational approach to analyzing travel time deviation features. The SEM model structure and estimation results facilitate the understanding of bus service performance characteristics and provide several implications for bus service planning, management, and operation.

  9. Deviation from intention to treat analysis in randomised trials and treatment effect estimates: meta-epidemiological study.

    Science.gov (United States)

    Abraha, Iosief; Cherubini, Antonio; Cozzolino, Francesco; De Florio, Rita; Luchetta, Maria Laura; Rimland, Joseph M; Folletti, Ilenia; Marchesi, Mauro; Germani, Antonella; Orso, Massimiliano; Eusebi, Paolo; Montedori, Alessandro

    2015-05-27

    To examine whether deviation from the standard intention to treat analysis has an influence on treatment effect estimates of randomised trials. Meta-epidemiological study. Medline, via PubMed, searched between 2006 and 2010; 43 systematic reviews of interventions and 310 randomised trials were included. From each year searched, a random selection was made of 5% of intervention reviews with a meta-analysis that included at least one trial that deviated from the standard intention to treat approach. Basic characteristics of the systematic reviews and randomised trials were extracted. Information on the reporting of intention to treat analysis, outcome data, risk of bias items, post-randomisation exclusions, and funding were extracted from each trial. Trials were classified as: ITT (reporting the standard intention to treat approach), mITT (reporting a deviation from the standard approach), and no ITT (reporting no approach). Within each meta-analysis, treatment effects were compared between mITT and ITT trials, and between mITT and no ITT trials, and the ratio of odds ratios was calculated. Trials that deviated from the intention to treat analysis showed larger intervention effects than trials that reported the standard approach. Where an intention to treat analysis is impossible to perform, authors should clearly report who is included in the analysis and attempt to perform multiple imputations. © Abraha et al 2015.

  10. Poorer right ventricular systolic function and exercise capacity in women after repair of tetralogy of fallot: a sex comparison of standard deviation scores based on sex-specific reference values in healthy control subjects.

    Science.gov (United States)

    Sarikouch, Samir; Boethig, Dietmar; Peters, Brigitte; Kropf, Siegfried; Dubowy, Karl-Otto; Lange, Peter; Kuehne, Titus; Haverich, Axel; Beerbaum, Philipp

    2013-11-01

    In repaired congenital heart disease, there is increasing evidence of sex differences in cardiac remodeling, but there is a lack of comparable data for specific congenital heart defects such as repaired tetralogy of Fallot. In a prospective multicenter study, a cohort of 272 contemporary patients (158 men; mean age, 14.3±3.3 years [range, 8-20 years]) with repaired tetralogy of Fallot underwent cardiac magnetic resonance for ventricular function and metabolic exercise testing. All data were transformed to standard deviation scores according to the Lambda-Mu-Sigma method by relating individual values to their respective 50th percentile (standard deviation score, 0) in sex-specific healthy control subjects. No sex differences were observed in age at repair, type of repair conducted, or overall hemodynamic results. Relative to sex-specific controls, women with repaired tetralogy of Fallot had larger right ventricular end-systolic volumes (standard deviation scores: women, 4.35; men, 3.25; P=0.001), lower right ventricular ejection fraction (women, -2.83; men, -2.12; P=0.011), lower right ventricular muscle mass (women, 1.58; men, 2.45; P=0.001), and poorer peak oxygen uptake (women, -1.65; men, -1.14). These standard deviation scores in repaired tetralogy of Fallot suggest that women perform more poorly than men in terms of right ventricular systolic function, as tested by cardiac magnetic resonance, and exercise capacity. This effect cannot be explained by selection bias. Further outcome data are required from longitudinal cohort studies.
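
    The Lambda-Mu-Sigma method referred to above converts a measurement into a standard deviation score through Cole's formula; a minimal sketch (the reference L, M, S values come from the sex-specific healthy controls and are not reproduced here, so the numbers below are hypothetical):

```python
from math import log

def lms_z(x, L, M, S):
    """Cole's LMS z-score: ((x/M)**L - 1) / (L*S), with the log-limit at L=0."""
    if abs(L) < 1e-8:
        return log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# hypothetical: measured RV end-systolic volume of 95 ml against reference
# values L=0.3, M=55 ml, S=0.18 gives the SDS relative to healthy controls
print(lms_z(95.0, 0.3, 55.0, 0.18))
```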

  11. Ensemble standard deviation of wind speed and direction of the FDDA input to WRF

    Data.gov (United States)

    U.S. Environmental Protection Agency — NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains standard...

  12. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic with the actual sample design to the variance of that statistic with a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which had been set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions. The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tillé, 2000 is a well-established method for obtaining variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999, which relies on the concept of influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986, has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
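
    As a toy illustration of the linearization step, here is the textbook first-order variance estimator for a simple ratio under simple random sampling; the Laeken indicators treated in the paper require the influence-function machinery instead.

```python
import numpy as np

def ratio_variance_taylor(y, x):
    """Taylor-linearized variance of R = ybar/xbar: linearize with
    z_i = (y_i - R * x_i) / xbar, then estimate var(zbar) = s_z^2 / n."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    n, xbar = len(y), x.mean()
    R = y.mean() / xbar
    z = (y - R * x) / xbar
    return R, z.var(ddof=1) / n

R, vR = ratio_variance_taylor([12.0, 9.5, 14.1, 11.3], [10.0, 8.0, 12.5, 9.8])
print(R, np.sqrt(vR))   # ratio and its linearization standard error
```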

  13. Properties of pattern standard deviation in open-angle glaucoma patients with hemi-optic neuropathy and bi-optic neuropathy.

    Science.gov (United States)

    Heo, Dong Won; Kim, Kyoung Nam; Lee, Min Woo; Lee, Sung Bok; Kim, Chang-Sik

    2017-01-01

    To evaluate the properties of pattern standard deviation (PSD) according to localization of the glaucomatous optic neuropathy. We enrolled 242 eyes of 242 patients with primary open-angle glaucoma, with a best-corrected visual acuity ≥ 20/25, and no media opacity. Patients were examined via dilated fundus photography, spectral-domain optical coherence tomography, and Humphrey visual field examination, and divided into those with hemi-optic neuropathy (superior or inferior) and bi-optic neuropathy (both superior and inferior). We assessed the relationship between mean deviation (MD) and PSD. Using broken stick regression analysis, the tipping point was identified, i.e., the point at which MD became significantly associated with a paradoxical reversal of PSD. In 91 patients with hemi-optic neuropathy, PSD showed a strong correlation with MD (r = -0.973, β = -0.965, p < 0.001). The difference between MD and PSD ("-MD-PSD") was constant (mean, -0.32 dB; 95% confidence interval, -2.48~1.84 dB) regardless of visual field defect severity. However, in 151 patients with bi-optic neuropathy, a negative correlation was evident between "-MD-PSD" and MD (r2 = 0.907, p < 0.001). Overall, the MD tipping point was -14.0 dB, which was close to approximately 50% damage of the entire visual field (p < 0.001). Although a false decrease of PSD usually begins at approximately 50% visual field damage, in patients with hemi-optic neuropathy, the PSD shows no paradoxical decrease and shows a linear correlation with MD.

  14. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσv/σv) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic

  15. How the Weak Variance of Momentum Can Turn Out to be Negative

    Science.gov (United States)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of 'subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.

  16. Mean-Variance portfolio optimization when each asset has individual uncertain exit-time

    Directory of Open Access Journals (Sweden)

    Reza Keykhaei

    2016-12-01

    Full Text Available The standard Markowitz mean-variance optimization model is a single-period portfolio selection approach in which the exit-time (or time-horizon) is deterministic. In this paper we study the mean-variance portfolio selection problem with uncertain exit-time, when each asset has an individual uncertain exit-time, which generalizes the Markowitz model. We provide some conditions under which the optimal portfolio of the generalized problem is independent of the exit-time distributions. Also, it is shown that under some general circumstances, the sets of optimal portfolios in the generalized model and the standard model are the same.
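
    For reference, the deterministic-horizon problem that this paper generalizes has a classical closed-form solution; a minimal sketch of the standard single-period Markowitz weights:

```python
import numpy as np

def markowitz_weights(mu, Sigma, target):
    """Minimize w'Sigma w subject to w'mu = target and w'1 = 1."""
    ones = np.ones(len(mu))
    Si = np.linalg.inv(Sigma)
    A = ones @ Si @ ones
    B = ones @ Si @ mu
    C = mu @ Si @ mu
    D = A * C - B * B
    lam = (A * target - B) / D
    gam = (C - B * target) / D
    return Si @ (lam * mu + gam * ones)

mu = np.array([0.08, 0.10, 0.12])                  # hypothetical mean returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])             # hypothetical covariances
w = markowitz_weights(mu, Sigma, target=0.10)
print(w, w @ mu, w @ Sigma @ w)                    # weights, mean, variance
```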

  17. Large deviations and idempotent probability

    CERN Document Server

    Puhalskii, Anatolii

    2001-01-01

    In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...

  18. Final height in survivors of childhood cancer compared with Height Standard Deviation Scores at diagnosis.

    Science.gov (United States)

    Knijnenburg, S L; Raemaekers, S; van den Berg, H; van Dijk, I W E M; Lieverst, J A; van der Pal, H J; Jaspers, M W M; Caron, H N; Kremer, L C; van Santen, H M

    2013-04-01

    Our study aimed to evaluate final height in a cohort of Dutch childhood cancer survivors (CCS) and assess possible determinants of final height, including height at diagnosis. We calculated standard deviation scores (SDS) for height at initial cancer diagnosis and height in adulthood in a cohort of 573 CCS. Multivariable regression analyses were carried out to estimate the influence of different determinants on height SDS at follow-up. Overall, survivors had a normal height SDS at cancer diagnosis. However, at follow-up in adulthood, 8.9% had a height ≤-2 SDS. Height SDS at diagnosis was an important determinant for adult height SDS. Children treated with (higher doses of) radiotherapy showed significantly reduced final height SDS. Survivors treated with total body irradiation (TBI) and craniospinal radiation had the greatest loss in height (-1.56 and -1.37 SDS, respectively). Younger age at diagnosis contributed negatively to final height. Height at diagnosis was an important determinant for height SDS at follow-up. Survivors treated with TBI, cranial and craniospinal irradiation should be monitored periodically for adequate linear growth, to enable treatment on time if necessary. For correct interpretation of treatment-related late effects studies in CCS, pre-treatment data should always be included.

  19. School Audits and School Improvement: Exploring the Variance Point Concept in Kentucky's... Schools

    Directory of Open Access Journals (Sweden)

    Robert Lyons

    2011-01-01

    Full Text Available As a diagnostic intervention (Bowles, Churchill, Effrat, & McDermott, 2002) for schools failing to meet school improvement goals, Kentucky used a scholastic audit process based on nine standards and 88 associated indicators called the Standards and Indicators for School Improvement (SISI). Schools are rated on a scale of 1-4 on each indicator, with a score of 3 considered fully functional (Kentucky Department of Education [KDE], 2002). As part of enacting the legislation, KDE was required to also audit a random sample of schools that did meet school improvement goals, thereby identifying practices present in improving schools that are not present in those failing to improve. These practices were referred to as variance points and were reported to school leaders annually. Variance points have differed from year to year, and the methodology used by KDE was unclear. Moreover, variance points were reported for all schools without differentiating based upon the level of school (elementary, middle, or high). In this study, we established a transparent methodology for variance point determination that differentiates between elementary, middle, and high schools.

  20. Efficient Scores, Variance Decompositions and Monte Carlo Swindles.

    Science.gov (United States)

    1984-08-28

    ... Then a version of Pythagoras' theorem gives the variance decomposition (6.1): var_P0(T) = var_P0(S) + var_P0(T - S). One way to see this is to note ... complete sufficient statistics for (β, σ), and that the standardized residuals (y - Xβ̂)/σ̂ are ancillary. Basu's sufficiency-ancillarity theorem ...
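
    That decomposition is the engine behind Monte Carlo swindles: simulate the low-variance component T - S and add the exactly known part. A self-contained sketch for the textbook mean-median example (not code from the report itself):

```python
import numpy as np

# Estimate E[median] of n standard normals. The sample mean S is complete
# sufficient for the location parameter and T - S is location-invariant
# (ancillary), so by Basu's theorem S and T - S are independent and
# var(T) = var(S) + var(T - S). Simulating T - S and adding E[S] = 0
# exactly removes var(S) from the Monte Carlo error.
rng = np.random.default_rng(0)
n, reps = 11, 20_000
x = rng.normal(size=(reps, n))
T = np.median(x, axis=1)
S = x.mean(axis=1)
print("naive:  ", T.mean(), "+/-", T.std() / np.sqrt(reps))
print("swindle:", (T - S).mean(), "+/-", (T - S).std() / np.sqrt(reps))
```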

  1. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1), 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)

  2. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.

  3. Loss aversion, large deviation preferences and optimal portfolio weights for some classes of return processes

    Science.gov (United States)

    Duffy, Ken; Lobunets, Olena; Suhov, Yuri

    2007-05-01

    We propose a model of a loss averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.

  4. Telemetry Standards, RCC Standard 106-17. Chapter 3. Frequency Division Multiplexing Telemetry Standards

    Science.gov (United States)

    2017-07-01

    [Document excerpt: RCC Standard 106-17, Chapter 3, "Frequency Division Multiplexing Telemetry Standards" (July 2017); the recoverable fragments list Table 3-4, Constant-Bandwidth FM Subcarrier Channels (channels A-H with deviation criteria), the front-matter acronym list, and Section 3.1, General.]

  5. 48 CFR 2001.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System, 2010-10-01. Individual deviations. 2001... Individual deviations. In individual cases, deviations from either the FAR or the NRCAR will be authorized... deviations clearly in the best interest of the Government. Individual deviations must be authorized in...

  6. Approaching nanometre accuracy in measurement of the profile deviation of a large plane mirror

    International Nuclear Information System (INIS)

    Müller, Andreas; Hofmann, Norbert; Manske, Eberhard

    2012-01-01

    The interferometric nanoprofilometer (INP), developed at the Institute of Process Measurement and Sensor Technology at the Ilmenau University of Technology, is a precision device for measuring the profile deviations of plane mirrors with a profile length of up to 250 mm at the nanometre scale. As its expanded uncertainty of U(l) = 7.8 nm at a confidence level of p = 95% (k = 2) was mainly influenced by the uncertainty of the straightness standard (3.6 nm) and the uncertainty caused by the signal and demodulation errors of the interferometer signals (1.2 nm), these two sources of uncertainty have been the subject of recent analyses and modifications. To measure the profile deviation of the standard mirror we performed a classic three-flat test using the INP. The three-flat test consists of a combination of measurements between three different test flats; the shape deviations of the three flats can then be determined by applying a least-squares solution to the resulting equation system. The results of this three-flat test showed surprisingly good consistency, enabling us to correct this systematic error in profile deviation measurements and to reduce the uncertainty component of the standard mirror to 0.4 nm. Another area of research is the signal and demodulation error arising during the interpretation of the interferometer signals. For the interferometric nanoprofilometer, the special challenge is that during a scan of a perfectly aligned 250 mm mirror the maximum path length differences are too small for proper interpolation and correction, since they do not cover even half of an interference fringe. By applying a simple method of weighting to the interferometer data, the common ellipse fitting could be performed successfully and the demodulation error was greatly reduced. The remaining uncertainty component is less than 0.5 nm. In summary we were successful in greatly reducing two major systematic errors. The
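
    Heydemann-type corrections of quadrature interferometer signals rest on a least-squares ellipse fit; a minimal weighted sketch (the paper does not detail its weighting scheme, so the weights here are a generic assumption):

```python
import numpy as np

def fit_ellipse(u, v, w=None):
    """Weighted least-squares fit of A*u^2 + B*u*v + C*v^2 + D*u + E*v = 1
    to quadrature samples (u, v); the coefficients parameterize the offset,
    gain and phase errors to be corrected."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    M = np.column_stack([u * u, u * v, v * v, u, v])
    b = np.ones_like(u)
    if w is not None:                  # optional per-sample weights
        sw = np.sqrt(np.asarray(w, dtype=float))
        M, b = M * sw[:, None], b * sw
    coef, *_ = np.linalg.lstsq(M, b, rcond=None)
    return coef                        # (A, B, C, D, E)
```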

  7. 48 CFR 801.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations. 801... Individual deviations. (a) Authority to authorize individual deviations from the FAR and VAAR is delegated to... nature of the deviation. (d) The DSPE may authorize individual deviations from the FAR and VAAR when an...

  8. Detecting deviating behaviors without models

    NARCIS (Netherlands)

    Lu, X.; Fahland, D.; van den Biggelaar, F.J.H.M.; van der Aalst, W.M.P.; Reichert, M.; Reijers, H.A.

    2016-01-01

    Deviation detection is a set of techniques that identify deviations from normative processes in real process executions. These diagnostics are used to derive recommendations for improving business processes. Existing detection techniques identify deviations either only on the process instance level

  9. 75 FR 6364 - Process for Requesting a Variance From Vegetation Standards for Levees and Floodwalls

    Science.gov (United States)

    2010-02-09

    ..., channels, or shore- line or river-bank protection systems such as revetments, sand dunes, and barrier...) toe (subject to preexisting right-of-way). f. The vegetation variance process is not a mechanism to...

  10. How random is a random vector?

    Science.gov (United States)

    Eliazar, Iddo

    2015-12-01

    Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation", the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index", a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams", tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
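
    As a quick illustration of the quantities named in this abstract, the sketch below computes the generalized variance and its square root (the Wilks standard deviation) for a correlated sample using NumPy. The final ratio to the product of marginal standard deviations is only one plausible reading of an "uncorrelation index"; the paper's exact definition may differ, so treat it as illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 3-dimensional sample: x3 is a noisy mix of x1 and x2.
n = 10_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.7 * x1 + 0.3 * x2 + 0.1 * rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

cov = np.cov(X, rowvar=False)          # sample covariance matrix
gen_var = np.linalg.det(cov)           # Wilks' generalized variance
wilks_sd = np.sqrt(gen_var)            # "Wilks standard deviation"

# Illustrative only: ratio of the Wilks SD to the product of the marginal
# SDs; it equals 1 for uncorrelated components and shrinks toward 0 as
# overall correlation grows.
ratio = wilks_sd / np.prod(np.sqrt(np.diag(cov)))

print(f"generalized variance = {gen_var:.4g}")
print(f"Wilks SD             = {wilks_sd:.4g}")
print(f"uncorrelation ratio  = {ratio:.3f}")
```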

  11. Implementation of an Algorithm for Prosthetic Joint Infection: Deviations and Problems.

    Science.gov (United States)

    Mühlhofer, Heinrich M L; Kanz, Karl-Georg; Pohlig, Florian; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; von Eisenhart-Rothe, Ruediger; Schauwecker, Johannes

    The outcome of revision surgery in arthroplasty is based on a precise diagnosis. In addition, the treatment varies based on whether the prosthetic failure is caused by aseptic or septic loosening. Algorithms can help to identify periprosthetic joint infections (PJI) and standardize diagnostic steps; however, algorithms tend to oversimplify the treatment of complex cases. We conducted a process analysis during the implementation of a PJI algorithm to determine problems and deviations associated with the implementation of this algorithm. Fifty patients who were treated after implementing a standardized algorithm were monitored retrospectively. Their treatment plans and diagnostic cascades were analyzed for deviations from the implemented algorithm. Each diagnostic procedure was recorded, compared with the algorithm, and evaluated statistically. We detected 52 deviations while treating 50 patients. In 25 cases, no discrepancy was observed. Synovial fluid aspiration was not performed in 31.8% of patients (95% confidence interval [CI], 18.1%-45.6%), while white blood cell counts (WBCs) and neutrophil differentiation were assessed in 54.5% of patients (95% CI, 39.8%-69.3%). We also observed that the prolonged incubation of cultures was not requested in 13.6% of patients (95% CI, 3.5%-23.8%). In seven of 13 cases (63.6%; 95% CI, 35.2%-92.1%), arthroscopic biopsy was performed; 6 arthroscopies were performed in discordance with the algorithm (12%; 95% CI, 3%-21%). Self-critical analysis of diagnostic processes and monitoring of deviations using algorithms are important and could increase the quality of treatment by revealing recurring faults.

  12. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    Science.gov (United States)

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders. Children with ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males) were compared with two non-referred groups [a control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent-reported gender variance than in the combined medical group (1.7%) or the non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from the non-referred samples in likelihood to express gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  13. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models, and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation of streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers strictly adhere to rules from extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events in the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.

  14. 48 CFR 1501.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1501.403 Section 1501.403 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL GENERAL Deviations 1501.403 Individual deviations. Requests for individual deviations from the FAR and the...

  15. 48 CFR 2401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2401... DEVELOPMENT GENERAL FEDERAL ACQUISITION REGULATION SYSTEM Deviations 2401.403 Individual deviations. In individual cases, proposed deviations from the FAR or HUDAR shall be submitted to the Senior Procurement...

  16. Simple standard problem for the Preisach moving model

    International Nuclear Information System (INIS)

    Morentin, F.J.; Alejos, O.; Francisco, C. de; Munoz, J.M.; Hernandez-Gomez, P.; Torres, C.

    2004-01-01

    The present work proposes a simple magnetic system as a candidate for a Standard Problem for Preisach-based models. The system consists of a regular square array of magnetic particles fully oriented along the direction of application of an external magnetic field. The behavior of such a system was numerically simulated for different values of the interaction between particles and of the standard deviation of the critical fields of the particles. The characteristic parameters of the Preisach moving model were worked out during simulations, i.e., the mean value and the standard deviation of the interaction field. For this system, results reveal that the mean interaction field depends linearly on the system magnetization, as the Preisach moving model predicts. Nevertheless, the standard deviation cannot be considered independent of the magnetization. In fact, the standard deviation shows a maximum at demagnetization and two minima at magnetization saturation. Furthermore, not all the demagnetization states are equivalent. The plot of standard deviation vs. magnetization is a multi-valued curve when the system undergoes an AC demagnetization procedure. In this way, the standard deviation increases as the system goes from coercivity to the AC demagnetized state

  17. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
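
    The abstract contrasts exact rarefaction formulae with Monte Carlo subsampling. Since the PD formulae themselves are not reproduced here, the sketch below demonstrates the same contrast for species richness, whose exact rarefaction mean (Hurlbert's classical formula) is well established; the toy stem counts are invented.

```python
import numpy as np
from math import comb

# Stem counts per species (toy community).
counts = np.array([50, 20, 10, 5, 2, 1, 1, 1])
N, m = counts.sum(), 20  # total individuals, rarefied sample size

# Exact mean species richness under rarefaction (Hurlbert 1971):
# E[S_m] = sum_i [ 1 - C(N - N_i, m) / C(N, m) ]
exact = sum(1 - comb(N - ni, m) / comb(N, m) for ni in counts)

# Monte Carlo check: repeatedly subsample m individuals without replacement
# and count the species present, as in the paper's validation approach.
rng = np.random.default_rng(1)
pool = np.repeat(np.arange(len(counts)), counts)
draws = [len(np.unique(rng.choice(pool, size=m, replace=False)))
         for _ in range(5000)]

print(f"exact mean richness: {exact:.3f}")
print(f"Monte Carlo mean:    {np.mean(draws):.3f}  (var {np.var(draws):.3f})")
```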

  18. INDICATIVE MODEL OF DEVIATIONS IN PROJECT

    Directory of Open Access Journals (Sweden)

    Олена Борисівна ДАНЧЕНКО

    2016-02-01

    The article presents the process of constructing an indicator model of project deviations, based on a conceptual model of project deviations integrated management (PDIM). During a project, different causes (such as risks, changes, problems, crises, conflicts, and stress) lead to deviations in the integrated project indicators: time, cost, quality, and content. To define more precisely where deviations occur in the project and how dangerous they are for the whole project, an indicative model of project deviations is needed; it allows identifying the most dangerous deviations that require PDIM. The well-known IPMA Delta model was taken as the basis for evaluating project success. In an IPMA Delta assessment, the project management competence of an organization is estimated in three modules: the I-module ("Individuals", a self-assessment of personnel), the P-module ("Projects", a self-assessment of projects and/or programs), and the O-module ("Organization", interviews with selected people conducted while auditing the company). In building the indicative model of project deviations, the first step is the assessment of project management in the organization by IPMA Delta. Then a cognitive map and a matrix of system interconnections of the project are built, simulations are conducted, and a scale of deviations is constructed for the selected project, determining the size and place of deviations. To identify the detailed causes of deviations in project management, an extended system of indicators based on the indicators of the Project Excellence model is proposed. The proposed indicative model of deviations in projects allows estimating the size of deviations, identifying more accurately the place of negative deviations in the project, and providing the project manager with information for operational decision making in managing deviations during project implementation.

  19. Quantifying relative importance: Computing standardized effects in models with binary outcomes

    Science.gov (United States)

    Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.

    2018-01-01

    Scientists commonly ask questions about the relative importance of processes and then turn to statistical models for answers. Standardized coefficients are typically used in such situations, with the goal being to compare effects on a common scale. Traditional approaches to obtaining standardized coefficients were developed with idealized Gaussian variables in mind. When responses are binary, complications arise that impact standardization methods. In this paper, we review, evaluate, and propose new methods for standardizing coefficients from models that contain binary outcomes. We first consider the interpretability of unstandardized coefficients and then examine two main approaches to standardization. One approach, which we refer to as the Latent-Theoretical or LT method, assumes that underlying the binary observations there exists a latent, continuous propensity linearly related to the coefficients. A second approach, which we refer to as the Observed-Empirical or OE method, assumes responses are purely discrete and estimates error variance empirically via reference to a classical R2 estimator. We also evaluate the standard formula for calculating standardized coefficients based on standard deviations. Criticisms of this practice have been persistent, leading us to propose an alternative formula that is based on user-defined "relevant ranges". Finally, we implement all of the above in an open-source package for the statistical software R.
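
    A minimal sketch of the LT idea as described above, under the common McKelvey-Zavoina convention that the latent logistic error has variance π²/3. The data and plugged-in coefficients are stand-ins; a real analysis would take the coefficients from a fitted logistic regression, and the paper's own estimator may differ in detail.

```python
import numpy as np

# Toy data: one predictor, binary outcome from a latent logistic model.
rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
b0, b1 = -0.5, 1.2
y = (b0 + b1 * x + rng.logistic(size=n) > 0).astype(int)
print(f"outcome prevalence: {y.mean():.2f}")

# Assume b0_hat, b1_hat come from a fitted logistic regression; here we
# reuse the true values to keep the sketch self-contained.
b0_hat, b1_hat = b0, b1

# LT-style standardization: the latent response y* = Xb + e has error
# variance pi^2 / 3 under a logit link, so the latent SD is estimable.
var_latent = np.var(b0_hat + b1_hat * x) + np.pi**2 / 3
beta_std = b1_hat * np.std(x) / np.sqrt(var_latent)

print(f"standardized coefficient (LT method): {beta_std:.3f}")
```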

  20. 48 CFR 1301.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... DEPARTMENT OF COMMERCE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 1301.403 Individual deviations. The designee authorized to approve individual deviations from the FAR is set forth in CAM 1301.70. ...

  1. 48 CFR 301.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 301... ACQUISITION REGULATION SYSTEM Deviations From the FAR 301.403 Individual deviations. Contracting activities shall prepare requests for individual deviations to either the FAR or HHSAR in accordance with 301.470. ...

  2. 48 CFR 1201.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... FEDERAL ACQUISITION REGULATIONS SYSTEM 70-Deviations From the FAR and TAR 1201.403 Individual deviations... Executive Service (SES) official or that of a Flag Officer, may authorize individual deviations (unless (FAR...

  3. Energy and variance budgets of a diffusive staircase with implications for heat flux scaling

    Science.gov (United States)

    Hieronymus, M.; Carpenter, J. R.

    2016-02-01

    Diffusive convection, the mode of double-diffusive convection that occurs when both temperature and salinity increase with increasing depth, is commonplace throughout the high-latitude oceans, and diffusive staircases constitute an important heat transport process in the Arctic Ocean. Heat and buoyancy fluxes through these staircases are often estimated using flux laws deduced either from laboratory experiments or from simplified energy or variance budgets. We have performed direct numerical simulations of double-diffusive convection at a range of Rayleigh numbers and quantified the energy and variance budgets in detail. This allows us to compare the fluxes in our simulations to those derived using known flux laws and to quantify how well the simplified energy and variance budgets approximate the full budgets. The fluxes are found to agree well with earlier estimates at high Rayleigh numbers, but we find large deviations at low Rayleigh numbers. The close ties between the heat and buoyancy fluxes and the budgets of thermal variance and energy have been utilized to derive heat flux scaling laws in the field of thermal convection. The result is the so-called GL theory, which has been found to give accurate heat flux scaling laws in a very wide parameter range. Diffusive convection has many similarities to thermal convection, and an extension of the GL theory to diffusive convection is also presented and its predictions compared to the results from our numerical simulations.

  4. 48 CFR 501.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 501... Individual deviations. (a) An individual deviation affects only one contract action. (1) The Head of the Contracting Activity (HCA) must approve an individual deviation to the FAR. The authority to grant an...

  5. 48 CFR 401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 401... AGRICULTURE ACQUISITION REGULATION SYSTEM Deviations From the FAR and AGAR 401.403 Individual deviations. In individual cases, deviations from either the FAR or the AGAR will be authorized only when essential to effect...

  6. 48 CFR 2801.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2801... OF JUSTICE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR and JAR 2801.403 Individual deviations. Individual deviations from the FAR or the JAR shall be approved by the head of the contracting...

  7. Allan deviation analysis of financial return series

    Science.gov (United States)

    Hernández-Pérez, R.

    2012-05-01

    We perform a scaling analysis for the return series of different financial assets applying the Allan deviation (ADEV), which is used in time and frequency metrology to characterize quantitatively the stability of frequency standards, since it has been shown to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are daily opening-price series for assets from different markets during a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for absolute return series at short scales (the first one or two decades) decrease following approximately a scaling relation up to a point that differs for almost each asset, after which the ADEV deviates from scaling, which suggests that the presence of clustering, long-range dependence and non-stationarity signatures in the series drives the results for large observation intervals.
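
    For readers unfamiliar with the statistic, here is a minimal non-overlapping Allan deviation in NumPy; the white-noise check mirrors the abstract's observation that return series at short scales behave like an uncorrelated series. The paper's exact estimator variant (e.g. overlapping ADEV) may differ.

```python
import numpy as np

def allan_deviation(y: np.ndarray, tau: int) -> float:
    """Non-overlapping Allan deviation of series y at block size tau:
    ADEV(tau)^2 = 0.5 * mean( (ybar_{k+1} - ybar_k)^2 ),
    where ybar_k are means of consecutive blocks of length tau."""
    n_blocks = len(y) // tau
    if n_blocks < 2:
        raise ValueError("series too short for this tau")
    block_means = y[: n_blocks * tau].reshape(n_blocks, tau).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2))

# Illustration on synthetic i.i.d. "returns": for white noise the ADEV
# is expected to fall roughly as tau^(-1/2).
rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.01, size=2500)
for tau in (1, 4, 16, 64):
    print(f"tau={tau:3d}  ADEV={allan_deviation(returns, tau):.5f}")
```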

  8. [Study on physical deviation factors on laser induced breakdown spectroscopy measurement].

    Science.gov (United States)

    Wan, Xiong; Wang, Peng; Wang, Qi; Zhang, Qing; Zhang, Zhi-Min; Zhang, Hua-Ming

    2013-10-01

    In order to eliminate the deviation between measured LIBS spectral lines and the standard LIBS spectral lines, and to improve the accuracy of element measurement, a study of physical deviation factors in laser-induced breakdown spectroscopy was carried out. Under the same experimental conditions, the relationship between the ablated-hole effect and spectral wavelength was tested, and the Stark broadening of Mg plasma laser-induced breakdown spectra with sampling time delays from 1.00 to 3.00 µs was also studied; thus physical deviation influences such as the ablated-hole effect and Stark broadening could be quantified while collecting the spectrum. The results and the method of this analysis can also be applied to other laser-induced breakdown spectroscopy experimental systems, which is of great significance for improving the accuracy of LIBS element measurement and for research on the optimum sampling time delay of LIBS.

  9. Large-deviation theory for diluted Wishart random matrices

    Science.gov (United States)

    Castillo, Isaac Pérez; Metz, Fernando L.

    2018-03-01

    Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economy. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ R+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ exp[-N Ψ_x(k)] follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.

  10. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...

  11. Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting

    Directory of Open Access Journals (Sweden)

    Stanislav Anatolyev

    2015-08-01

    Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while the distributional asymmetry has little or moderate impact; these phenomena tend to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
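
    A small sketch of the reparameterization behind variance targeting in a GARCH(1,1): the intercept is pinned to ω = σ̄²(1 − α − β) using the sample variance σ̄², so only (α, β) remain for QML. The simulation and the plugged-in (α, β) values are illustrative; in practice they would come from the optimizer.

```python
import numpy as np

# Simulate a GARCH(1,1) path:
#   sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
rng = np.random.default_rng(4)
omega, alpha, beta = 0.05, 0.08, 0.90
n = 20_000
r = np.empty(n)
sigma2 = omega / (1 - alpha - beta)  # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(sigma2) * rng.normal()
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

# Variance targeting: fix omega from the sample variance instead of
# estimating it, leaving only (alpha, beta) for QML.
sample_var = r.var()
omega_targeted = sample_var * (1 - alpha - beta)
print(f"true omega: {omega:.4f}   targeted omega: {omega_targeted:.4f}")
```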

  12. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  13. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  14. Mean-Variance Efficiency of the Market Portfolio

    OpenAIRE

    Rafael Falcão Noda; Roy Martelanc; José Roberto Securato

    2014-01-01

    The objective of this study is to answer the criticism to the CAPM based on findings that the market portfolio is far from the efficient frontier. We run a numeric optimization model, based on Brazilian stock market data from 2003 to 2012. For each asset, we obtain adjusted returns and standard deviations such that (i) the efficient frontier intersects with the market portfolio and (ii) the distance between the adjusted parameters and the sample parameters is minimized. We conclude that the a...

  15. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    Science.gov (United States)

    Farrell, Mary Beth

    2018-06-01

    This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic is, the higher is the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of the P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around the mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution, taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being estimated.
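
    A short numerical companion to the dispersion measures described above: sample SD, standard error, a normal-approximation 95% confidence interval, and an empirical check of the 68-95-99.7 rule on simulated data.

```python
import numpy as np

rng = np.random.default_rng(5)
sample = rng.normal(loc=100.0, scale=15.0, size=10_000)

mean, sd = sample.mean(), sample.std(ddof=1)   # ddof=1 gives the sample SD
se = sd / np.sqrt(len(sample))                 # standard error of the mean
ci95 = (mean - 1.96 * se, mean + 1.96 * se)    # normal-approximation 95% CI

# Empirical check of the 68-95-99.7 rule.
for k in (1, 2, 3):
    frac = np.mean(np.abs(sample - mean) < k * sd)
    print(f"within {k} SD: {frac:.3f}")
print(f"mean={mean:.2f}  sd={sd:.2f}  95% CI=({ci95[0]:.2f}, {ci95[1]:.2f})")
```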

  16. Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.

    Science.gov (United States)

    Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash

    2017-01-01

    Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has long been recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.

  17. 48 CFR 3401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Individual deviations. 3401.403 Section 3401.403 Federal Acquisition Regulations System DEPARTMENT OF EDUCATION ACQUISITION REGULATION GENERAL ED ACQUISITION REGULATION SYSTEM Deviations 3401.403 Individual deviations. An individual...

  18. 48 CFR 1.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Individual deviations. 1.403 Section 1.403 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 1.403 Individual deviations. Individual...

  19. 48 CFR 2501.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2501.403 Section 2501.403 Federal Acquisition Regulations System NATIONAL SCIENCE FOUNDATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 2501.403 Individual deviations. Individual...

  20. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)
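
    The transform itself is specific to transport problems, but the underlying mechanism, sampling from a biased distribution and correcting with weights so the estimate stays unbiased while its variance drops, can be shown on a scalar toy problem. The exponential densities and the tail "detector" below are assumptions for illustration only, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(8)

# Estimate I = E_p[ g(X) ] with p = Exp(1) and a rare-event-ish payoff.
g = lambda x: (x > 5.0).astype(float)   # "detector response": X beyond 5

# Analog Monte Carlo: sample directly from p.
x = rng.exponential(1.0, size=100_000)
analog = g(x)

# Biased sampling from q = Exp(mean 6), which pushes samples toward the
# tail, corrected by weights w = p(x)/q(x); deterministic insight about
# where the "important" region lies would be encoded in q.
lam_q = 1 / 6
xq = rng.exponential(1 / lam_q, size=100_000)
w = np.exp(-xq) / (lam_q * np.exp(-lam_q * xq))
biased = g(xq) * w

print(f"analog: mean={analog.mean():.5f}  var={analog.var():.2e}")
print(f"biased: mean={biased.mean():.5f}  var={biased.var():.2e}")
```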

  1. A random variance model for detection of differential gene expression in small microarray experiments.

    Science.gov (United States)

    Wright, George W; Simon, Richard M

    2003-12-12

    Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately, expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene-by-gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model by which the within-gene variances are drawn from an inverse gamma distribution, whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
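
    A sketch of the conjugacy that makes this kind of model tractable: if the within-gene variance has an inverse-gamma(a, b) prior and the sample variance s² has df degrees of freedom, the posterior is inverse-gamma(a + df/2, b + df·s²/2), so the posterior mean pools gene-level and global information. Here a and b are taken as known, whereas the paper estimates them across all genes, so this is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy setup: per-gene sample variances s2 with df degrees of freedom,
# true variances drawn from an inverse-gamma(a, b) prior.
a, b, df, n_genes = 3.0, 2.0, 4, 1000
true_var = 1.0 / rng.gamma(shape=a, scale=1.0 / b, size=n_genes)
s2 = true_var * rng.chisquare(df, size=n_genes) / df

# Conjugate update: sigma^2 | s2 ~ IG(a + df/2, b + df*s2/2); the
# shrunken estimate below is the posterior mean.
shrunk = (b + df * s2 / 2) / (a + df / 2 - 1)

print("raw s2 MSE:   ", np.mean((s2 - true_var) ** 2))
print("shrunken MSE: ", np.mean((shrunk - true_var) ** 2))
```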

  2. 48 CFR 601.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 601.403 Section 601.403 Federal Acquisition Regulations System DEPARTMENT OF STATE GENERAL DEPARTMENT OF STATE ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 601.403 Individual deviations. The...

  3. Statistical properties of the deviations of f 0 F 2 from monthly medians

    Directory of Open Access Journals (Sweden)

    Y. Tulunay

    2002-06-01

    The deviations of hourly foF2 from monthly medians for 20 stations in Europe during the period 1958-1998 are studied. Spectral analysis is used to show that, both for the original data (for each hour) and for the deviations from monthly medians, the deterministic components are the harmonics of 11 years (solar cycle), 1 year and its harmonics, 27 days, and 12 h 50.49 m (2nd harmonic of the lunar rotation period, L2) periodicities. Using histograms for one-year samples, it is shown that the deviations from monthly medians are nearly zero-mean (mean < 0.5) and approximately Gaussian (relative difference in the range of 10% to 20%), and that their standard deviations are larger for daylight hours (in the range 5-7). It is shown that the amplitude distribution of the positive and negative deviations is nearly symmetrical at night hours, but asymmetrical for day hours. The positive and negative deviations are then studied separately, and it is observed that the positive deviations are nearly independent of R12 except at high latitudes, while negative deviations are modulated by R12. The 90% confidence interval for negative deviations for each station and each hour is computed as a linear model in terms of R12. After correction for local time, it is shown that for all hours the confidence intervals increase with latitude but decrease above 60N. Long-term trend analysis showed that there is an increase in the amplitude of positive deviations from monthly means irrespective of the solar conditions. Using spectral analysis it is also shown that the seasonal dependency of negative deviations is more accentuated than that of positive deviations, especially at low latitudes. At certain stations, it is also observed that the 4th harmonic of 1 year, corresponding to a periodicity of 3 months, which is missing in the foF2 data, appears in the spectra of negative variations.
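
    As a minimal illustration of how spectral analysis isolates such deterministic components, the sketch below builds a synthetic hourly series containing 24-hour and 27-day periodicities and recovers them from periodogram peaks; it is not the authors' processing chain, and the amplitudes and noise level are invented.

```python
import numpy as np

# Synthetic hourly series with 24 h and 27-day periodicities plus noise.
rng = np.random.default_rng(9)
hours = np.arange(24 * 365)
series = (np.sin(2 * np.pi * hours / 24)
          + 0.5 * np.sin(2 * np.pi * hours / (24 * 27))
          + rng.normal(0, 0.3, hours.size))

# Periodogram: the largest peaks mark the deterministic periodicities.
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(hours.size, d=1.0)          # cycles per hour
top = np.argsort(power)[-2:]
for k in sorted(top):
    print(f"period ~ {1 / freqs[k]:8.1f} hours")
```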

  4. 48 CFR 201.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Individual deviations. 201.403 Section 201.403 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM... Individual deviations. (1) Individual deviations, except those described in 201.402(1) and paragraph (2) of...

  5. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    Science.gov (United States)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

    where, and to what extent, the burrow tubes deviate from the sediment matrix. Future research will correlate changes in variance due to bioturbation to other features indicating ocean temperatures and nutrient flux, such as foraminifera counts and oxygen isotope data.

  6. 48 CFR 3001.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Individual deviations... from the FAR and HSAR 3001.403 Individual deviations. Unless precluded by law, executive order, or other regulation, the HCA is authorized to approve individual deviation (except with respect to (FAR) 48...

  7. 48 CFR 1901.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1901.403 Section 1901.403 Federal Acquisition Regulations System BROADCASTING BOARD OF GOVERNORS GENERAL... Individual deviations. Deviations from the IAAR or the FAR in individual cases shall be authorized by the...

  8. Deviation Management: Key Management Subsystem Driver of Knowledge-Based Continuous Improvement in the Henry Ford Production System.

    Science.gov (United States)

    Zarbo, Richard J; Copeland, Jacqueline R; Varney, Ruan C

    2017-10-01

    To develop a business subsystem fulfilling the International Organization for Standardization 15189 nonconformance management regulatory standard, facilitating employee engagement in problem identification and resolution to effect quality improvement and risk mitigation. From 2012 to 2016, the integrated laboratories of the Henry Ford Health System used a quality technical team to develop and improve a management subsystem designed to identify, track, trend, and summarize nonconformances based on frequency, risk, and root cause for elimination at the level of the work. Programmatic improvements and training resulted in markedly increased documentation, culminating in 71,641 deviations in 2016 classified by a taxonomy of 281 defect types into preanalytic (74.8%), analytic (23.6%), and postanalytic (1.6%) testing phases. The top 10 deviations accounted for 55,843 (78%) of the total. Deviation management is a key subsystem of managers' standard work whereby knowledge of nonconformities assists in directing corrective actions and continuous improvements that promote consistent execution and higher levels of performance.

  9. 40 CFR 60.2215 - What else must I report if I have a deviation from the operating limits or the emission limitations?

    Science.gov (United States)

    2010-07-01

    ... performance test was conducted that deviated from any emission limitation. (b) The deviation report must be... deviation from the operating limits or the emission limitations? 60.2215 Section 60.2215 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR...

  10. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  11. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible

  12. 49 CFR 192.943 - When can an operator deviate from these reassessment intervals?

    Science.gov (United States)

    2010-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.943 When can an operator deviate from these reassessment...

  13. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in the degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) to quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) to test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than that of the extreme low families. For cross-sectional data, a DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  14. Baseline mean deviation and rates of visual field change in treated glaucoma patients.

    Science.gov (United States)

    Forchheimer, I; de Moraes, C G; Teng, C C; Folgar, F; Tello, C; Ritch, R; Liebmann, J M

    2011-05-01

    To evaluate the relationships between baseline visual field (VF) mean deviation (MD) and subsequent progression in treated glaucoma. Records of patients seen in a glaucoma practice between 1999 and 2009 were reviewed. Patients with glaucomatous optic neuropathy, baseline VF damage, and ≥8 SITA-standard 24-2 VFs were included. Patients were divided into tertiles based upon baseline MD. Automated pointwise linear regression determined global and localized rates (in decibels [dB] per year) of change. Progression was defined when two or more adjacent test locations in the same hemifield showed a sensitivity decline at a rate of >1.0 dB per year, P0.50) and global rates of VF change of progressing eyes were -1.3±1.2, -1.01±0.7, and -0.9±0.5 dB/year (P=0.09, analysis of variance). Within these groups, intraocular pressure (IOP) in stable vs progressing eyes was 15.5±3.3 vs 17.0±3.1 (P0.50) and multivariate (P=0.26) analyses adjusting for differences in follow-up IOP. After correcting for differences in IOP in treated glaucoma patients, we did not find a relationship between the rate of VF change (dB per year) and the severity of the baseline VF MD. This finding may have been due to more aggressive IOP lowering in eyes with more severe disease. Eyes with lower IOP progressed less frequently across the spectrum of VF loss.

  15. Computer generation of random deviates

    International Nuclear Information System (INIS)

    Cormack, John

    1991-01-01

    The need for random deviates arises in many scientific applications. In medical physics, Monte Carlo simulations have been used in radiology, radiation therapy and nuclear medicine. Specific instances include the modelling of x-ray scattering processes and the addition of random noise to images or curves in order to assess the effects of various processing procedures. Reliable sources of random deviates with statistical properties indistinguishable from true random deviates are a fundamental necessity for such tasks. This paper provides a review of computer algorithms which can be used to generate uniform random deviates and other distributions of interest to medical physicists, along with a few caveats relating to various problems and pitfalls which can occur. Source code listings for the generators discussed (in FORTRAN, Turbo-PASCAL and Data General ASSEMBLER) are available on request from the authors. 27 refs., 3 tabs., 5 figs

  16. Estimates for Genetic Variance Components in Reciprocal Recurrent Selection in Populations Derived from Maize Single-Cross Hybrids

    Directory of Open Access Journals (Sweden)

    Matheus Costa dos Reis

    2014-01-01

    This study was carried out to obtain estimates of the genetic variance and covariance components related to intra- and interpopulation effects in the original populations (C0) and in the third cycle (C3) of reciprocal recurrent selection (RRS), which allows breeders to define the best breeding strategy. For that purpose, half-sib progenies of intrapopulation (P11 and P22) and interpopulation (P12 and P21) crosses from populations 1 and 2, derived from single-cross hybrids in cycles 0 and 3 of the reciprocal recurrent selection program, were used. The intra- and interpopulation progenies were evaluated in a 10×10 triple lattice design at two separate locations. Data for unhusked ear weight (ear weight without husk) and plant height were collected. All genetic variance and covariance components were estimated from the expected mean squares. The breakdown of additive variance into intrapopulation and interpopulation additive deviations (στ²), together with the covariance between these and their intrapopulation additive effects (CovAτ), showed a predominance of the dominance effect for unhusked ear weight. For plant height, these components show that the intrapopulation additive effect explains most of the variation. Estimates of the intrapopulation and interpopulation additive genetic variances confirm that populations derived from single-cross hybrids have potential for recurrent selection programs.

  17. A Framework for Establishing Standard Reference Scale of Texture by Multivariate Statistical Analysis Based on Instrumental Measurement and Sensory Evaluation.

    Science.gov (United States)

    Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye

    2016-01-13

    A framework for establishing a standard reference scale of texture is proposed, using multivariate statistical analysis of instrumental measurements and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for the texture attribute hardness with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (the TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the regression coefficient between the estimated sensory value and the instrumentally measured value is significant (R² = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.

  18. Large deviation estimates for a Non-Markovian Lévy generator of big order

    International Nuclear Information System (INIS)

    Léandre, Rémi

    2015-01-01

    We give large deviation estimates for a non-Markovian convolution semi-group with a non-local generator of Lévy type of big order and with the standard normalisation of semi-classical analysis. No stochastic process is associated with this semi-group. (paper)

  19. Surgical Success Rates for Horizontal Concomitant Deviations According to the Type and Degree of Deviation

    Directory of Open Access Journals (Sweden)

    İhsan Çaça

    2004-01-01

    We evaluated the correlation between surgical success rates and the type and degree of deviation in horizontal concomitant deviations. 104 horizontal concomitant strabismus cases operated on in our clinic between January 1994 and December 2000 were included in the study. 56 cases underwent a recession-resection procedure in the same eye; 19 cases, two-muscle recession and one-muscle resection; 20 cases, two-muscle recession; and 9 cases, one-muscle recession only. A residual deviation of 10 prism diopters or less at the postoperative sixth-month examination was accepted as surgical success. The surgical success rate was 90% and 89.3% in cases with a deviation angle of 15-30 and 31-50 prism diopters, respectively. The success rate was 78.9% when the angle exceeded 50 prism diopters. By strabismus type, the surgical success rates were 88.33% in alternating esotropia, 84.6% in alternating exotropia, 88% in monocular esotropia, and 83.3% in monocular exotropia. No statistically significant difference was found between strabismus type and surgical success rate. Binocular vision was gained in 51.8% of cases after treatment. In strabismus surgery, the preoperative deviation angle was found to be a factor affecting the success rate.

  20. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
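
    A concrete example of the kind of approximation error at issue: for an AND gate with independent inputs, the first-order (delta-method) variance of the top event can be compared against the exact variance of a product of independent random variables. The specific techniques examined in the paper are not given here, so this is a generic illustration with invented input moments.

```python
import numpy as np

# Two independent basic-event probabilities with given means/variances.
m1, v1 = 0.10, 0.02**2
m2, v2 = 0.20, 0.05**2

# AND gate: top = p1 * p2.
# First-order (delta-method) approximation of the top event's variance:
var_first_order = m2**2 * v1 + m1**2 * v2

# Exact variance for a product of independent random variables:
# Var(XY) = (m1^2 + v1)(m2^2 + v2) - m1^2 m2^2
var_exact = (m1**2 + v1) * (m2**2 + v2) - m1**2 * m2**2

# Monte Carlo check (lognormal inputs chosen for illustration).
rng = np.random.default_rng(7)
def lognorm(mean, var, size):
    s2 = np.log(1 + var / mean**2)        # lognormal shape from mean/var
    mu = np.log(mean) - s2 / 2
    return rng.lognormal(mu, np.sqrt(s2), size)
top = lognorm(m1, v1, 200_000) * lognorm(m2, v2, 200_000)

print(f"first-order: {var_first_order:.3e}")
print(f"exact:       {var_exact:.3e}")
print(f"Monte Carlo: {top.var():.3e}")
```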

  1. Correlation of pattern reversal visual evoked potential parameters with the pattern standard deviation in primary open angle glaucoma.

    Science.gov (United States)

    Kothari, Ruchi; Bokariya, Pradeep; Singh, Ramji; Singh, Smita; Narang, Purvasha

    2014-01-01

    To evaluate whether glaucomatous visual field defects, particularly the pattern standard deviation (PSD) of the Humphrey visual field, could be associated with visual evoked potential (VEP) parameters of patients having primary open angle glaucoma (POAG). Visual fields by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potentials (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for VEP recordings consisted of the transient pattern reversal method in which a black and white checkerboard pattern was generated (full field) and displayed on a VEP monitor (colour, 14″) by an electronic pattern regenerator built into an evoked potential recorder (RMS EMG EP MARK II). The results of our study indicate that there is a highly significant (P<0.001) negative correlation of P100 amplitude and a statistically significant (P<0.05) positive correlation of N70 latency, P100 latency and N155 latency with the PSD of the Humphrey visual field in subjects with POAG in various age groups, as evaluated by Student's t-test. Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values. Conversely, as PSD increases, the magnitude of VEP excursions was found to be diminished.

  2. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    Science.gov (United States)

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small samples, allowing researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
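
    The record above omits the exact small-sample cutoff, but for i.i.d. normal data the quantity in question has a simple closed form: since the sample mean is distributed as N(μ, σ²/n), the probability that it falls within f standard deviations of the true mean is 2Φ(f√n) − 1. A minimal sketch of that calculation (the normality assumption is ours, and this is not the authors' code):

```python
from math import sqrt
from statistics import NormalDist

def prob_within_fraction(n: int, f: float) -> float:
    """P(|sample mean - true mean| <= f * sigma) for an i.i.d. normal
    sample of size n; the sample mean is N(mu, sigma^2 / n), so the
    probability reduces to 2*Phi(f*sqrt(n)) - 1."""
    return 2.0 * NormalDist().cdf(f * sqrt(n)) - 1.0

# e.g. with n = 10 observations, the chance the sample mean lies
# within 0.5 standard deviations of the true mean:
print(round(prob_within_fraction(10, 0.5), 3))  # ~0.886
```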

  3. 40 CFR 260.32 - Variances to be classified as a boiler.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Variances to be classified as a boiler... be classified as a boiler. In accordance with the standards and criteria in § 260.10 (definition of “boiler”), and the procedures in § 260.33, the Administrator may determine on a case-by-case basis that...

  4. Entanglement transitions induced by large deviations

    Science.gov (United States)

    Bhosale, Udaysinh T.

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN²Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. The corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. The effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ₁₂^Γ. The density of states of ρ₁₂^Γ is found to be close to Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.

  5. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.

  6. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  7. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  8. Comparison of setup deviations for two thermoplastic immobilization masks in glottis cancer

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jae Hong [Dept. of Biomedical Engineering, College of Medicine, The Catholic University, Seoul (Korea, Republic of)

    2017-03-15

    The purpose of this study was to compare the patient setup deviations of two different types of thermoplastic immobilization mask for glottis cancer in intensity-modulated radiation therapy (IMRT). A total of 16 glottis cancer cases were divided into two groups based on the applied mask type: a standard group and an alternative group. The mean error (M), three-dimensional setup displacement error (3D-error), systematic error (Σ) and random error (σ) were calculated for each group, and the setup margin (mm) was also analyzed. The 3D-errors were 5.2 ± 1.3 mm and 5.9 ± 0.7 mm for the standard and alternative groups, respectively; the alternative group was 13.6% higher than the standard group. The systematic errors in the roll angle and the x, y, z directions were 0.8°, 1.7 mm, 1.0 mm, and 1.5 mm in the standard group and 0.8°, 1.1 mm, 1.8 mm, and 2.0 mm in the alternative group. The random errors in the x, y, z directions were 10.9%, 1.7%, and 23.1% lower in the alternative group than in the standard group. However, the absolute rotational angle (i.e., roll) in the alternative group was 12.4% higher than in the standard group. For the calculated setup margin, the alternative group was 31.8% lower than the standard group in the x direction. In contrast, the y and z directions were 52.6% and 21.6% higher than in the standard group. Although using a modified thermoplastic immobilization mask can affect patient setup deviation in terms of these numerical results, immobilization masks still need to be investigated from various clinical points of view.

  9. 22 CFR 226.4 - Deviations.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Deviations. 226.4 Section 226.4 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT ADMINISTRATION OF ASSISTANCE AWARDS TO U.S. NON-GOVERNMENTAL ORGANIZATIONS General § 226.4 Deviations. The Office of Management and Budget (OMB) may grant exceptions for...

  10. Moving standard deviation and moving sum of outliers as quality tools for monitoring analytical precision.

    Science.gov (United States)

    Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-01

    An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) to the patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and a recently described moving sum of outlier (movSO) patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and the average of normal (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it generally had the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CVa. In conclusion, the movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risks of an increase in analytical imprecision are attenuated for these measurands, as an increased analytical imprecision will only add marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
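
    A minimal sketch of the movSD idea described above: compute the standard deviation over a sliding window of consecutive patient results and flag when it exceeds a control limit. The window length and control limit below are hypothetical choices, not values from the paper:

```python
import numpy as np

def moving_sd(results: np.ndarray, window: int = 50) -> np.ndarray:
    """Standard deviation over a sliding window of consecutive patient
    results; each value summarizes the most recent `window` results
    (positions before the first full window are left as NaN)."""
    out = np.full(results.shape, np.nan)
    for i in range(window - 1, len(results)):
        out[i] = np.std(results[i - window + 1 : i + 1], ddof=1)
    return out

# Flag a possible increase in analytical imprecision when the moving
# SD exceeds a limit derived from a stable baseline period.
rng = np.random.default_rng(0)
baseline = rng.normal(100, 5, 500)        # stable assay period
drifted = rng.normal(100, 8, 200)         # imprecision has increased
stream = np.concatenate([baseline, drifted])
msd = moving_sd(stream)
limit = np.nanmean(msd[:500]) * 1.5       # hypothetical control limit
print(np.nanargmax(msd > limit))          # first flagged index (~500+)
```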

  11. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    Science.gov (United States)

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable alone is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
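
    The Box-Cox transformation named above is available in standard libraries. A small sketch, assuming scipy and a simulated concentration-response data set whose variance shrinks with the mean (the data and dose-response shape are hypothetical, not the paper's; the Poisson alternative is omitted here):

```python
import numpy as np
from scipy import stats

# Simulated responses whose spread is tied to the mean, mimicking the
# reduced variance at high-effect concentrations described above.
rng = np.random.default_rng(1)
conc = np.repeat([0.1, 1.0, 10.0, 100.0], 10)
mean_resp = 100.0 / (1.0 + conc / 5.0)          # declining response
resp = rng.normal(mean_resp, 0.15 * mean_resp)  # variance tied to mean

# Box-Cox picks the power transform that best normalizes the response;
# the fitted lambda can then be carried into a nonlinear regression fit.
transformed, lam = stats.boxcox(resp)
print(f"estimated Box-Cox lambda: {lam:.2f}")
```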

  12. Analysis of covariance with pre-treatment measurements in randomized trials under the cases that covariances and post-treatment variances differ between groups.

    Science.gov (United States)

    Funatogawa, Takashi; Funatogawa, Ikuko; Shyr, Yu

    2011-05-01

    When primary endpoints of randomized trials are continuous variables, the analysis of covariance (ANCOVA) with pre-treatment measurements as a covariate is often used to compare two treatment groups. In the ANCOVA, equal slopes (coefficients of pre-treatment measurements) and equal residual variances are commonly assumed. However, random allocation guarantees only equal variances of pre-treatment measurements. Unequal covariances and variances of post-treatment measurements indicate unequal slopes and, usually, unequal residual variances. For non-normal data with unequal covariances and variances of post-treatment measurements, it is known that the ANCOVA with equal slopes and equal variances using an ordinary least-squares method provides an asymptotically normal estimator for the treatment effect. However, the asymptotic variance of the estimator differs from the variance estimated from a standard formula, and its property is unclear. Furthermore, the asymptotic properties of the ANCOVA with equal slopes and unequal variances using a generalized least-squares method are unclear. In this paper, we consider non-normal data with unequal covariances and variances of post-treatment measurements, and examine the asymptotic properties of the ANCOVA with equal slopes using the variance estimated from a standard formula. Analytically, we show that the actual type I error rate, thus the coverage, of the ANCOVA with equal variances is asymptotically at a nominal level under equal sample sizes. That of the ANCOVA with unequal variances using a generalized least-squares method is asymptotically at a nominal level, even under unequal sample sizes. In conclusion, the ANCOVA with equal slopes can be asymptotically justified under random allocation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
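
    For context, the equal-slopes ANCOVA the abstract refers to is a one-line model fit. A hedged sketch with simulated data in which the true slopes differ between groups (statsmodels is our choice of tool, and the data and effect sizes are hypothetical, not the paper's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a randomized trial where the treatment changes the slope of
# the pre-treatment covariate (unequal covariances between groups).
rng = np.random.default_rng(7)
n = 200
group = np.repeat([0, 1], n)
pre = rng.normal(50, 10, 2 * n)
slope = np.where(group == 0, 0.4, 0.8)              # unequal true slopes
post = 5.0 * group + slope * pre + rng.normal(0, 8, 2 * n)
df = pd.DataFrame({"post": post, "pre": pre, "group": group})

# Common-slope ANCOVA: post ~ group + pre. Under randomization the
# group coefficient still targets the average treatment effect
# (here 5 + 0.4 * E[pre] = 25), consistent with the abstract's point.
fit = smf.ols("post ~ C(group) + pre", data=df).fit()
print(fit.params["C(group)[T.1]"])
```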

  13. Improving precision in gel electrophoresis by stepwisely decreasing variance components.

    Science.gov (United States)

    Schröder, Simone; Brandmüller, Asita; Deng, Xi; Ahmed, Aftab; Wätzig, Hermann

    2009-10-15

    Many methods have been developed in order to increase selectivity and sensitivity in proteome research. However, gel electrophoresis (GE) which is one of the major techniques in this area, is still known for its often unsatisfactory precision. Percental relative standard deviations (RSD%) up to 60% have been reported. In this case the improvement of precision and sensitivity is absolutely essential, particularly for the quality control of biopharmaceuticals. Our work reflects the remarkable and completely irregular changes of the background signal from gel to gel. This irregularity was identified as one of the governing error sources. These background changes can be strongly reduced by using a signal detection in the near-infrared (NIR) range. This particular detection method provides the most sensitive approach for conventional CCB (Colloidal Coomassie Blue) stained gels, which is reflected in a total error of just 5% (RSD%). In order to further investigate variance components in GE, an experimental Plackett-Burman screening design was performed. The influence of seven potential factors on the precision was investigated using 10 proteins with different properties analyzed by NIR detection. The results emphasized the individuality of the proteins. Completely different factors were identified to be significant for each protein. However, out of seven investigated parameters, just four showed a significant effect on some proteins, namely the parameters of: destaining time, staining temperature, changes of detergent additives (SDS and LDS) in the sample buffer, and the age of the gels. As a result, precision can only be improved individually for each protein or protein classes. Further understanding of the unique properties of proteins should enable us to improve the precision in gel electrophoresis.

  14. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and can achieve the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consists of weekly returns of 20 component stocks of FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at minimum level of risk with the constructed optimal mean-variance portfolio.
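
    A minimal sketch of the mean-variance model described above: minimize the portfolio variance w'Σw subject to full investment, no short sales, and a target mean return. The data below are simulated weekly returns, not the FBMKLCI component returns used in the study:

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(returns: np.ndarray, target: float) -> np.ndarray:
    """Minimize portfolio variance w' cov w subject to weights summing
    to 1, no short sales, and a target mean return."""
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)
    n = len(mu)
    cons = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},    # fully invested
        {"type": "eq", "fun": lambda w: w @ mu - target},  # target return
    ]
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

# Hypothetical weekly returns for 5 stocks (rows: weeks, cols: stocks).
rng = np.random.default_rng(2)
weekly = rng.normal(0.002, 0.02, size=(104, 5))
w = min_variance_weights(weekly, target=0.002)
print(w.round(3), w.sum().round(3))
```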

  15. Deviations from thermal equilibrium in plasmas

    International Nuclear Information System (INIS)

    Burm, K.T.A.L.

    2004-01-01

    A plasma system in local thermal equilibrium can usually be described with only two parameters. To describe deviations from equilibrium two extra parameters are needed. However, it will be shown that deviations from temperature equilibrium and deviations from Saha equilibrium depend on one another. As a result, non-equilibrium plasmas can be described with three parameters. This reduction in parameter space will ease the plasma describing effort enormously

  16. Same Traits, Different Variance

    Directory of Open Access Journals (Sweden)

    Jamie S. Churchyard

    2014-02-01

    Full Text Available Personality trait questionnaires are regularly used in individual differences research to examine personality scores between participants, although trait researchers tend to place little value on intra-individual variation in item ratings within a measured trait. The few studies that examine variability indices have not considered how they are related to a selection of psychological outcomes, so we recruited 160 participants (age M = 24.16, SD = 9.54 who completed the IPIP-HEXACO personality questionnaire and several outcome measures. Heterogenous within-subject differences in item ratings were found for every trait/facet measured, with measurement error that remained stable across the questionnaire. Within-subject standard deviations, calculated as measures of individual variation in specific item ratings within a trait/facet, were related to outcomes including life satisfaction and depression. This suggests these indices represent valid constructs of variability, and that researchers administering behavior statement trait questionnaires with outcome measures should also apply item-level variability indices.

  17. Deviating measurements in radiation protection. Legal assessment of deviations in radiation protection measurements

    International Nuclear Information System (INIS)

    Hoegl, A.

    1996-01-01

    This study investigates how, from a legal point of view, deviations in radiation protection measurements should be treated in comparisons between measured results and limits stipulated by nuclear legislation or goods transport regulations. A case-by-case distinction is proposed which is based on the legal concequences of the respective measurement. Commentaries on nuclear law contain no references to the legal assessment of deviating measurements in radiation protection. The examples quoted in legal commentaries on civil and criminal proceedings of the way in which errors made in measurements for speed control and determinations of the alcohol content in the blood are to be taken into account, and a commentary on ozone legislation, are examined for analogies with radiation protection measurements. Leading cases in the nuclear field are evaluated in the light of the requirements applying in case of deviations in measurements. The final section summarizes the most important findings and conclusions. (orig.) [de

  18. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    Science.gov (United States)

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it has attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, redressed as the Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. Further, in this field, it demonstrated superior accuracy and sensitivity, better than the most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview of their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.
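
    For reference, the MAVAR is commonly estimated from N time-error samples x_i taken at interval τ₀, at observation time τ = nτ₀, by the standard estimator below; TVAR then follows as a rescaling. This is the textbook form of the estimator, stated here as background rather than reproduced from the paper:

```latex
% Standard MAVAR estimator and its TVAR rescaling (reference form):
\operatorname{Mod}\sigma_y^2(n\tau_0) =
  \frac{1}{2\,n^4 \tau_0^2 \,(N - 3n + 1)}
  \sum_{j=1}^{N-3n+1}
    \left[ \sum_{i=j}^{j+n-1} \bigl( x_{i+2n} - 2x_{i+n} + x_i \bigr) \right]^2,
\qquad
\mathrm{TVAR}(\tau) = \frac{\tau^2}{3}\,\operatorname{Mod}\sigma_y^2(\tau).
```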

  19. A generalized Levene's scale test for variance heterogeneity in the presence of sample correlation and group uncertainty.

    Science.gov (United States)

    Soave, David; Sun, Lei

    2017-09-01

    We generalize Levene's test for variance (scale) heterogeneity between k groups for more complex data, when there are sample correlation and group membership uncertainty. Following a two-stage regression framework, we show that least absolute deviation regression must be used in the stage 1 analysis to ensure a correct asymptotic χ²_{k-1}/(k-1) distribution of the generalized scale (gS) test statistic. We then show that the proposed gS test is independent of the generalized location test, under the joint null hypothesis of no mean and no variance heterogeneity. Consequently, we generalize the recently proposed joint location-scale (gJLS) test, valuable in settings where there is an interaction effect but one interacting variable is not available. We evaluate the proposed method via an extensive simulation study and two genetic association application studies. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
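
    The classical special case of this scale test (independent samples, known group membership) is easy to sketch: stage 1 takes absolute deviations from each group's median, which is the least-absolute-deviation fit when group membership is the only predictor, and stage 2 compares their means across groups. The gS generalization for sample correlation and group uncertainty is more involved and is not reproduced here:

```python
import numpy as np
from scipy import stats

def levene_scale_test(groups):
    """Classical Levene/Brown-Forsythe-style scale test. Stage 1:
    absolute deviations from each group's median (the LAD fit with
    group membership as the only predictor). Stage 2: one-way ANOVA
    on those deviations."""
    abs_dev = [np.abs(g - np.median(g)) for g in groups]
    return stats.f_oneway(*abs_dev)

rng = np.random.default_rng(3)
g1 = rng.normal(0, 1.0, 60)
g2 = rng.normal(0, 1.0, 60)
g3 = rng.normal(0, 2.0, 60)   # heterogeneous variance
print(levene_scale_test([g1, g2, g3]))
```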

  20. Changes in deviation of absorbed dose to water among users by chamber calibration shift.

    Science.gov (United States)

    Katayose, Tetsurou; Saitoh, Hidetoshi; Igari, Mitsunobu; Chang, Weishan; Hashimoto, Shimpei; Morioka, Mie

    2017-07-01

    The JSMP01 dosimetry protocol had adopted the provisional 60Co calibration coefficient [Formula: see text], namely, the product of the exposure calibration coefficient N_C and the conversion coefficient k_{D,X}. After that, the absorbed dose to water (D_w) standard was established, and the JSMP12 protocol adopted the [Formula: see text] calibration. In this study, the influence of the calibration shift on the measurement of D_w among users was analyzed. An intercomparison of D_w using an ionization chamber was performed annually by visiting related hospitals. Intercomparison results before and after the calibration shift were analyzed, the deviation of D_w among users was re-evaluated, and the cause of the deviation was estimated. As a result, the stability of the LINAC, the calibration of the thermometer and barometer, and the collection method for ion recombination were confirmed. Statistical significance of the standard deviation of D_w was not observed, but a significant difference of D_w among users was observed between the N_C and [Formula: see text] calibrations. Uncertainty due to chamber-to-chamber variation was reduced by the calibration shift, consequently reducing the uncertainty among users regarding D_w. The results also pointed out that uncertainty might be further reduced by accurate and detailed instructions on the setup of an ionization chamber.

  1. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  2. New reference charts for testicular volume in Dutch children and adolescents allow the calculation of standard deviation scores.

    Science.gov (United States)

    Joustra, Sjoerd D; van der Plas, Evelyn M; Goede, Joery; Oostdijk, Wilma; Delemarre-van de Waal, Henriette A; Hack, Wilfried W M; van Buuren, Stef; Wit, Jan M

    2015-06-01

    Accurate calculations of testicular volume standard deviation (SD) scores are not currently available. We constructed LMS-smoothed age-reference charts for testicular volume in healthy boys. The LMS method was used to calculate reference data, based on testicular volumes measured by ultrasonography and Prader orchidometer in 769 healthy Dutch boys aged 6 months to 19 years. We also explored the association between testicular growth and pubic hair development, and data were compared to orchidometric testicular volumes from the 1997 Dutch nationwide growth study. The LMS-smoothed reference charts showed that no revision of the definition of normal onset of male puberty (from nine to 14 years of age) was warranted. In healthy boys, the pubic hair stage SD scores corresponded with testicular volume SD scores (r = 0.394). However, testes were relatively small for pubic hair stage in Klinefelter's syndrome and relatively large in immunoglobulin superfamily member 1 deficiency syndrome. The age-corrected SD scores for testicular volume will aid in the diagnosis and follow-up of abnormalities in the timing and progression of male puberty and in research evaluations. The SD scores can be compared with pubic hair SD scores to identify discrepancies between cell functions that result in relative microorchidism or macroorchidism. ©2015 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
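
    SD scores derived from LMS reference values follow a standard formula: z = ((y/M)^L − 1)/(L·S), with z = ln(y/M)/S when L = 0. A brief sketch; the L, M and S values below are hypothetical placeholders, not the published Dutch reference values:

```python
from math import log

def lms_z_score(y: float, L: float, M: float, S: float) -> float:
    """Convert a measurement y to an SD score via the LMS method:
    z = ((y/M)**L - 1) / (L*S) for L != 0, else z = log(y/M) / S."""
    if L == 0:
        return log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

# Hypothetical L, M, S values for one age group (placeholders only):
print(round(lms_z_score(y=12.0, L=0.5, M=10.0, S=0.25), 2))  # ~0.76
```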

  3. Correlation of pattern reversal visual evoked potential parameters with the pattern standard deviation in primary open angle glaucoma

    Directory of Open Access Journals (Sweden)

    Ruchi Kothari

    2014-04-01

    Full Text Available AIM: To evaluate whether glaucomatous visual field defect, particularly the pattern standard deviation (PSD) of the Humphrey visual field, could be associated with visual evoked potential (VEP) parameters of patients having primary open angle glaucoma (POAG). METHODS: Visual field by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potential (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for VEP recordings consisted of the transient pattern reversal method in which a black and white checkerboard pattern was generated (full field) and displayed on a VEP monitor (colour 14”) by an electronic pattern regenerator inbuilt in an evoked potential recorder (RMS EMG EP MARK II). RESULTS: The results of our study indicate that there is a highly significant (P<0.001) negative correlation of P100 amplitude and a statistically significant (P<0.05) positive correlation of N70 latency, P100 latency and N155 latency with the PSD of the Humphrey visual field in the subjects of POAG in various age groups as evaluated by Student’s t-test. CONCLUSION: Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values. Conversely, as PSD increased, the magnitude of VEP excursions was found to be diminished.

  4. Periodic-orbit theory of the number variance Σ²(L) of strongly chaotic systems

    International Nuclear Information System (INIS)

    Aurich, R.; Steiner, F.

    1994-03-01

    We discuss the number variance Σ²(L) and the spectral form factor F(τ) of the energy levels of bound quantum systems whose classical counterparts are strongly chaotic. Exact periodic-orbit representations of Σ²(L) and F(τ) are derived which explain the breakdown of universality, i.e., the deviations from the predictions of random-matrix theory. The relation of the exact spectral form factor F(τ) to the commonly used approximation K(τ) is clarified. As an illustration the periodic-orbit representations are tested in the case of a strongly chaotic system at low and high energies including very long-range correlations up to L = 700. Good agreement between 'experimental' data and theory is obtained. (orig.)

  5. Feynman variance for neutrons emitted from photo-fission initiated fission chains - a systematic simulation for selected special nuclear materials

    Energy Technology Data Exchange (ETDEWEB)

    Soltz, R. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Danagoulian, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sheets, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Korbly, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hartouni, E. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-05-22

    Theoretical calculations indicate that the value of the Feynman variance, Y2F, for the emitted distribution of neutrons from fissionable material exhibits a strong monotonic dependence on the multiplication, M, of a quantity of special nuclear material. In 2012 we performed a series of measurements at the Passport Inc. facility using a 9-MeV bremsstrahlung CW beam of photons incident on small quantities of uranium with liquid scintillator detectors. For the set of objects studied we observed deviations from the expected monotonic dependence, and these deviations were later confirmed by MCNP simulations. In this report, we modify the theory to account for the contribution from the initial photo-fission and benchmark the new theory with a series of MCNP simulations on DU, LEU, and HEU objects spanning a wide range of masses and multiplication values.

  6. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites at genome-wide significance; only a subset of these were also associated with mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and, for some vSNPs, multiple low-frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.

  7. 9 CFR 318.308 - Deviations in processing.

    Science.gov (United States)

    2010-01-01

    ...) Deviations in processing (or process deviations) must be handled according to: (1)(i) A HACCP plan for canned...) of this section. (c) [Reserved] (d) Procedures for handling process deviations where the HACCP plan... accordance with the following procedures: (a) Emergency stops. (1) When retort jams or breakdowns occur...

  8. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions

  9. Improved differentiation between hepatic hemangioma and metastases on diffusion-weighted MRI by measurement of standard deviation of apparent diffusion coefficient.

    Science.gov (United States)

    Hardie, Andrew D; Egbert, Robert E; Rissing, Michael S

    2015-01-01

    Diffusion-weighted magnetic resonance imaging (DW-MR) can be useful in the differentiation of hemangiomata from liver metastasis, but improved methods other than the mean apparent diffusion coefficient (mADC) are needed. A retrospective review identified 109 metastatic liver lesions and 86 hemangiomata in 128 patients who had undergone DW-MR. For each lesion, the mADC and the standard deviation of the mean ADC (sdADC) were recorded and compared by receiver operating characteristic analysis. Mean mADC was higher in benign hemangiomata (1.52 ± 0.12 mm²/s) than in liver metastases (1.33 ± 0.18 mm²/s), but there was significant overlap in values. The mean sdADC was lower in hemangiomata (101 ± 17 mm²/s) than metastases (245 ± 25 mm²/s) and demonstrated no overlap in values, which was significantly different (P<.0001). Hemangiomata may be better differentiated from liver metastases on the basis of sdADC than of mADC, although further studies are needed. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Intercentre variance in patient reported outcomes is lower than objective rheumatoid arthritis activity measures

    DEFF Research Database (Denmark)

    Khan, Nasim Ahmed; Spencer, Horace Jack; Nikiphorou, Elena

    2017-01-01

    Objective: To assess intercentre variability in the ACR core set measures, DAS28 based on three variables (DAS28v3) and Routine Assessment of Patient Index Data 3 in a multinational study. Methods: Seven thousand and twenty-three patients were recruited (84 centres; 30 countries) using a standard...... built to adjust for the remaining ACR core set measure (for each ACR core set measure or each composite index), socio-demographics and medical characteristics. ANOVA and analysis of covariance models yielded similar results, and ANOVA tables were used to present variance attributable to recruiting...... centre. Results: The proportion of variances attributable to recruiting centre was lower for patient reported outcomes (PROs: pain, HAQ, patient global) compared with objective measures (joint counts, ESR, physician global) in all models. In the full model, variance in PROs attributable to recruiting...

  11. Flagged uniform particle splitting for variance reduction in proton and carbon ion track-structure simulations

    Science.gov (United States)

    Ramos-Méndez, José; Schuemann, Jan; Incerti, Sebastien; Paganetti, Harald; Schulte, Reinhard; Faddegon, Bruce

    2017-08-01

    deviations) for endpoints (1) and (2), within 2% (1 standard deviation) for endpoint (3). In conclusion, standard particle splitting variance reduction techniques can be successfully implemented in Monte Carlo track structure codes.

  12. The recursive combination filter approach of pre-processing for the estimation of standard deviation of RR series.

    Science.gov (United States)

    Mishra, Alok; Swati, D

    2015-09-01

    Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we propose a combination filter of a recursive impulse rejection filter and a recursive 20% filter, applied recursively and with preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We tested this novel recursive combinational method, with median-value replacement, for estimating the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over a single use of the impulse rejection filter and removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We found the 22 ms value of SDNN and the 36 ms value of the SD2 descriptor of the Poincaré plot to be clinical indicators for discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index: Lyapunov exponents calculated after the proposed pre-processing change in a way that begins to reflect the less complex behaviour of diseased states.
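
    The SDNN and Poincaré descriptors named above have standard definitions. A brief sketch, applied to an RR series assumed to be already cleaned of ectopic beats and artefacts (the combination-filter step itself is not reproduced here):

```python
import numpy as np

def hrv_descriptors(rr_ms: np.ndarray):
    """SDNN and Poincare-plot descriptors from an RR-interval series
    (ms). SD1 reflects short-term, SD2 long-term variability."""
    sdnn = np.std(rr_ms, ddof=1)
    diff = np.diff(rr_ms)
    sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)
    sd2 = np.sqrt(max(2.0 * sdnn**2 - np.var(diff, ddof=1) / 2.0, 0.0))
    return sdnn, sd1, sd2, sd1 / sd2

# Synthetic RR series in milliseconds (illustration only).
rr = np.random.default_rng(4).normal(800, 40, 300)
print([round(v, 1) for v in hrv_descriptors(rr)])
```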

  13. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  14. Analysis of standard substance human hair

    International Nuclear Information System (INIS)

    Zou Shuyun; Zhang Yongbao

    2005-01-01

    The human hair samples as standard substances were analyzed by the neutron activation analysis (NAA) on the miniature neutron source reactor. 19 elements, i.e. Al, As, Ba, Br, Ca, Cl, Cr, Co, Cu, Fe, Hg, I, Mg, Mn, Na, S, Se, V and Zn, were measured. The average content, standard deviation, relative standard deviation and the detection limit under the present research conditions were given for each element, and the results showed that the measured values of the samples were in agreement with the recommended values, which indicated that NAA can be used to analyze standard substance human hair with a relatively high accuracy. (authors)

  15. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
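
    The core of this argument is easy to reproduce by simulation: the SD from a small pilot sample underestimates the population SD more often than not, while an upper confidence limit of the SD protects the power calculation. A sketch assuming a population SD of 44 as in the abstract; the pilot size and replication count are our choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sigma, n_pilot, reps = 44.0, 15, 10_000
sds = np.array([rng.normal(0, sigma, n_pilot).std(ddof=1)
                for _ in range(reps)])

# The sampling distribution of the SD is skewed, so a single pilot SD
# falls below sigma well over half the time:
print((sds < sigma).mean())

# One-sided 80% upper confidence limit of the SD from each pilot
# sample: sqrt((n-1) * s^2 / chi2_{0.20, n-1}); covers sigma ~80%.
chi2_lo = stats.chi2.ppf(0.20, df=n_pilot - 1)
ucl80 = sds * np.sqrt((n_pilot - 1) / chi2_lo)
print((ucl80 >= sigma).mean())  # ~0.8 by construction
```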

  16. Large deviations for noninteracting infinite-particle systems

    International Nuclear Information System (INIS)

    Donsker, M.D.; Varadhan, S.R.S.

    1987-01-01

    A large deviation property is established for noninteracting infinite particle systems. Previous large deviation results obtained by the authors involved a single I-function because the cases treated always involved a unique invariant measure for the process. In the context of this paper there is an infinite family of invariant measures and a corresponding infinite family of I-functions governing the large deviations

  17. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...

  18. A CORRECTION TO THE STANDARD GALACTIC REDDENING MAP: PASSIVE GALAXIES AS STANDARD CRAYONS

    International Nuclear Information System (INIS)

    Peek, J. E. G.; Graves, Genevieve J.

    2010-01-01

    We present corrections to the Schlegel et al. (SFD98) reddening maps over the Sloan Digital Sky Survey (SDSS) northern Galactic cap area. To find these corrections, we employ what we call the 'standard crayon' method, in which we use passively evolving galaxies as color standards to measure deviations from the reddening map. We select these passively evolving galaxies spectroscopically, using limits on the Hα and [O II] equivalent widths to remove all star-forming galaxies from the SDSS main galaxy catalog. We find that by correcting for known reddening, redshift, the color-magnitude relation, and the variation of color with environmental density, we can reduce the scatter in color to below 3% in the bulk of the 151,637 galaxies that we select. Using these galaxies, we construct maps of the deviation from the SFD98 reddening map at 4°.5 resolution, with a 1σ error of ∼1.5 mmag in E(B - V). We find that the SFD98 maps are largely accurate, with most of the map having deviations below 3 mmag in E(B - V), though some regions do deviate from SFD98 by as much as 50%. The maximum deviation found is 45 mmag in E(B - V), and the spatial structure of the deviation is strongly correlated with the observed dust temperature, such that SFD98 underpredicts reddening in regions of low dust temperature. Our maps of these deviations, as well as their errors, are made available to the scientific community on the Web as a supplemental correction to SFD98.

  19. 48 CFR 1401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations. 1401.403 Section 1401.403 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL DEPARTMENT OF THE INTERIOR ACQUISITION REGULATION SYSTEM Deviations from the FAR and DIAR 1401.403 Individual...

  20. Developments in the Control Loops Benchmarking

    OpenAIRE

    Bialic, Grzegorz; Błachuta, Marian B.

    2008-01-01

    In this chapter some developments in control performance assessment are presented. A solution based on quadratic performance criteria that take control effort into account is proposed as an alternative to the popular MV (minimum variance) measure. This leads to the definition of a trade-off curve using the standard deviations of both the control and the error signals. The standard deviation is preferred because it characterizes the signal better than the variance.

  1. TERMINOLOGY MANAGEMENT FRAMEWORK DEVIATIONS IN PROJECTS

    Directory of Open Access Journals (Sweden)

    Олена Борисівна ДАНЧЕНКО

    2015-05-01

    Full Text Available The article reviews new approaches to managing deviations in projects (risks, changes, problems). It proposes integrated control of these project parameters and, by analogy with medical terminological systems, builds a new system for managing the terminology of deviations in projects. Using an improved method of definition triads, the medical terms that make up the terminological basis are analyzed. Using the method of analogy, new definitions for managing deviations in projects are proposed. Based on triad integrity, a new system of triads in project management is built, which will subsequently be used, again by analogy, to develop a new methodology for managing deviations in projects.

  2. MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation

    CERN Document Server

    Meyer, Arnd

    2009-01-01

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  3. MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Meyer, Arnd

    2010-01-01

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  4. Detection of severe storm signatures in loblolly pine using seven-year periodic standardized averages and standard deviations

    Science.gov (United States)

    Stevenson Douglas; Thomas Hennessey; Thomas Lynch; Giulia Caterina; Rodolfo Mota; Robert Heineman; Randal Holeman; Dennis Wilson; Keith Anderson

    2016-01-01

    A loblolly pine plantation near Eagletown, Oklahoma was used to test standardized tree ring widths in detecting snow and ice storms. Widths of the two rings immediately following suspected storms were standardized against widths of the seven rings following the storm (Stan1 and Stan2). Values of Stan1 less than -0.900 predict a severe (usually ice) storm when Stan2 is less...

  5. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)

  6. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    International Nuclear Information System (INIS)

    Song Ningfang; Yuan Rui; Jin Jing

    2011-01-01

    Satellite motion included in gyro output disturbs the estimation of the Allan variance coefficients of an onboard fiber optic gyro. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance requires too much offline computational effort and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites increasingly require autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet the demands of satellite autonomy, we present a new autonomous method for estimating the Allan variance coefficients, including the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between the angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and propose a state-space model using a nonlinear adaptive filter technique for quantities previously measured by offline data techniques such as the Allan variance method. Simulations show the method correctly estimates the Allan variance coefficients, R = 2.7965×10⁻⁴ °/h², K = 1.1714×10⁻³ °/h^1.5, B = 1.3185×10⁻³ °/h, N = 5.982×10⁻⁴ °/h^0.5 and Q = 5.197×10⁻⁷ °, in real time, and tracks the degradation of gyro performance due to gamma radiation in space from initial values, R = 0.651 °/h², K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10⁻⁵ °, to final estimates, R = 9.548 °/h², K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10⁻⁴ °. The technique proposed here effectively isolates satellite motion and requires no data storage or any support from the ground.
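
    For comparison with the offline approach the paper seeks to replace, the basic (non-overlapped) Allan variance estimate from stored rate data is only a few lines. A hedged sketch with synthetic gyro output; the noise levels are illustrative only:

```python
import numpy as np

def allan_variance(rate: np.ndarray, m: int) -> float:
    """Non-overlapped Allan variance at cluster size m (cluster time
    tau = m * dt): average the rate over consecutive clusters of m
    samples, then take half the mean squared difference of adjacent
    cluster averages."""
    k = len(rate) // m
    means = rate[: k * m].reshape(k, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# Synthetic gyro output: a constant bias plus white rate noise (angle
# random walk); for white noise the Allan variance falls off as 1/m.
rng = np.random.default_rng(6)
rate = 0.1 + rng.normal(0.0, 0.5, 100_000)   # deg/h, sampled at 1 Hz
for m in (1, 10, 100, 1000):
    print(m, allan_variance(rate, m))
```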

  7. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  8. 41 CFR 115-1.110 - Deviations.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Deviations. 115-1.110 Section 115-1.110 Public Contracts and Property Management Federal Property Management Regulations System (Continued) ENVIRONMENTAL PROTECTION AGENCY 1-INTRODUCTION 1.1-Regulation System § 115-1.110 Deviations...

  9. A comparison of 3-D computed tomography versus 2-D radiography measurements of ulnar variance and ulnolunate distance during forearm rotation.

    Science.gov (United States)

    Kawanishi, Y; Moritomo, H; Omori, S; Kataoka, T; Murase, T; Sugamoto, K

    2014-06-01

    Positive ulnar variance is associated with ulnar impaction syndrome and ulnar variance is reported to increase with pronation. However, radiographic measurement can be affected markedly by the incident angle of the X-ray beam. We performed three-dimensional (3-D) computed tomography measurements of ulnar variance and ulnolunate distance during forearm rotation and compared these with plain radiographic measurements in 15 healthy wrists. From supination to pronation, ulnar variance increased in all cases on the radiographs; mean ulnar variance increased significantly and mean ulnolunate distance decreased significantly. However on 3-D imaging, ulna variance decreased in 12 cases on moving into pronation and increased in three cases; neither the mean ulnar variance nor mean ulnolunate distance changed significantly. Our results suggest that the forearm position in which ulnar variance increased varies among individuals. This may explain why some patients with ulnar impaction syndrome complain of wrist pain exacerbated by forearm supination. It also suggests that standard radiographic assessments of ulnar variance are unreliable. © The Author(s) 2013.

  10. 40 CFR 60.3052 - What else must I report if I have a deviation from the operating limits or the emission limitations?

    Science.gov (United States)

    2010-07-01

    ... control device was bypassed, or if a performance test was conducted that showed a deviation from any... deviation from the operating limits or the emission limitations? 60.3052 Section 60.3052 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR...

  11. 40 CFR 60.2957 - What else must I report if I have a deviation from the operating limits or the emission limitations?

    Science.gov (United States)

    2010-07-01

    ..., or if a performance test was conducted that showed a deviation from any emission limitation. (b) The... deviation from the operating limits or the emission limitations? 60.2957 Section 60.2957 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR...

  12. A Note on the Kinks at the Mean Variance Frontier

    OpenAIRE

    Vörös, J.; Kriens, J.; Strijbosch, L.W.G.

    1997-01-01

    In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return, and that the converse of this statement is false. For the existence of kinks at the efficient frontier, a sufficient condition is given here, and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean variance frontier.

  13. Adaptive behaviors of experts in following standard protocol in trauma management: implications for developing flexible guidelines.

    Science.gov (United States)

    Vankipuram, Mithra; Ghaemmaghami, Vafa; Patel, Vimla L

    2012-01-01

    Critical care environments are complex and dynamic. To adapt to such environments, clinicians may be required to make alterations to their workflows, resulting in deviations from standard procedures. In this work, deviations from standards in trauma critical care are studied. Thirty trauma cases were observed in a Level 1 trauma center. Activities tracked were compared to the Advanced Trauma Life Support standard to determine (i) if deviations had occurred, (ii) the type of deviations and (iii) whether deviations were initiated by individuals or collaboratively by the team. Results show that expert clinicians deviated to innovate, while deviations of novices resulted mostly in error. Experts' well-developed knowledge allows for flexibility and adaptiveness in dealing with standards, resulting in innovative deviations while minimizing errors made. Providing an informatics solution in such a setting would mean that standard protocols would have to be flexible enough to "learn" from new knowledge, yet provide strong support for trainees.

  14. 41 CFR 105-1.110 - Deviation.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Deviation. 105-1.110 Section 105-1.110 Public Contracts and Property Management Federal Property Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION 1-INTRODUCTION 1.1-Regulations System § 105-1.110 Deviation. (a...

  15. 41 CFR 101-1.110 - Deviation.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Deviation. 101-1.110 Section 101-1.110 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS GENERAL 1-INTRODUCTION 1.1-Regulation System § 101-1.110 Deviation...

  16. Some clarifications about the Bohmian geodesic deviation equation and Raychaudhuri's equation

    OpenAIRE

    Rahmani, Faramarz; Golshani, Mehdi

    2017-01-01

    One of the important and famous topics in the general theory of relativity and gravitation is the problem of geodesic deviation and its related singularity theorems. An interesting subject is the investigation of these concepts when quantum effects are considered. Since the definition of a trajectory is not possible in the framework of standard quantum mechanics (SQM), we investigate the problem of the geodesic equation and its related topics in the framework of Bohmian quantum mechanics, in which the ...

  17. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  18. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    Science.gov (United States)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique, developed for atomic clock stability assessment by David W. Allan [1], can be effectively and gainfully applied to continuous-measurement instruments. As an example, P. Werle et al. have applied these techniques to look at signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
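
    For context, the estimator behind an Allan deviation plot is simple: for each averaging time tau, form consecutive bin averages of length tau and take half the mean squared successive difference. A minimal Python sketch for a regularly sampled series (illustrative names; not code from the presentation):

        import numpy as np

        def allan_deviation(y, dt, m_list):
            """Non-overlapping Allan deviation of a regularly sampled series.

            y      : 1-D array of measurements (e.g. CO2 concentration)
            dt     : sampling interval in seconds
            m_list : averaging lengths in samples; tau = m * dt
            """
            y = np.asarray(y, dtype=float)
            taus, adevs = [], []
            for m in m_list:
                n_bins = len(y) // m
                if n_bins < 2:
                    break
                # Bin averages over consecutive windows of length tau.
                bins = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)
                # Allan variance: half the mean squared successive difference.
                avar = 0.5 * np.mean(np.diff(bins) ** 2)
                taus.append(m * dt)
                adevs.append(np.sqrt(avar))
            return np.array(taus), np.array(adevs)

    On a log-log Allan plot, white noise falls off as tau^(-1/2) while linear drift turns the curve upward; these slopes are the signatures used to distinguish noise types and choose calibration intervals.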

  19. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  20. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    Science.gov (United States)

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally
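
    One of the simplest classes of approach identified, approximating a missing SD from the reported range, can be sketched directly. The divisor below uses the expected range of n normal observations (in the style of Wan et al. 2014); this is a generic illustration of the idea, not the specific estimator recommended by the review:

        from scipy.stats import norm

        def sd_from_range(minimum, maximum, n):
            """Approximate a missing SD from the sample range of n values,
            assuming approximate normality. Refines the crude 'range/4'
            rule by using the expected normal range for the given n."""
            divisor = 2 * norm.ppf((n - 0.375) / (n + 0.25))
            return (maximum - minimum) / divisor

    For n = 100 the divisor is about 5, recovering the familiar rule of thumb that the range of roughly 100 normal observations spans about five standard deviations.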

  1. Test of the nonexponential deviations from decay curve of 52V using continuous kinetic function method

    International Nuclear Information System (INIS)

    Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son

    1995-01-01

    The present work is aimed at formulating an experimental approach for searching for the proposed nonexponential deviations from the decay curve, and at describing an attempt to test them in the case of 52V. Some theoretical descriptions of decay processes are formulated in clarified form. A continuous kinetic function (CKF) method is described for the analysis of experimental data, and the CKF for the purely exponential case is taken as a standard for comparison between theoretical and experimental data. The degree of agreement is defined by a goodness factor. Typical oscillatory deviations of the 52V decay were observed over a wide range of times. The proposed deviation, related to interaction between the decay products and the environment, is investigated. A complex type of decay is discussed. (authors). 10 refs., 4 figs., 2 tabs

  2. Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.

    Science.gov (United States)

    Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong

    2018-03-01

    Clinical pathways (CPs) are popular healthcare management tools to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. The flexible semantics of the Business Process Model and Notation (BPMN) language have been shown to be useful for the modeling and analysis of complex protocols. However, in practical cases one may want to exploit the fact that CPs often have the form of task-time matrices. This paper presents a new method for parsing complex BPMN models and aligning traces to the models heuristically. A case study on variance analysis is undertaken, where a CP from practice and two large sets of patient data from an electronic medical record (EMR) database are used. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data is feasible, whereas that was not the case for the existing analysis techniques. We also provide meaningful insights for further improvement.

  3. 40 CFR 60.2225 - What else must I report if I have a deviation from the requirement to have a qualified operator...

    Science.gov (United States)

    2010-07-01

    ... deviation from the requirement to have a qualified operator accessible? 60.2225 Section 60.2225 Protection... PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Commercial and Industrial Solid Waste... report if I have a deviation from the requirement to have a qualified operator accessible? (a) If all...

  4. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
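
    The 'simple random sample of lines' estimator referred to above (Buckland et al. 2001, p. 79) weights each line's encounter rate by its length. A minimal sketch of that commonly cited form (illustrative names; not the authors' code):

        import numpy as np

        def encounter_rate_variance(counts, lengths):
            """Design-based variance of the encounter rate n/L, treating
            the k transect lines as a simple random sample.

            counts  : detections n_i on each line
            lengths : corresponding line lengths l_i
            """
            n_i = np.asarray(counts, dtype=float)
            l_i = np.asarray(lengths, dtype=float)
            k, L, n = len(n_i), l_i.sum(), n_i.sum()
            rate = n / L
            # Length-weighted squared deviations of per-line rates.
            return k / (L**2 * (k - 1)) * np.sum(l_i**2 * (n_i / l_i - rate)**2)

    Under a systematic design over a strong spatial trend this estimator can be badly biased, which is exactly the situation where the poststratification discussed above helps.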

  5. Transport Coefficients from Large Deviation Functions

    Directory of Open Access Journals (Sweden)

    Chloe Ya Gao

    2017-10-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green–Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.

  6. Transport Coefficients from Large Deviation Functions

    Science.gov (United States)

    Gao, Chloe; Limmer, David

    2017-10-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.

  7. Methods for determining the effect of flatness deviations, eccentricity and pyramidal errors on angle measurements

    CSIR Research Space (South Africa)

    Kruger, OA

    2000-01-01

    The effects of flatness deviations, eccentricity and pyramidal errors on face-to-face angle measurements were investigated. The results show that flatness and eccentricity deviations have less effect on angle measurements than do pyramidal errors. 1. Introduction: Polygons and angle blocks are the most important transfer standards in the field of angle metrology. Polygons are used by national metrology institutes (NMIs) as transfer standards to industry, where they are used in conjunction with autocollimators to calibrate index tables, rotary tables and other forms of angle-measuring equipment...

  8. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable, i.e. they prohibit a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation, and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy-conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287].
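
    The paper's central point - that a spatially variance-conserving discretization can still gain or lose variance through the time integrator - is easy to demonstrate numerically. Below is a small, hypothetical 1-D periodic advection experiment (not the paper's scheme; all names and parameters are illustrative). The central-difference operator is skew-symmetric, so it conserves the discrete variance sum(c^2) exactly in continuous time, yet forward Euler time stepping inflates the variance:

        import numpy as np

        def advect(c, u, dx, dt, steps, integrator="euler"):
            """Advect a scalar c with constant velocity u on a periodic grid."""
            def rhs(field):
                # Central-difference advection term -u * dc/dx (periodic);
                # this spatial operator is skew-symmetric, hence
                # variance-conserving in the semi-discrete limit.
                return -u * (np.roll(field, -1) - np.roll(field, 1)) / (2 * dx)

            for _ in range(steps):
                if integrator == "euler":    # forward Euler: variance grows
                    c = c + dt * rhs(c)
                else:                        # explicit midpoint: far smaller drift
                    half = c + 0.5 * dt * rhs(c)
                    c = c + dt * rhs(half)
            return c

        x = np.linspace(0.0, 1.0, 128, endpoint=False)
        c0 = np.sin(2 * np.pi * x)
        for scheme in ("euler", "midpoint"):
            c = advect(c0.copy(), u=1.0, dx=x[1], dt=0.002, steps=500,
                       integrator=scheme)
            print(scheme, "variance ratio:", (c**2).sum() / (c0**2).sum())

    The Euler run inflates the variance by several percent while the midpoint run drifts by a negligible amount; an implicit midpoint (trapezoidal) step would conserve the variance exactly, because the spatial operator is skew-symmetric.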

  9. Recalibration of the 226Ra emanation analysis system

    International Nuclear Information System (INIS)

    Lucas, H.F. Jr.; Markun, F.

    1982-01-01

    The 226Ra emanation system was found to require recalibration. The gain of the various counting systems was established to about ±0.5%. The variance introduced into the analysis by multiple counting systems was low and corresponded to a fractional standard deviation of ±0.5%. The variance introduced into the analysis by both multiple counting systems and multiple counting chambers needs to be redetermined but is less than a fractional standard deviation of ±2%. The newly established calibration factor of 5.66 cpm/pg 226Ra is about 6% greater than that used previously. The leakage of radon into the greased fittings of the emanation flask, which was indicated in an earlier study, was not confirmed.

  10. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  11. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  12. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    International Nuclear Information System (INIS)

    Garcia-Pareja, S.; Vilches, M.; Lallena, A.M.

    2007-01-01

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the 'hot' regions of the accelerator, information which is basic to developing a source model for this therapy tool.

  13. 41 CFR 109-1.110-50 - Deviation procedures.

    Science.gov (United States)

    2010-07-01

    ... best interest of the Government; (3) If applicable, the name of the contractor and identification of... background information which will contribute to a full understanding of the desired deviation. (b)(1... authorized to grant deviations to the DOE-PMR. (d) Requests for deviations from the FPMR will be coordinated...

  14. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We found that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  15. Total focusing method (TFM) robustness to material deviations

    Science.gov (United States)

    Painchaud-April, Guillaume; Badeau, Nicolas; Lepage, Benoit

    2018-04-01

    The total focusing method (TFM) is becoming an accepted nondestructive evaluation method for industrial inspection. What was a topic of discussion in the applied research community just a few years ago is now being deployed in critical industrial applications, such as inspecting welds in pipelines. However, the method's sensitivity to unexpected parametric changes (material and geometric) has not been rigorously assessed. In this article, we investigate the robustness of TFM in relation to unavoidable deviations from modeled nominal inspection component characteristics, such as sound velocities and uncertainties about the parts' internal and external diameters. We also review TFM's impact on the standard inspection modes often encountered in industrial inspections, and we present a theoretical model supported by empirical observations to illustrate the discussion.

  16. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  17. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  18. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    ... the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.

  19. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    Science.gov (United States)

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams under a constant total bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
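
    The flavour of such a closed-form allocation can be shown with the textbook exponential rate-distortion model D_i(R_i) = var_i * 2^(-2*R_i), used here as a stand-in for the paper's rho-domain model (all names are illustrative). Forcing equal distortion across sequences under a total budget gives each sequence the average rate plus a correction for its complexity relative to the geometric mean:

        import numpy as np

        def equal_distortion_allocation(variances, total_rate):
            """Rates R_i that equalize D_i = var_i * 2**(-2*R_i) subject to
            sum(R_i) = total_rate, i.e. a zero-variance distortion profile.

            Rates can come out negative for very easy sequences, in which
            case a water-filling style correction would be needed.
            """
            v = np.asarray(variances, dtype=float)
            log_gm = np.mean(np.log2(v))              # log2 geometric mean
            rates = total_rate / len(v) + 0.5 * (np.log2(v) - log_gm)
            distortion = 2.0 ** (log_gm - 2.0 * total_rate / len(v))
            return rates, distortion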

  20. The large deviation approach to statistical mechanics

    International Nuclear Information System (INIS)

    Touchette, Hugo

    2009-01-01

    The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein's theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.
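
    For reference, the central object of the theory reviewed here is the large deviation principle. Informally, a random variable A_N (for example a sample mean over N variables) satisfies it with rate function I when

        \[ P(A_N \in da) \asymp e^{-N I(a)}\,da, \qquad
           I(a) = \sup_{k}\,\{ka - \lambda(k)\}, \qquad
           \lambda(k) = \lim_{N\to\infty} \tfrac{1}{N} \ln E\big[e^{N k A_N}\big], \]

    where the Legendre-Fenchel expression for I holds under the differentiability conditions of the Gärtner-Ellis theorem; the review's point about nonconcave entropies concerns exactly the cases where this Legendre structure breaks down.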

  1. The large deviation approach to statistical mechanics

    Science.gov (United States)

    Touchette, Hugo

    2009-07-01

    The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein’s theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.

  2. Transport Coefficients from Large Deviation Functions

    OpenAIRE

    Gao, Chloe Ya; Limmer, David T.

    2017-01-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate th...

  3. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the 'hot' regions of the accelerator, information which is basic to developing a source model for this therapy tool.

  4. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  5. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  6. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  7. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.

  8. Autonomous estimation of Allan variance coefficients of onboard fiber optic gyro

    Energy Technology Data Exchange (ETDEWEB)

    Song Ningfang; Yuan Rui; Jin Jing, E-mail: rayleing@139.com [School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191 (China)

    2011-09-15

    Satellite motion included in the gyro output disturbs the estimation of the Allan variance coefficients of an onboard fiber optic gyro. Moreover, as a standard method for noise analysis of fiber optic gyros, the Allan variance method requires too much offline computation and data storage to be applied to online estimation. In addition, with the development of deep space exploration, satellites require more autonomy, including autonomous fault diagnosis and reconfiguration. To overcome these barriers and meet satellite autonomy requirements, we present a new autonomous method for estimating the Allan variance coefficients, comprising the rate ramp, rate random walk, bias instability, angular random walk and quantization noise coefficients. In the method, we calculate differences between the angle increments of the star sensor and the gyro to remove satellite motion from the gyro output, and we propose a state-space model using a nonlinear adaptive filter technique for quantities previously obtained from offline data techniques such as the Allan variance method. Simulations show that the method correctly estimates the Allan variance coefficients, R = 2.7965×10^-4 °/h^2, K = 1.1714×10^-3 °/h^1.5, B = 1.3185×10^-3 °/h, N = 5.982×10^-4 °/h^0.5 and Q = 5.197×10^-7 °, in real time, and tracks the degradation of gyro performance due to gamma radiation in space from initial values, R = 0.651 °/h^2, K = 0.801 °/h^1.5, B = 0.385 °/h, N = 0.0874 °/h^0.5 and Q = 8.085×10^-5 °, to final estimates, R = 9.548 °/h^2, K = 9.524 °/h^1.5, B = 2.234 °/h, N = 0.5594 °/h^0.5 and Q = 5.113×10^-4 °. The technique proposed here effectively isolates satellite motion, and requires no data storage or support from the ground.
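
    The five coefficients being estimated enter the standard Allan variance model for gyro noise. In the usual IEEE formulation (quoted here for reference; the abstract assumes it rather than stating it):

        \[ \sigma^2(\tau) = \frac{3Q^2}{\tau^2} + \frac{N^2}{\tau}
           + \frac{2\ln 2}{\pi}\,B^2 + \frac{K^2\tau}{3} + \frac{R^2\tau^2}{2} \]

    Each term dominates on a different slope of the log-log Allan plot (tau^-2, tau^-1, tau^0, tau^1 and tau^2 for the variance), which is what makes the quantization noise Q, angle random walk N, bias instability B, rate random walk K and rate ramp R separately identifiable.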

  9. Severe obesity is a limitation for the use of body mass index standard deviation scores in children and adolescents.

    Science.gov (United States)

    Júlíusson, Pétur B; Roelants, Mathieu; Benestad, Beate; Lekhal, Samira; Danielsen, Yngvild; Hjelmesaeth, Jøran; Hertel, Jens K

    2018-02-01

    We analysed the distribution of the body mass index standard deviation scores (BMI-SDS) in children and adolescents seeking treatment for severe obesity, according to the International Obesity Task Force (IOTF), World Health Organization (WHO) and the national Norwegian Bergen Growth Study (BGS) BMI reference charts, and the percentage above the International Obesity Task Force 25 cut-off (IOTF-25). This was a cross-sectional study of 396 children aged four to 17 years, who attended a tertiary care obesity centre in Norway from 2009 to 2015. Their BMI was converted to SDS using the three growth references and expressed as the percentage above IOTF-25. The percentage of body fat was assessed by bioelectrical impedance analysis. Regardless of which BMI reference chart was used, the BMI-SDS was significantly different between the age groups, with a wider range of higher values up to 10 years of age and a narrower range of lower values thereafter. The distributions of the percentage above IOTF-25 and the percentage of body fat were more consistent across age groups. Our findings suggest that it may be more appropriate to use the percentage above a particular BMI cut-off, such as the percentage above IOTF-25, than the IOTF, WHO and BGS BMI-SDS in paediatric patients with severe obesity. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  10. a Web-Based Framework for Visualizing Industrial Spatiotemporal Distribution Using Standard Deviational Ellipse and Shifting Routes of Gravity Centers

    Science.gov (United States)

    Song, Y.; Gui, Z.; Wu, H.; Wei, Y.

    2017-09-01

    Analysing the spatiotemporal distribution patterns of different industries, and their dynamics, can help us learn the macro-level developing trends of those industries, and in turn provides references for industrial spatial planning. However, the analysis is a challenging task which requires an easy-to-understand information presentation mechanism and a powerful computational technology to support the visual analytics of big data on the fly. For this reason, this research proposes a web-based framework to enable such visual analytics. The framework uses the standard deviational ellipse (SDE) and the shifting route of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories. The calculation of gravity centers and ellipses is parallelized using Apache Spark to accelerate the processing. In the experiments, we use an enterprise registration dataset for Mainland China from 1960 to 2015 that contains fine-grained location information (i.e., the coordinates of each individual enterprise) to demonstrate the feasibility of this framework. The experimental results show that the developed visual analytics method is helpful for understanding the multi-level patterns and developing trends of different industries in China. Moreover, the proposed framework can be used to analyse any natural or social spatiotemporal point process with a large data volume, such as crime or disease.
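
    For readers unfamiliar with the SDE: it summarizes a point set by its mean center plus the directional spread of the coordinates. One common formulation derives the ellipse from the eigendecomposition of the coordinate covariance matrix, as in the minimal single-machine sketch below (the paper distributes the per-year computation with Apache Spark; the names here are illustrative):

        import numpy as np

        def standard_deviational_ellipse(x, y):
            """Mean center, orientation (radians, counterclockwise from the
            x-axis) and semi-axis lengths of the standard deviational
            ellipse of a 2-D point set."""
            pts = np.column_stack([x, y]).astype(float)
            center = pts.mean(axis=0)
            cov = np.cov((pts - center).T)
            eigvals, eigvecs = np.linalg.eigh(cov)   # ascending order
            major = eigvecs[:, 1]                    # direction of max spread
            angle = np.arctan2(major[1], major[0])
            semi_axes = np.sqrt(eigvals[::-1])       # (major, minor) std. devs.
            return center, angle, semi_axes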

  11. A WEB-BASED FRAMEWORK FOR VISUALIZING INDUSTRIAL SPATIOTEMPORAL DISTRIBUTION USING STANDARD DEVIATIONAL ELLIPSE AND SHIFTING ROUTES OF GRAVITY CENTERS

    Directory of Open Access Journals (Sweden)

    Y. Song

    2017-09-01

    Analysing the spatiotemporal distribution patterns of different industries, and their dynamics, can help us learn the macro-level developing trends of those industries, and in turn provides references for industrial spatial planning. However, the analysis is a challenging task which requires an easy-to-understand information presentation mechanism and a powerful computational technology to support the visual analytics of big data on the fly. For this reason, this research proposes a web-based framework to enable such visual analytics. The framework uses the standard deviational ellipse (SDE) and the shifting route of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories. The calculation of gravity centers and ellipses is parallelized using Apache Spark to accelerate the processing. In the experiments, we use an enterprise registration dataset for Mainland China from 1960 to 2015 that contains fine-grained location information (i.e., the coordinates of each individual enterprise) to demonstrate the feasibility of this framework. The experimental results show that the developed visual analytics method is helpful for understanding the multi-level patterns and developing trends of different industries in China. Moreover, the proposed framework can be used to analyse any natural or social spatiotemporal point process with a large data volume, such as crime or disease.

  12. Mortality and morbidity risks vary with birth weight standard deviation score in growth restricted extremely preterm infants.

    Science.gov (United States)

    Yamakawa, Takuji; Itabashi, Kazuo; Kusuda, Satoshi

    2016-01-01

    To assess whether the mortality and morbidity risks vary with birth weight standard deviation score (BWSDS) in growth restricted extremely preterm infants. This was a multicenter retrospective cohort study using the database of the Neonatal Research Network of Japan, including 9149 infants born between 2003 and 2010 at <28 weeks gestation. According to the BWSDSs, the infants were classified as: <-2.0, -2.0 to -1.5, -1.5 to -1.0, -1.0 to -0.5, and ≥-0.5. Infants with BWSDS ≥-0.5 were defined as the non-growth-restricted group. After adjusting for covariates, the risks of mortality and some morbidities differed among the BWSDS groups. Compared with the non-growth-restricted group, the adjusted odds ratios (aOR) for mortality [aOR, 1.69; 95% confidence interval (CI), 1.35-2.12] and chronic lung disease (CLD) (aOR, 1.28; 95% CI, 1.07-1.54) were higher among the infants with BWSDS -1.5 to <-1.0. The aOR for severe retinopathy of prematurity (ROP) (aOR, 1.36; 95% CI, 1.09-1.71) and sepsis (aOR, 1.72; 95% CI, 1.32-2.24) were higher among the infants with BWSDS -2.0 to <-1.5. The aOR for necrotizing enterocolitis (NEC) (aOR, 2.41; 95% CI, 1.64-3.55) was increased at a BWSDS <-2.0. Growth restriction confers additional risks of mortality and of morbidities such as CLD, ROP, sepsis and NEC on extremely preterm infants, and these risks may vary with BWSDS. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Nutrient Contents and Sensory Quality Assessment of Home ...

    African Journals Online (AJOL)

    Fresh cow milk and home-prepared cheese and yogurt were analyzed chemically using standard methods of AOAC, Atomic Absorption Spectrometry and Spectrophotometry. Data obtained were subjected to statistical analysis: means and standard deviation, analysis of variance (ANOVA) and means separated using ...

  15. Simplified propagation of standard uncertainties

    International Nuclear Information System (INIS)

    Shull, A.H.

    1997-01-01

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine whether a standard is adequate for its intended use or calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined, or is determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing them into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper.
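
    The shortcut alluded to is the familiar quadrature rule: for independent inputs, absolute standard uncertainties combine in quadrature for sums and differences, and relative ones for products and quotients. A spreadsheet-level sketch (a generic illustration under the independence assumption, not the paper's own worksheet; the example quantities are invented):

        import math

        def combine_absolute(*u_abs):
            """Standard uncertainty of a sum/difference of independent terms."""
            return math.sqrt(sum(u * u for u in u_abs))

        def combine_relative(*u_rel):
            """Relative standard uncertainty of a product/quotient of
            independent factors."""
            return math.sqrt(sum(u * u for u in u_rel))

        # Hypothetical prepared standard: concentration = mass * purity / volume.
        mass, u_mass = 100.0, 0.05        # mg and absolute uncertainty
        purity, u_purity = 0.999, 0.001   # fraction and absolute uncertainty
        volume, u_volume = 50.0, 0.02     # mL and absolute uncertainty

        conc = mass * purity / volume
        u_conc = conc * combine_relative(u_mass / mass, u_purity / purity,
                                         u_volume / volume)
        print(f"{conc:.4f} mg/mL +/- {u_conc:.4f}")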

  16. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple approach of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to that of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
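
    For reference, with a covariance estimate Sigma the unconstrained minimum variance portfolio has the closed form w proportional to Sigma^-1 * 1, normalized to sum to one. A minimal sketch using the sample covariance, the simplest of the estimators compared above (illustrative; the long-only and 130/30 variants require a numerical optimizer with weight constraints):

        import numpy as np

        def min_variance_weights(returns):
            """Unconstrained minimum variance weights from a T x N matrix
            of asset returns, using the sample covariance matrix."""
            cov = np.cov(returns, rowvar=False)
            ones = np.ones(cov.shape[0])
            w = np.linalg.solve(cov, ones)   # Sigma^{-1} * 1
            return w / w.sum()               # weights sum to one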

  17. Towards a large deviation theory for strongly correlated systems

    International Nuclear Information System (INIS)

    Ruiz, Guiomar; Tsallis, Constantino

    2012-01-01

    A large-deviation connection of statistical mechanics is provided by N independent binary variables, the (N→∞) limit yielding Gaussian distributions. The probability of n≠N/2 out of N throws is governed by e^{-Nr}, with r related to the entropy. Large deviations for a strongly correlated model characterized by indices (Q,γ) are studied, the (N→∞) limit yielding Q-Gaussians (Q→1 recovers a Gaussian). Its large deviations are governed by e_q^{-Nr_q} (∝1/N^{1/(q-1)}, q>1), with q=(Q-1)/(γ[3-Q])+1. This illustration opens the door towards a large-deviation foundation of nonextensive statistical mechanics. -- Highlights: ► We introduce the formalism of relative entropy for a single random binary variable and its q-generalization. ► We study a model of N strongly correlated binary random variables and their large-deviation probabilities. ► The large-deviation probability of the strongly correlated model exhibits a q-exponential decay whose argument is proportional to N, as extensivity requires. ► Our results point to a q-generalized large deviation theory and suggest a large-deviation foundation of nonextensive statistical mechanics.

  18. 49 CFR 192.913 - When may an operator deviate its program from certain requirements of this subpart?

    Science.gov (United States)

    2010-10-01

    ... Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.913 When may an operator deviate its program...

  19. USL/DBMS NASA/PC R and D project C programming standards

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Moreau, Dennis R.

    1984-01-01

    A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.

  20. The large deviations theorem and ergodicity

    International Nuclear Information System (INIS)

    Gu Rongbao

    2007-01-01

    In this paper, some relationships between stochastic and topological properties of dynamical systems are studied. For a continuous map f from a compact metric space X into itself, we show that if f satisfies the large deviations theorem then it is topologically ergodic. Moreover, we introduce the notion of topological strong ergodicity, and prove that if f is a topologically strongly ergodic map satisfying the large deviations theorem then it is sensitively dependent on initial conditions.

  1. Large deviations

    CERN Document Server

    Deuschel, Jean-Dominique; Deuschel, Jean-Dominique

    2001-01-01

    This is the second printing of the book first published in 1988. The first four chapters of the volume are based on lectures given by Stroock at MIT in 1987. They form an introduction to the basic ideas of the theory of large deviations and make a suitable package on which to base a semester-length course for advanced graduate students with a strong background in analysis and some probability theory. A large selection of exercises presents important material and many applications. The last two chapters present various non-uniform results (Chapter 5) and outline the analytic approach that allow

  2. PoDMan: Policy Deviation Management

    Directory of Open Access Journals (Sweden)

    Aishwarya Bakshi

    2017-07-01

    Whenever an unexpected or exceptional situation occurs, complying with the existing policies may not be possible. The main objective of this work is to assist individuals and organizations in deciding whether and how to deviate from policies and perform a non-complying action. The paper proposes utilizing software agents as supportive tools to suggest the best non-complying action when deviating from policies. The article also introduces a process by which the decision on the choice of non-complying action can be made. The work is motivated by a real scenario observed in a hospital in Norway and demonstrated through the same settings.

  3. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These included Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to include only variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  4. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    Science.gov (United States)

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance...
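
    The histogram construction described above lends itself to a direct implementation. The following is a minimal sketch, assuming a digitized current trace as a 1-D NumPy array; the window width and binning are illustrative choices, not the paper's settings.

        import numpy as np

        def mean_variance_histogram(trace, n_window, bins=100):
            """Slide an n_window-sample window over the trace and histogram
            the (windowed mean, windowed variance) pairs in two dimensions."""
            kernel = np.ones(n_window) / n_window
            means = np.convolve(trace, kernel, mode="valid")
            sq_means = np.convolve(trace**2, kernel, mode="valid")
            variances = sq_means - means**2  # within-window variance
            return np.histogram2d(means, variances, bins=bins)

        # Defined current levels (closed, open, sublevels) appear as peaks in
        # the low-variance rows; repeating the analysis over a range of
        # n_window values yields event counts as a function of window width.
        rng = np.random.default_rng(0)
        demo = np.concatenate([rng.normal(0.0, 0.2, 500),   # closed level
                               rng.normal(1.0, 0.2, 500)])  # open level
        hist, mean_edges, var_edges = mean_variance_histogram(demo, n_window=10)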

  5. Evaluating deviations in prostatectomy patients treated with IMRT.

    Science.gov (United States)

    Sá, Ana Cravo; Peres, Ana; Pereira, Mónica; Coelho, Carina Marques; Monsanto, Fátima; Macedo, Ana; Lamas, Adrian

    2016-01-01

    To evaluate the deviations in prostatectomy patients treated with IMRT in order to calculate appropriate margins for creating the PTV. Defining inappropriate margins can lead to underdosing of target volumes and overdosing of healthy tissues, increasing morbidity. 223 CBCT images used for alignment with the CT planning scan based on bony anatomy were analyzed in 12 patients treated with IMRT following prostatectomy. Shifts of the CBCT images were recorded in three directions to calculate the margin required to create the PTV. The mean and standard deviation (SD) values in millimetres were -0.05 ± 1.35 in the LR direction, -0.03 ± 0.65 in the SI direction and -0.02 ± 2.05 in the AP direction. The systematic errors measured in the LR, SI and AP directions were 1.35 mm, 0.65 mm and 2.05 mm, with random errors of 2.07 mm, 1.45 mm and 3.16 mm, resulting in PTV margins of 4.82 mm, 2.64 mm and 7.33 mm, respectively. With IGRT we suggest margins of 5 mm, 3 mm and 8 mm in the LR, SI and AP directions, respectively, for PTV1 and PTV2. Therefore, this study supports an anisotropic margin expansion to the PTV, with the largest expansion in the AP direction and the smallest in the SI direction.
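
    The quoted margins are consistent with the widely used van Herk recipe, margin = 2.5 Σ + 0.7 σ, where Σ is the systematic and σ the random error; the abstract does not name the formula, so this reconstruction is an assumption.

        # Systematic and random errors (mm) as reported above.
        systematic = {"LR": 1.35, "SI": 0.65, "AP": 2.05}
        random_err = {"LR": 2.07, "SI": 1.45, "AP": 3.16}

        for axis in ("LR", "SI", "AP"):
            margin = 2.5 * systematic[axis] + 0.7 * random_err[axis]
            print(f"{axis}: {margin:.2f} mm")
        # LR: 4.82 mm, SI: 2.64 mm, AP: 7.34 mm -- matching the reported
        # margins up to rounding.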

  6. Prevalence of postural deviations and associated factors in children and adolescents: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Mariana Vieira Batistão

    Introduction: Postural deviations are frequent in childhood and may cause pain and functional impairment. Previously, only a few studies have examined the association between body posture and intrinsic and extrinsic factors. Objective: To assess the prevalence of postural changes in school children, and to determine, using multiple logistic regression analysis, whether factors such as age, gender, BMI, handedness and physical activity might explain these deviations. Methods: The posture of 288 students was assessed by observation. Subjects were aged between 6 and 15 years, 59.4% (n = 171) of which were female. The mean age was 10.6 (± 2.4) years. Mean body weight was 38.6 (± 12.7) kg and mean height was 1.5 (± 0.1) m. A digital scale, a tapeline, a plumb line and standardized forms were used to collect data. The data were analyzed descriptively using the chi-square test and logistic regression analysis (significance level of 5%). Results: We found the following deviations to be prevalent among schoolchildren: forward head posture (53.5%), shoulder elevation (74.3%), asymmetry of the iliac crests (51.7%), valgus knees (43.1%), thoracic hyperkyphosis (30.2%), lumbar hyperlordosis (37.2%) and winged shoulder blades (66.3%). The associated factors were age, gender, BMI and physical activity. Discussion: There was a high prevalence of postural deviations, and the intrinsic and extrinsic factors partially explain these deviations. Conclusion: These findings contribute to the understanding of how and why these deviations develop, and to the implementation of preventive and rehabilitation programs, given that some of the associated factors are modifiable.

  7. A study on the deviation aspects of the poem “The Eightieth Stage”

    Directory of Open Access Journals (Sweden)

    Soghra Salmaninejad Mehrabadi

    2016-02-01

    ...'s innovation. New expressions are also used in other parts of abnormality in “The Eightieth Stage”. Stylistic deviation: Sometimes Akhavan uses local and slang words, and words with different songs and music, which produces deviation as well; this usage is one kind of abnormality. Words such as “han, hey, by the truth, pity, hoome, kope, meydanak and ...” are of this type of abnormality. Ancient deviation: One way to break out of the habit of poetry is attention to ancient words and actions. Archaism is one of the factors affecting deviation; archaistic deviation helps to make the old sp... According to Leech, the ancient is the survival of the old language in the now. Syntactic factors and the type of music and words are effective in the escape from standard language. “Sowrat (sharpness), hamgenan (counterparts), parine (last year), pour (son), pahlaw (champion)” are words that show Akhavan's attention to archaism. Ancient pronunciation is another part of his work. Furthermore, the use of mythology and allusion has created deviation of this type. Cases such as anagram adjectival compounds, the use of two prepositions for a word, and the use of the adjective and noun in the plural form are signs of archaism in grammar and syntax. He is interested in the grammatical elements of the Khorasani style, and most elements of this style are used in “The Eightieth Stage”. Semantic deviation: Semantic deviation is caused by imagery. The poet frequently uses literary figures; in this way he produces new meaning and thereby highlights his poem. Simile, metaphor, personification and irony are the most important examples of this deviation. Apparently the maximum deviation from the norm in this poem is periodic deviation (ancient, or archaism); the second place belongs to semantic deviation, in which metaphor is the most meaningful. The effect of metaphor in this poem is quite strong. In general, the poet's attention to the different deviations is one of his techniques and the key...

  8. Two examples of non strictly convex large deviations

    OpenAIRE

    De Marco, Stefano; Jacquier, Antoine; Roome, Patrick

    2016-01-01

    We present two examples of a large deviations principle where the rate function is not strictly convex. This is motivated by a model used in mathematical finance (the Heston model), and adds a new item to the zoology of non strictly convex large deviations. For one of these examples, we show that the Cramér-type large deviations rate function coincides with the Freidlin-Wentzell one when contraction principles are applied.

  9. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed. This creates difficulties in breeding for protein. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, even though the effect is less decisive. In this investigation, the genotypic variance and the genotype-environment interaction variance, which contribute to the total (phenotypic) variance, were analysed with the purpose of giving the breeder an idea of how selection should be made. It was found that for seed protein content and for yield, the genotype-environment interaction variance contributes more to the total variance than the genotypic variance does. In the analysis of the time required to reach maturity, the genotypic variance was found to be larger than the genotype-environment interaction variance. It is therefore clear why selection for time to maturity is much easier than selection for protein or yield: protein selected in one location may differ from that in other locations. (author)
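
    A toy simulation makes the reported pattern concrete: when the genotype-environment interaction variance exceeds the genotypic variance, genotypes selected in one environment need not rank highest in another. All effect sizes below are made up for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        n_g, n_e, n_rep = 20, 5, 4
        g = rng.normal(0.0, 1.0, n_g)                # genotypic effects, V_G = 1
        ge = rng.normal(0.0, 1.5, (n_g, n_e))        # GxE effects, V_GxE = 2.25
        e = rng.normal(0.0, 0.5, (n_g, n_e, n_rep))  # residual error, V_E = 0.25
        y = g[:, None, None] + ge[:, :, None] + e    # phenotypes

        # Sample analogues of the components of the phenotypic variance:
        print(g.var(ddof=1), ge.var(ddof=1), y.var(ddof=1))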

  10. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented.

  11. Residual standard deviation: Validation of a new measure of dual-task cost in below-knee prosthesis users.

    Science.gov (United States)

    Howard, Charla L; Wallace, Chris; Abbas, James; Stokic, Dobrivoje S

    2017-01-01

    We developed and evaluated properties of a new measure of variability in stride length and cadence, termed residual standard deviation (RSD). To calculate RSD, stride length and cadence are regressed against velocity to derive the best-fit line, from which the variability (SD) of the distances between the actual and predicted data points is calculated. We examined construct, concurrent, and discriminative validity of RSD using a dual-task paradigm in 14 below-knee prosthesis users and 13 age- and education-matched controls. Subjects first walked over an electronic walkway while separately performing a serial subtraction task and a backwards spelling task, and then walked at self-selected slow, normal, and fast speeds used to derive the best-fit line for stride length and cadence against velocity. Construct validity was demonstrated by a significantly greater increase in RSD during dual-task gait in prosthesis users than in controls (group-by-condition interaction, stride length p=0.0006, cadence p=0.009). Concurrent validity was established against the coefficient of variation (CV) by moderate-to-high correlations (r=0.50-0.87) between dual-task cost RSD and dual-task cost CV for both stride length and cadence in prosthesis users and controls. Discriminative validity was documented by the ability of the dual-task cost calculated from RSD to effectively differentiate prosthesis users from controls (area under the receiver operating characteristic curve, stride length 0.863, p=0.001, cadence 0.808, p=0.007), which was better than the ability of the dual-task cost CV (0.692 and 0.648, respectively, not significant). These results validate RSD as a new measure of variability in below-knee prosthesis users. Future studies should include larger cohorts and other populations to ascertain its generalizability. Copyright © 2016 Elsevier B.V. All rights reserved.
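
    The RSD computation itself is short; a minimal sketch following the description above (regress stride length on velocity, then take the standard deviation of the residuals), with illustrative data:

        import numpy as np

        def residual_standard_deviation(velocity, stride_length):
            slope, intercept = np.polyfit(velocity, stride_length, deg=1)
            residuals = stride_length - (slope * velocity + intercept)
            return residuals.std(ddof=2)  # two dof absorbed by the fitted line

        velocity = np.array([0.8, 1.0, 1.2, 1.4, 1.1, 0.9])      # m/s
        stride = np.array([1.10, 1.28, 1.45, 1.60, 1.33, 1.18])  # m
        print(residual_standard_deviation(velocity, stride))

    Dual-task cost can then be expressed as the change in RSD from single-task to dual-task walking, analogous to the CV-based cost it was validated against.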

  12. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    Title 29—Labor; Regulations Relating to Labor (Continued); Occupational Safety and Health Administration, Department of Labor; Williams-Steiger Occupational Safety and Health Act of 1970; General § 1905.5 Effect of variances. All variances... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  13. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance, a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
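
    A sketch of the estimator's core idea: each squared return is replaced by a normalized squared range. For a Brownian motion the normalizing constant is E[range²] = 4 ln 2 per unit of variance (Parkinson's constant); the per-interval highs and lows of the log-price are assumed given.

        import numpy as np

        LAMBDA2 = 4.0 * np.log(2.0)  # second moment of the range of a standard BM

        def realized_range_variance(highs, lows):
            """highs, lows: per-interval supremum/infimum of the log-price."""
            ranges = np.asarray(highs) - np.asarray(lows)
            return np.sum(ranges**2) / LAMBDA2

        print(realized_range_variance([0.012, 0.009, 0.015, 0.011, 0.008],
                                      [0.001, -0.002, 0.004, 0.000, -0.003]))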

  14. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  15. Sampling Variances and Covariances of Parameter Estimates in Item Response Theory.

    Science.gov (United States)

    1982-08-01

    Substituting (15) into (16) and solving for k and K yields equation (17), where the b terms are means over the m and r items, respectively. To find the variance... C5 and C12 were treated as known. We find that the standard errors of B1 to B5 are increased drastically by ignorance of C1 to C5; all...

  16. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  17. Test of nonexponential deviations from decay curve of 52V using continuous kinetic function method

    International Nuclear Information System (INIS)

    Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son

    1993-01-01

    The present work is aimed at formulating an experimental approach to test proposed descriptions of nonexponential decay in the case of 52 V. Some theoretical descriptions of decay processes are formulated in clarified form. The continuous kinetic function (CKF) method is used for the analysis of experimental data, and the CKF for the purely exponential case is taken as a standard for comparison between theoretical and experimental data. The degree of agreement is quantified by a goodness factor. Typical oscillatory deviations of the 52 V decay curve were observed over a wide range of times. The proposed deviation, related to interaction between the decay products and the environment, is investigated, and a complex type of decay is discussed. (author). 10 refs, 2 tabs, 5 figs

  18. 21 CFR 330.11 - NDA deviations from applicable monograph.

    Science.gov (United States)

    2010-04-01

    Title 21—Food and Drugs (2010-04-01); ...EFFECTIVE AND NOT MISBRANDED; Administrative Procedures § 330.11 NDA deviations from applicable monograph. A new drug application requesting approval of an OTC drug deviating in any respect from a monograph that...

  19. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  20. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.

  1. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Assembly precision optimization of complex products substantially improves product quality. Because a variety of deviation sources couple with one another, the target of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, a sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation to deviation source dimension variation. Second, according to the assembly constraint relations, assembly sequences and locating scheme, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created from the deviation transmission paths, and the corresponding scalar equations for each dimension are established. Then, deviation source sensitivity is calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking the assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
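
    A minimal sketch of the sensitivity definition above: the ratio of assembly-dimension variation to deviation-source variation, obtained as a first-order Taylor (finite-difference) coefficient. The scalar assembly function below is a hypothetical stand-in, not the paper's vector-loop model.

        import numpy as np

        def assembly_dimension(sources):
            # Hypothetical scalar equation from a dimension vector loop.
            x1, x2, x3 = sources
            return x1 + 0.5 * x2 - 0.8 * x3

        def sensitivities(f, nominal, h=1e-6):
            nominal = np.asarray(nominal, dtype=float)
            sens = np.empty_like(nominal)
            for i in range(nominal.size):
                step = np.zeros_like(nominal)
                step[i] = h
                sens[i] = (f(nominal + step) - f(nominal - step)) / (2 * h)
            return sens  # d(assembly dimension) / d(source dimension)

        print(sensitivities(assembly_dimension, [10.0, 5.0, 2.0]))  # [1.0, 0.5, -0.8]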

  2. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variances (20%, 60%, 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and EF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference between the three groups. Conclusion: For arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and EF values. (authors)

  3. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory, which accounts for arm and eye movements with noisy signal inputs, was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory, and analytical solutions are obtained. Furthermore, we propose a new version of the minimum-variance theory which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  4. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  5. Assessing factors that influence deviations between measured and calculated reference evapotranspiration

    Science.gov (United States)

    Rodny, Marek; Nolz, Reinhard

    2017-04-01

    Evapotranspiration (ET) is a fundamental component of the hydrological cycle, but challenging to be quantified. Lysimeter facilities, for example, can be installed and operated to determine ET, but they are costly and represent only point measurements. Therefore, lysimeter data are traditionally used to develop, calibrate, and validate models that allow calculating reference evapotranspiration (ET0) based on meteorological data, which can be measured more easily. The standardized form of the well-known FAO Penman-Monteith equation (ASCE-EWRI) is recommended as a standard procedure for estimating ET0 and subsequently plant water requirements. Applied and validated under different climatic conditions, the Penman-Monteith equation is generally known to deliver proper results. On the other hand, several studies documented deviations between measured and calculated ET0 depending on environmental conditions. Potential reasons are, for example, differing or varying surface characteristics of the lysimeter and the location where the weather instruments are placed. Advection of sensible heat (transport of dry and hot air from surrounding areas) might be another reason for deviating ET-values. However, elaborating causal processes is complex and requires comprehensive data of high quality and specific analysis techniques. In order to assess influencing factors, we correlated differences between measured and calculated ET0 with pre-selected meteorological parameters and related system parameters. Basic data were hourly ET0-values from a weighing lysimeter (ET0_lys) with a surface area of 2.85 m2 (reference crop: frequently irrigated grass), weather data (air and soil temperature, relative humidity, air pressure, wind velocity, and solar radiation), and soil water content in different depths. ET0_ref was calculated in hourly time steps according to the standardized procedure after ASCE-EWRI (2005). Deviations between both datasets were calculated as ET0_lys-ET0_ref and
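
    The screening step described above reduces to differencing the two hourly ET0 series and correlating the result with each candidate driver. A small sketch, with placeholder arrays standing in for the hourly data:

        import numpy as np

        def deviation_correlations(et0_lys, et0_ref, drivers):
            """drivers: dict name -> hourly series aligned with the ET0 arrays."""
            dev = np.asarray(et0_lys) - np.asarray(et0_ref)
            return {name: np.corrcoef(dev, np.asarray(x))[0, 1]
                    for name, x in drivers.items()}

        rng = np.random.default_rng(0)
        wind = rng.uniform(0.5, 4.0, 24)                    # hourly wind speed
        lys = 0.10 + 0.02 * wind + rng.normal(0, 0.01, 24)  # measured ET0
        ref = np.full(24, 0.12)                             # calculated ET0
        print(deviation_correlations(lys, ref, {"wind": wind}))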

  6. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  7. Perception of midline deviations in smile esthetics by laypersons.

    Science.gov (United States)

    Ferreira, Jamille Barros; Silva, Licínio Esmeraldo da; Caetano, Márcia Tereza de Oliveira; Motta, Andrea Fonseca Jardim da; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson

    2016-01-01

    To evaluate the esthetic perception of upper dental midline deviation by laypersons and whether adjacent structures influence their judgment. An album with 12 randomly distributed frontal-view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal-view smiling photograph was modified to create deviations of 1 mm to 5 mm in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). The Wilcoxon test, Student's t-test and Mann-Whitney test were applied, adopting a 5% level of significance. Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p < 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p < 0.05) when the deviation was 1 mm. Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and deviations of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile influenced the perception of midline deviation.

  8. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities of judging survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance and the nature of the distribution of elemental concentrations in local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which implicitly and conceptually underlies any survey performed. (author)

  9. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
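
    As a concrete illustration of the kind of technique meant here, the following applies antithetic variates to a generic Monte Carlo average; it is a generic toy example, not the corrector-problem setting of the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        f = lambda u: np.exp(u)  # toy integrand on [0, 1]

        n = 10_000
        u = rng.random(n)
        plain = f(u)                            # standard Monte Carlo draws
        antithetic = 0.5 * (f(u) + f(1.0 - u))  # pair each draw with its reflection

        print(plain.mean(), plain.var(ddof=1))
        print(antithetic.mean(), antithetic.var(ddof=1))  # markedly smaller variance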

  10. Association between septal deviation and sinonasal papilloma.

    Science.gov (United States)

    Nomura, Kazuhiro; Ogawa, Takenori; Sugawara, Mitsuru; Honkura, Yohei; Oshima, Hidetoshi; Arakawa, Kazuya; Oshima, Takeshi; Katori, Yukio

    2013-12-01

    Sinonasal papilloma is a common benign epithelial tumor of the sinonasal tract and accounts for 0.5% to 4% of all nasal tumors. The etiology of sinonasal papilloma remains unclear, although human papilloma virus has been proposed as a major risk factor. Other etiological factors, such as anatomical variations of the nasal cavity, may be related to the pathogenesis of sinonasal papilloma, because a deviated nasal septum is seen in patients with chronic rhinosinusitis. We therefore investigated the involvement of deviated nasal septum in the development of sinonasal papilloma. Preoperative computed tomography or magnetic resonance imaging findings of 83 patients with sinonasal papilloma were evaluated retrospectively. The side of the papilloma and the direction of septal deviation showed a significant correlation. The septum deviated to the intact side in 51 of 83 patients (61.4%) and to the affected side in 18 of 83 patients (21.7%). A straight or S-shaped septum was observed in 14 of 83 patients (16.9%). Even after excluding 27 patients who underwent revision surgery and 15 patients in whom the papilloma touched the concave portion of the nasal septum, the concave side of septal deviation was associated with the development of sinonasal papilloma (p = 0.040). The high incidence of sinonasal papilloma on the concave side may reflect the consequences of the traumatic effects caused by wall shear stress of the high-velocity airflow and the increased chance of inhaling viruses and pollutants. The present study supports the causative role of human papilloma virus and toxic chemicals in the occurrence of sinonasal papilloma.

  11. Variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.

  12. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  13. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    A variance swap can theoretically be priced with an infinite set of vanilla call and put options, assuming that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing when realized variance is monitored discretely. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that could potentially serve as a hedge for the variance swaps traded in BM&F. Additionally, we test the feasibility of hedging variance swaps when there is liquidity in just a few exercise prices, as is the case for FX options traded in BM&F. To this end, portfolios were assembled containing variance swaps and their replicating portfolios built from the available exercise prices, as proposed in Demeterfi et al. (1999). With these portfolios, the hedge proved not to be robust in most of the tests conducted in this work.
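
    A minimal sketch of the options-strip replication in the spirit of Demeterfi et al. (1999): out-of-the-money puts and calls are weighted by 1/K². The quotes, rate and maturity below are made up, and the small convexity-correction term for the strike nearest the forward is omitted; the discreteness of the strikes is precisely the source of the hedging error studied in the article.

        import numpy as np

        def variance_swap_strike(strikes, otm_prices, rate, maturity):
            """strikes ascending; otm_prices: put below the forward, call above."""
            k = np.asarray(strikes, dtype=float)
            q = np.asarray(otm_prices, dtype=float)
            dk = np.gradient(k)  # strike spacing
            return (2.0 * np.exp(rate * maturity) / maturity) * np.sum(dk * q / k**2)

        strikes = np.arange(80.0, 125.0, 5.0)
        quotes = np.array([0.4, 0.9, 1.8, 3.2, 5.0, 3.4, 2.0, 1.1, 0.5])
        print(variance_swap_strike(strikes, quotes, rate=0.05, maturity=0.5))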

  14. Effect of multizone refractive multifocal contact lenses on standard automated perimetry.

    Science.gov (United States)

    Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa

    2012-09-01

    The aim of this study was to evaluate whether the creation of two foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects the measurements of Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at p < 0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (p = 0.001). Differences were not found in PSD or in the number of depressed points deviating at p < 0.5% in the pattern deviation probability maps. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by Humphrey 24-2 SITA SAP.

  15. Ecuaciones de recurrencia estocásticas en el cálculo de la prima de reaseguro Finite Risk.

    Directory of Open Access Journals (Sweden)

    Pons Cardell, Mª Angels

    2013-06-01

    The aim of this paper is to calculate the premium of finite risk reinsurance under the assumption that the interest rate follows a stochastic evolution. The problem of the convolution of the random variables involved in the calculation of the premium is solved by simulating claim paths using the Monte Carlo method and applying three financial decision criteria: the expected value, the variance and the standard deviation. For the last two criteria we propose using a stochastic recurrence equation to avoid the problem of dependence between stochastic capitalization factors. The application of the variance criterion and of the standard deviation criterion has allowed us to obtain the reinsurance premium as a function of the level of risk aversion of the reinsurer and the volatility of the interest rate.

  16. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    Science.gov (United States)

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Capacity limitations to extract the mean emotion from multiple facial expressions depend on emotion variance.

    Science.gov (United States)

    Ji, Luyan; Pourtois, Gilles

    2018-04-20

    We examined the processing capacity and the role of emotion variance in ensemble representation for multiple facial expressions shown concurrently. A standard set size manipulation was used, whereby the sets consisted of 4, 8, or 16 morphed faces each uniquely varying along a happy-angry continuum (Experiment 1) or a neutral-happy/angry continuum (Experiments 2 & 3). Across the three experiments, we reduced the amount of emotion variance in the sets to explore the boundaries of this process. Participants judged the perceived average emotion from each set on a continuous scale. We computed and compared objective and subjective difference scores, using the morph units and post-experiment ratings, respectively. Results of the subjective scores were more consistent than the objective ones across the first two experiments where the variance was relatively large, and revealed each time that increasing set size led to a poorer averaging ability, suggesting capacity limitations in establishing ensemble representations for multiple facial expressions. However, when the emotion variance in the sets was reduced in Experiment 3, both subjective and objective scores remained unaffected by set size, suggesting that the emotion averaging process was unlimited in these conditions. Collectively, these results suggest that extracting mean emotion from a set composed of multiple faces depends on both structural (attentional) and stimulus-related effects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. The problem of low variance voxels in statistical parametric mapping; a new hat avoids a 'haircut'.

    Science.gov (United States)

    Ridgway, Gerard R; Litvak, Vladimir; Flandin, Guillaume; Friston, Karl J; Penny, Will D

    2012-02-01

    Statistical parametric mapping (SPM) locates significant clusters based on a ratio of signal to noise (a 'contrast' of the parameters divided by its standard error), meaning that very low noise regions, for example outside the brain, can attain artefactually high statistical values. Similarly, the commonly applied preprocessing step of Gaussian spatial smoothing can shift the peak statistical significance away from the peak of the contrast and towards regions of lower variance. These problems have previously been identified in positron emission tomography (PET) (Reimold et al., 2006) and voxel-based morphometry (VBM) (Acosta-Cabronero et al., 2008), but can also appear in functional magnetic resonance imaging (fMRI) studies. Additionally, for source-reconstructed magneto- and electro-encephalography (M/EEG), the problems are particularly severe because sparsity-favouring priors constrain meaningfully large signal and variance to a small set of compactly supported regions within the brain. Acosta-Cabronero et al. (2008) suggested adding noise to background voxels (the 'haircut'), effectively increasing their noise variance, but at the cost of contaminating neighbouring regions with the added noise once smoothed. Following theory and simulations, we propose to modify, directly and solely, the noise variance estimate, and we investigate this solution on real imaging data from a range of modalities. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated away as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existing mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existing mean heterogeneity tests (the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment...
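
    The abstract's key structural fact is that mean- and variance-heterogeneity tests are independent under the null, so their evidence can be combined. The sketch below combines a Welch t-test with a Levene test via Fisher's method; this illustrates the combination idea only and is not the authors' exact IMVT statistic.

        import numpy as np
        from scipy import stats

        def combined_mean_variance_test(x, y):
            _, p_mean = stats.ttest_ind(x, y, equal_var=False)  # mean heterogeneity
            _, p_var = stats.levene(x, y)                       # variance heterogeneity
            chi2 = -2.0 * (np.log(p_mean) + np.log(p_var))      # Fisher's method
            return stats.chi2.sf(chi2, df=4)

        rng = np.random.default_rng(3)
        x = rng.normal(0.0, 1.0, 30)
        y = rng.normal(0.3, 2.0, 30)  # differs in both mean and variance
        print(combined_mean_variance_test(x, y))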

  20. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown

  1. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
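
    To see why naive estimation fails, consider the constant coefficient of variation model, var = c²·mean². With few replicates the sample means are noisy, so regressing log sample variances on log sample means suffers from exactly the errors-in-variables bias the paper addresses. A toy illustration of that naive fit (all data below are simulated):

        import numpy as np

        rng = np.random.default_rng(4)
        true_mean = rng.uniform(2.0, 10.0, 500)  # per-gene intensities
        cv = 0.2                                 # constant coefficient of variation
        reps = rng.normal(true_mean[:, None], cv * true_mean[:, None], (500, 3))

        xbar = reps.mean(axis=1)
        s2 = reps.var(axis=1, ddof=1)

        # Naive fit of log s2 = log c^2 + 2 log mean; both estimates are biased.
        slope, intercept = np.polyfit(np.log(xbar), np.log(s2), 1)
        print(np.exp(intercept / 2.0), slope)  # compare with cv = 0.2 and slope = 2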

  2. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    DEPARTMENT OF LABOR, Occupational Safety and Health Administration [Docket No. OSHA-2011-0054]: Proposed Revocation of Permanent Variances. AGENCY: Occupational Safety and Health Administration (OSHA...). The Occupational Safety and Health Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the...

  3. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  4. Linear versus non-linear measures of temporal variability in finger tapping and their relation to performance on open- versus closed-loop motor tasks: comparing standard deviations to Lyapunov exponents.

    Science.gov (United States)

    Christman, Stephen D; Weaver, Ryan

    2008-05-01

    The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower values of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.

  5. Reduction of treatment delivery variances with a computer-controlled treatment delivery system

    International Nuclear Information System (INIS)

    Fraass, B.A.; Lash, K.L.; Matrone, G.M.; Lichter, A.S.

    1997-01-01

    Purpose: To analyze treatment delivery variances for 3-D conformal therapy performed at various levels of treatment delivery automation, ranging from manual field setup to virtually complete computer-controlled treatment delivery using a computer-controlled conformal radiotherapy system. Materials and Methods: All external beam treatments performed in our department during six months of 1996 were analyzed to study treatment delivery variances versus treatment complexity. Treatments for 505 patients (40,641 individual treatment ports) on four treatment machines were studied. All treatment variances noted by treatment therapists or quality assurance reviews (39 in all) were analyzed. Machines 'M1' (Clinac 6/100) and 'M2' (Clinac 1800) were operated in a standard manual setup mode, with no record and verify (R/V) system. Machines 'M3' (Clinac 2100CD/MLC) and 'M4' (MM50 racetrack microtron system with MLC) treated patients under the control of a computer-controlled conformal radiotherapy system (CCRS) which 1) downloads the treatment delivery plan from the planning system, 2) performs some (or all) of the machine set-up and treatment delivery for each field, 3) monitors treatment delivery, 4) records all treatment parameters, and 5) notes exceptions to the electronically-prescribed plan. Complete external computer control is not available on M3, so it uses as many CCRS features as possible, while M4 operates completely under CCRS control and performs semi-automated and automated multi-segment intensity modulated treatments. Analysis of treatment complexity was based on numbers of fields, individual segments (ports), non-axial and non-coplanar plans, multi-segment intensity modulation, and pseudo-isocentric treatments (and other plans with computer-controlled table motions). Treatment delivery time was obtained from the computerized scheduling system (for manual treatments) or from CCRS system logs. Treatment therapists rotate among the machines, so this analysis...

  6. Optimization of Burr size, Surface Roughness and Circularity Deviation during Drilling of Al 6061 using Taguchi Design Method and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Reddy Sreenivasulu

    2015-03-01

    This paper presents the influence of cutting parameters such as cutting speed, feed rate, drill diameter, point angle and clearance angle on the burr size, surface roughness and circularity deviation of Al 6061 during drilling on a CNC vertical machining center. A plan of experiments based on the Taguchi technique was used to acquire the data. An orthogonal array, the signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to investigate the machining characteristics of Al 6061, using HSS twist drill bits of variable tool geometry with a constant helix angle of 45 degrees. Confirmation tests were carried out at the predicted optimal setting of process parameters to validate the approach, yielding values of 0.2618 mm, 0.1821 mm, 3.7451 µm and 0.0676 mm for burr height, burr thickness, surface roughness and circularity deviation, respectively. Finally, an artificial neural network was applied to compare the predicted values with the experimental values; good agreement was shown between the predictive model results and the experimental measurements.
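
    For minimization responses such as burr size, roughness and circularity deviation, the Taguchi analysis uses the smaller-the-better signal-to-noise ratio, S/N = -10·log10(mean of y²); the parameter level with the highest S/N is preferred. The response values below are placeholders, not the paper's measurements.

        import numpy as np

        def sn_smaller_the_better(y):
            """Smaller-the-better S/N ratio in decibels."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(y**2))

        print(sn_smaller_the_better([0.2618, 0.2710, 0.2655]))  # e.g. burr height runs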

  7. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented using the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variances of electrocardiographic RR intervals are higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we obtain good atrial fibrillation detection performance.
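
    A minimal sketch of variance-based screening as described above: the variance of the RR intervals in a sliding window is compared with a threshold. Window length and threshold are illustrative, not the article's values.

        import numpy as np

        def af_flags(rr_intervals, window=20, threshold=0.02):
            """rr_intervals in seconds; one boolean flag per window position."""
            rr = np.asarray(rr_intervals, dtype=float)
            return np.array([rr[i:i + window].var(ddof=1) > threshold
                             for i in range(len(rr) - window + 1)])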

  8. Application of Mean of Absolute Deviation Method for the Selection of Best Nonlinear Component Based on Video Encryption

    Science.gov (United States)

    Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar

    2013-07-01

    The aim of this work is to make use of the mean of absolute deviation (MAD) method for the evaluation process of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
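
    The MAD criterion itself is the textbook mean of absolute deviations; how the article maps S-box behaviour onto the analyzed sequences is not detailed in the abstract, so the input below is a generic placeholder.

        import numpy as np

        def mean_absolute_deviation(values):
            v = np.asarray(values, dtype=float)
            return np.mean(np.abs(v - v.mean()))

        print(mean_absolute_deviation([162, 99, 123, 54, 201]))  # illustrative bytes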

  9. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  10. Mean-Variance Efficiency of the Market Portfolio

    Directory of Open Access Journals (Sweden)

    Rafael Falcão Noda

    2014-06-01

    The objective of this study is to answer the criticism of the CAPM based on findings that the market portfolio is far from the efficient frontier. We run a numeric optimization model based on Brazilian stock market data from 2003 to 2012. For each asset, we obtain adjusted returns and standard deviations such that (i) the efficient frontier intersects with the market portfolio and (ii) the distance between the adjusted parameters and the sample parameters is minimized. We conclude that the adjusted parameters are not significantly different from the sample parameters, in line with the results of Levy and Roll (2010) for the USA stock market. These results suggest that the imprecisions in the implementation of the CAPM stem mostly from parameter estimation errors and that other explanatory factors for returns may have low relevance. Therefore, our results contradict the above-mentioned criticisms of the CAPM in Brazil.

  11. Prosthodontic management of mandibular deviation using palatal ramp appliance

    Directory of Open Access Journals (Sweden)

    Prince Kumar

    2012-08-01

    Segmental resection of the mandible generally results in deviation of the mandible to the defective side. This loss of continuity of the mandible destroys the balance of the lower face and leads to decreased mandibular function by deviation of the residual segment toward the surgical site. Prosthetic methods advocated to reduce or eliminate mandibular deviation include intermaxillary fixation, removable mandibular guide flange, palatal ramp, implant-supported prosthesis, and palatal guidance restorations, which may be useful in reducing mandibular deviation and improving masticatory performance and efficiency. These methods and restorations should be combined with a well-organized mandibular exercise regimen. This clinical report describes the rehabilitation following segmental mandibulectomy using a palatal ramp prosthesis.

  12. How to assess intra- and inter-observer agreement with quantitative PET using variance component analysis: a proposal for standardisation

    International Nuclear Information System (INIS)

    Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen; Halekoh, Ulrich; Høilund-Carlsen, Poul Flemming

    2016-01-01

    Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects the deviation of groups of measurements from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or the Bland-Altman plot. We suggest variance component analysis (VCA) to estimate the influence of errors due to single elements of a PET scan (scanner, time point, observer, etc.) in order to express the composite uncertainty of repeated measurements and obtain relevant repeatability coefficients (RCs), which have a unique relation to Bland-Altman plots. Here, we present this approach for the assessment of intra- and inter-observer variation with PET/CT, exemplified with data from two clinical studies. In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed twice by the same observer to study intra-observer agreement. In study 2, 14 patients with glioma were scanned up to five times. The resulting 49 scans were assessed by three observers to examine inter-observer agreement. Outcome variables were SUVmax in study 1 and cerebral total hemispheric glycolysis (THG) in study 2. In study 1, we found an RC of 2.46, equalling half the width of the Bland-Altman limits of agreement. In study 2, the RC for identical conditions (same scanner, patient, time point, and observer) was 2392; allowing for different scanners increased the RC to 2543. Inter-observer differences were negligible compared to differences owing to other factors; between observers 1 and 2: −10 (95% CI: −352 to 332), and between observers 1 and 3: 28 (95% CI: −313 to 370). VCA is an appealing approach for weighing different sources of variation against each other, summarised as RCs. The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components. The online version of this article (doi:10
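
    A sketch of the repeatability-coefficient calculation follows, assuming the relevant variance components have already been estimated from a linear mixed model (e.g., with scanner, time point, and observer as random effects); which components enter the sum depends on the repeat condition of interest, and the example values are made up.

```python
import numpy as np

def repeatability_coefficient(variance_components):
    """RC = 1.96 * sqrt(2 * sum of error variance components).

    The RC equals half the width of the Bland-Altman limits of agreement
    for differences between two measurements made under the chosen
    repeat conditions (assuming a negligible mean difference).
    """
    total_error_variance = float(np.sum(variance_components))
    return 1.96 * np.sqrt(2.0 * total_error_variance)

# Example: repeat under identical conditions, two hypothetical components
print(repeatability_coefficient([0.9, 0.35]))
```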

  13. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed time instants, so a modified early-late loop is used for the frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without a strong influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
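
    The early-late idea can be sketched as follows: the variance of a sliding detection window is evaluated at two positions around the current frame estimate, and their difference steers the estimate. This is a generic illustration under assumed names and parameters, not the paper's exact metric.

```python
import numpy as np

def window_variance(samples, start, length):
    """Variance of the magnitude of the detection window (sync metric)."""
    w = np.abs(samples[start:start + length])
    return w.var()

def early_late_error(samples, position, length, delta):
    """Early-late discriminator: variance difference at two delayed positions.

    Assumes position - delta >= 0. A tracking loop would shift the
    frame-position estimate toward the zero crossing of this error.
    """
    early = window_variance(samples, position - delta, length)
    late = window_variance(samples, position + delta, length)
    return early - late
```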

  14. Scaphoid and lunate movement in different ranges of carpal radioulnar deviation.

    Science.gov (United States)

    Tang, Jin Bo; Xu, Jing; Xie, Ren Guo

    2011-01-01

    We aimed to investigate scaphoid and lunate movement in radial deviation and in slight and moderate ulnar deviation ranges in vivo. We obtained computed tomography scans of the right wrists from 20° radial deviation to 40° ulnar deviation in 20° increments in 6 volunteers. The 3-dimensional bony structures of the wrist, including the distal radius and ulna, were reconstructed with customized software. The changes in position of the scaphoid and lunate along flexion-extension motion (FEM), radioulnar deviation (RUD), and supination-pronation axes in 3 parts--radial deviation and slight and moderate ulnar deviation--of the carpal RUD were calculated and analyzed. During carpal RUD, scaphoid and lunate motion along 3 axes--FEM, RUD, and supination-pronation--were the greatest in the middle third of the measured RUD (from neutral position to 20° ulnar deviation) and the smallest in radial deviation. Scaphoid motion along the FEM, RUD, and supination-pronation axes in the middle third was about half that in the entire motion range. In the middle motion range, lunate movement along the FEM and RUD axes was also the greatest. During carpal RUD, the greatest scaphoid and lunate movement occurs in the middle of the arc--slight ulnar deviation--which the wrist frequently adopts to accomplish major hand actions. At radial deviation, scaphoid and lunate motion is the smallest. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  15. Estimation of heterogeneity in malaria transmission by stochastic modelling of apparent deviations from mass action kinetics

    Directory of Open Access Journals (Sweden)

    Smith Thomas A

    2008-01-01

    -heterogeneity predict lower incidence of infection at a given average exposure than do those assuming exposure to be uniform. The negative binomial model moreover provides an estimate of the variance of the within-cohort distribution of the EIR and hence of within-cohort heterogeneity in exposure. Conclusion: Apparent deviations from mass action kinetics in parasite transmission can arise from spatial and temporal heterogeneity in the inoculation rate, and from imprecision in its measurement. For parasites like P. falciparum, where there is no plausible biological rationale for deviations from mass action, this provides a strategy for estimating true levels of heterogeneity, since if mass-action is assumed, the within-population variance in exposure becomes identifiable in cohort studies relating infection to transmission intensity. Statistical analyses relating infection to exposure thus provide a valid general approach for estimating heterogeneity in transmission, but only when they incorporate mass action kinetics and shrinkage estimates of exposure. Such analyses make it possible to include realistic levels of heterogeneity in dynamic models that predict the impact of control measures on transmission intensity.

  16. 38 CFR 36.4304 - Deviations; changes of identity.

    Science.gov (United States)

    2010-07-01

    ... identity. 36.4304 Section 36.4304 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS... Deviations; changes of identity. A deviation of more than 5 percent between the estimates upon which a... change in the identity of the property upon which the original appraisal was based, will invalidate the...

  17. Moderate deviations principles for the kernel estimator of ...

    African Journals Online (AJOL)

    Abstract. The aim of this paper is to provide pointwise and uniform moderate deviations principles for the kernel estimator of a nonrandom regression function. Moreover, we give an application of these moderate deviations principles to the construction of condence regions for the regression function. Resume. L'objectif de ...

  18. 48 CFR 1352.219-71 - Notification to delay performance (Deviation).

    Science.gov (United States)

    2010-10-01

    ... performance (Deviation). 1352.219-71 Section 1352.219-71 Federal Acquisition Regulations System DEPARTMENT OF....219-71 Notification to delay performance (Deviation). As prescribed in 48 CFR 1319.811-3(b), insert the following clause: Notification To Delay Performance (Deviation) (APR 2010) The contractor shall...

  19. Explorations in Statistics: Standard Deviations and Standard Errors

    Science.gov (United States)

    Curran-Everett, Douglas

    2008-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…

  20. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.

  1. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, and probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  2. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…

  3. Heterodyne Angle Deviation Interferometry in Vibration and Bubble Measurements

    OpenAIRE

    Ming-Hung Chiu; Jia-Ze Shen; Jian-Ming Huang

    2016-01-01

    We proposed heterodyne angle deviation interferometry (HADI) for angle deviation measurements. The phase shift of an angular sensor (which can be a metal film or a surface plasmon resonance (SPR) prism) is proportional to the deviation angle of the test beam. The method has been demonstrated in bubble and speaker’s vibration measurements in this paper. In the speaker’s vibration measurement, the voltage from the phase channel of a lock-in amplifier includes the vibration level and frequency. ...

  4. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient as the size of the system to solve increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows simultaneous estimation of several quantities. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propose here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the method we propose is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)

  5. Estimating the Standard Error of the Judging in a modified-Angoff Standards Setting Procedure

    Directory of Open Access Journals (Sweden)

    Robert G. MacCann

    2004-03-01

    For a modified Angoff standards setting procedure, two methods of calculating the standard error of the judging were compared. The Central Limit Theorem (CLT) method is easy to calculate and uses readily available data. It estimates the variance of mean cut scores as a function of the variance of cut scores within a judging group, based on the independent judgements at Stage 1 of the process. Its theoretical drawback is that it is unable to take account of the effects of collaboration among the judges at Stages 2 and 3. The second method, an application of equipercentile (EQP) equating, relies on the selection of very large stable candidatures and the standardisation of the raw score distributions to remove effects associated with test difficulty. The standard error estimates were then empirically obtained from the mean cut score variation observed over a five year period. For practical purposes, the two methods gave reasonable agreement, with the CLT method working well for the top band, the band that attracts most public attention. For some bands in English and Mathematics, the CLT standard error was smaller than the EQP estimate, suggesting the CLT method be used with caution as an approximate guide only.
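
    A sketch of the CLT calculation is shown below: the standard error of the mean cut score is estimated from the spread of the independent Stage 1 judgements. Names and example scores are illustrative.

```python
import numpy as np

def clt_standard_error(stage1_cut_scores):
    """SE of the mean cut score from J independent judgements: s / sqrt(J)."""
    cuts = np.asarray(stage1_cut_scores, dtype=float)
    return cuts.std(ddof=1) / np.sqrt(len(cuts))

# Example: cut scores proposed independently by ten judges at Stage 1
print(clt_standard_error([61, 58, 65, 70, 62, 59, 66, 63, 60, 64]))
```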

  6. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.

  7. Stone heterogeneity index as the standard deviation of Hounsfield units: A novel predictor for shock-wave lithotripsy outcomes in ureter calculi.

    Science.gov (United States)

    Lee, Joo Yong; Kim, Jae Heon; Kang, Dong Hyuk; Chung, Doo Yong; Lee, Dae Hun; Do Jung, Hae; Kwon, Jong Kyou; Cho, Kang Su

    2016-04-01

    We investigated whether the stone heterogeneity index (SHI), defined as the standard deviation of Hounsfield units (HU) on non-contrast computed tomography (NCCT), can be a novel predictor for shock-wave lithotripsy (SWL) outcomes in patients with ureteral stones. Medical records were obtained from the consecutive database of 1,519 patients who underwent the first session of SWL for urinary stones between 2005 and 2013. Ultimately, 604 patients with radiopaque ureteral stones were eligible for this study. Stone-related variables including stone size, mean stone density (MSD), skin-to-stone distance, and SHI were obtained on NCCT. Patients were classified into low and high SHI groups using the mean SHI and compared. The one-session success rate in the high SHI group was better than in the low SHI group (74.3% vs. 63.9%, P = 0.008). Multivariate logistic regression analyses revealed that smaller stone size (OR 0.889, 95% CI: 0.841-0.937, P < 0.001), lower MSD (OR 0.995, 95% CI: 0.994-0.996, P < 0.001), and higher SHI (OR 1.011, 95% CI: 1.008-1.014, P < 0.001) were independent predictors of one-session success. The radiologic heterogeneity of urinary stones, or SHI, was an independent predictor of SWL success in patients with ureteral calculi and a useful clinical parameter for stone fragility.
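
    Given a segmented stone on NCCT, the SHI reduces to a standard deviation of voxel attenuation values, as in the sketch below; the segmentation step is assumed to have been done elsewhere.

```python
import numpy as np

def stone_heterogeneity_index(hu_voxels):
    """SHI: standard deviation of Hounsfield units within the stone region."""
    return np.asarray(hu_voxels, dtype=float).std(ddof=1)
```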

  8. Some clarifications about the Bohmian geodesic deviation equation and Raychaudhuri’s equation

    Science.gov (United States)

    Rahmani, Faramarz; Golshani, Mehdi

    2018-01-01

    One of the important and famous topics in general theory of relativity and gravitation is the problem of geodesic deviation and its related singularity theorems. An interesting subject is the investigation of these concepts when quantum effects are considered. Since the definition of trajectory is not possible in the framework of standard quantum mechanics (SQM), we investigate the problem of geodesic equation and its related topics in the framework of Bohmian quantum mechanics in which the definition of trajectory is possible. We do this in a fixed background and we do not consider the backreaction effects of matter on the space-time metric.

  9. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  10. Improving image-quality of interference fringes of out-of-plane vibration using temporal speckle pattern interferometry and standard deviation for piezoelectric plates.

    Science.gov (United States)

    Chien-Ching Ma; Ching-Yuan Chang

    2013-07-01

    Interferometry provides a high degree of accuracy in the measurement of sub-micrometer deformations; however, the noise associated with experimental measurement undermines the integrity of interference fringes. This study proposes the use of standard deviation in the temporal domain to improve the image quality of patterns obtained from temporal speckle pattern interferometry. The proposed method combines the advantages of both mean and subtractive methods to remove background noise and ambient disturbance simultaneously, resulting in high-resolution images of excellent quality. The out-of-plane vibration of a thin piezoelectric plate is the main focus of this study, providing information useful to the development of energy harvesters. First, ten resonant states were measured using the proposed method, and both mode shape and resonant frequency were investigated. We then rebuilt the phase distribution of the first resonant mode based on the clear interference patterns obtained using the proposed method. This revealed instantaneous deformations in the dynamic characteristics of the resonant state. The proposed method also provides a frequency-sweeping function, facilitating its practical application in the precise measurement of resonant frequency. In addition, the mode shapes and resonant frequencies obtained using the proposed method were recorded and compared with results obtained using finite element method and laser Doppler vibrometry, which demonstrated close agreement.
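
    The core of the proposed image-quality improvement, the per-pixel standard deviation over a temporal stack of speckle frames, can be sketched in a few lines; frame acquisition and phase reconstruction are outside this snippet, and the array layout is an assumption.

```python
import numpy as np

def temporal_std_image(frames):
    """Per-pixel standard deviation over time for a (T, H, W) stack of
    speckle interferograms; vibrating regions show higher temporal spread,
    which sharpens the interference fringes."""
    stack = np.asarray(frames, dtype=float)
    return stack.std(axis=0, ddof=1)
```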

  11. [The crooked nose: correction of dorsal and caudal septal deviations].

    Science.gov (United States)

    Foda, H M T

    2010-09-01

    The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 800 patients seeking rhinoplasty to correct external nasal deviations; 71% of these suffered from variable degrees of nasal obstruction. Septal surgery was necessary in 736 (92%) patients, not only to improve breathing, but also to achieve a straight, symmetric external nose. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the nasal dorsum or nasal tip. The approach depended on full mobilization of deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position by using bony splinting grafts through an external rhinoplasty approach.

  12. Electroweak interaction: Standard and beyond

    International Nuclear Information System (INIS)

    Harari, H.

    1987-02-01

    Several important topics within the standard model raise questions which are likely to be answered only by further theoretical understanding which goes beyond the standard model. In these lectures we present a discussion of some of these problems, including the quark masses and angles, the Higgs sector, neutrino masses, W and Z properties and possible deviations from a pointlike structure. 44 refs

  13. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  14. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  15. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  16. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  17. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  18. ARFI cut-off values and significance of standard deviation for liver fibrosis staging in patients with chronic liver disease.

    Science.gov (United States)

    Goertz, Ruediger S; Sturm, Joerg; Pfeifer, Lukas; Wildner, Dane; Wachter, David L; Neurath, Markus F; Strobel, Deike

    2013-01-01

    Acoustic radiation force impulse (ARFI) elastometry quantifies hepatic stiffness, and thus degree of fibrosis, non-invasively. Our aim was to analyse the diagnostic accuracy of ARFI cut-off values, and the significance of a defined limit of standard deviation (SD) as a potential quality parameter for liver fibrosis staging in patients with chronic liver diseases (CLD). 153 patients with CLD (various aetiologies) undergoing liver biopsy, and an additional 25 patients with known liver cirrhosis, were investigated. ARFI measurements were performed in the right hepatic lobe, and correlated with the histopathological Ludwig fibrosis score (inclusion criteria: at least 6 portal tracts). The diagnostic accuracy of cut-off values was analysed with respect to an SD limit of 30% of the mean ARFI value. The mean ARFI elastometry showed 1.95 ± 0.87 m/s (range 0.79-4.40) in 178 patients (80 female, 98 male, mean age: 52 years). The cut-offs were 1.25 m/s for F ≥ 2, 1.72 m/s for F ≥ 3 and 1.75 m/s for F = 4, and the corresponding AUROC 80.7%, 86.2% and 88.7%, respectively. Exclusion of 31 patients (17.4%) with an SD higher than 30% of the mean ARFI improved the diagnostic accuracy: The AUROC for F ≥ 2, F ≥ 3 and F = 4 were 86.1%, 91.2% and 91.5%, respectively. The diagnostic accuracy of ARFI can be improved by applying a maximum SD of 30% of the mean ARFI as a quality parameter--which however leads to an exclusion of a relevant number of patients. ARFI results with a high SD should be interpreted with caution.
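
    The proposed quality rule is effectively a cap on the relative standard deviation (SD no more than 30% of the mean ARFI value); a sketch of the acceptance test follows, with illustrative names.

```python
import numpy as np

def arfi_measurement_valid(arfi_values, max_relative_sd=0.30):
    """Accept a patient's series of ARFI shear-wave speeds (m/s) only if
    the SD does not exceed 30% of the mean value."""
    v = np.asarray(arfi_values, dtype=float)
    return v.std(ddof=1) <= max_relative_sd * v.mean()
```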

  19. Removing an intersubject variance component in a general linear model improves multiway factoring of event-related spectral perturbations in group EEG studies.

    Science.gov (United States)

    Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C

    2013-03-01

    Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.

  20. Genetic and environmental variance and covariance parameters for some reproductive traits of Holstein and Jersey cattle in Antioquia (Colombia)

    Directory of Open Access Journals (Sweden)

    Juan Carlos Zambrano

    2014-03-01

    The objective of this study was to estimate the genetic, phenotypic and environmental parameters for calving interval (CI), days open (DO), number of services per conception (NSC) and conception rate (CR) in Holstein and Jersey cattle in Antioquia (Colombia). Variance and covariance component estimates were obtained by an animal model that was solved using the derivative-free restricted maximum likelihood method. The means and standard deviations for CI, DO, NSC and CR were: 430.32 ± 77.93 days, 127.15 ± 76.96 days, 1.58 ± 1.03 services per conception and 79.88 ± 28.66% in Holstein cattle, and 409.33 ± 86.48 days, 125.62 ± 86.09 days, 1.48 ± 0.98 services per conception and 84.08 ± 27.23% in Jersey cattle, respectively. The heritability estimates (standard errors) were: 0.088 (0.037), 0.082 (0.037), 0.040 (0.025) and 0.030 (0.026) in Holstein cattle and 0.072 (0.098), 0.090 (0.104), 0.093 (0.097) and 0.147 (0.117) in Jersey cattle, respectively. The results show that the genetic, phenotypic and permanent environmental correlations in the two evaluated breeds were favorable for CI × DO, CI × NSC and DO × NSC, but not for CI × CR, DO × CR and NSC × CR. Genetic and permanent environmental correlations were high in most cases in Holstein cattle, whereas in Jersey cattle they were moderate. In contrast, phenotypic correlations were very low in both breeds, except for CI × DO and NSC × CR, which were high. Overall, the genetic component found was very low (<8%) in both evaluated breeds; this implies that selection would take a long time and that good practical management of the herd will be essential in order to improve reproductive performance.

  1. Analysis and Extension of the Percentile Method, Estimating a Noise Curve from a Single Image

    Directory of Open Access Journals (Sweden)

    Miguel Colom

    2013-12-01

    Given a white Gaussian noise signal on a sampling grid, its variance can be estimated from a small block sample. However, in natural images we observe the combination of the geometry of the scene being photographed and the added noise. In this case, estimating the standard deviation of the noise directly from block samples is not reliable, since the measured standard deviation is explained not just by the noise but also by the geometry of the image. The Percentile method tries to estimate the standard deviation of the noise from blocks of a high-passed version of the image and a small p-percentile of these standard deviations. The idea behind this is that edges and textures in a block of the image increase the observed standard deviation, but they never make it decrease. Therefore, a small percentile (0.5%, for example) in the list of standard deviations of the blocks is less likely to be affected by the edges and textures than a higher percentile (50%, for example). The 0.5%-percentile is empirically proven to be adequate for most natural, medical and microscopy images. The Percentile method is adapted to signal-dependent noise, which is realistic with the Poisson noise model obtained by a CCD device in a digital camera.
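
    A simplified sketch of the Percentile method for additive Gaussian noise follows: high-pass the image, collect per-block standard deviations, and keep a small percentile. The variance-normalized Laplacian used here is one plausible high-pass choice, not necessarily the operator of the paper, and the block size is an assumption.

```python
import numpy as np
from scipy.ndimage import laplace

def percentile_noise_std(image, block=8, p=0.5):
    """Estimate the noise standard deviation of an image.

    The 3x3 Laplacian kernel has squared-coefficient sum 20, so dividing
    by sqrt(20) makes the filtered noise std equal to the input noise std.
    """
    hp = laplace(np.asarray(image, dtype=float)) / np.sqrt(20.0)
    h, w = hp.shape
    stds = [hp[i:i + block, j:j + block].std(ddof=1)
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    return np.percentile(stds, p)  # p in percent: 0.5 means the 0.5%-percentile
```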

  2. Density, viscosity, isothermal (vapour + liquid) equilibrium, excess molar volume, viscosity deviation, and their correlations for chloroform + methyl isobutyl ketone binary system

    International Nuclear Information System (INIS)

    Clara, Rene A.; Gomez Marigliano, Ana C.; Solimo, Horacio N.

    2007-01-01

    Density and viscosity measurements for pure chloroform and methyl isobutyl ketone at T = (283.15, 293.15, 303.15, and 313.15) K as well as for the binary system {x1 chloroform + (1 - x1) methyl isobutyl ketone} at the same temperatures were made over the whole concentration range. The experimental results were fitted to empirical equations, which permit the calculation of these properties over the whole concentration and temperature ranges studied. Data of the binary mixture were further used to calculate the excess molar volume and viscosity deviation. The (vapour + liquid) equilibrium (VLE) at T = 303.15 K for this binary system was also measured in order to calculate the activity coefficients and the excess molar Gibbs energy. This binary system shows no azeotrope and negative deviations from ideal behaviour. The excess or deviation properties were fitted to the Redlich-Kister polynomial relation to obtain their coefficients and standard deviations
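
    The Redlich-Kister fit mentioned above is a linear least-squares problem once the basis functions are formed; a sketch for an excess or deviation property of a binary mixture follows. The polynomial order and function names are illustrative choices.

```python
import numpy as np

def redlich_kister_fit(x1, q_excess, order=3):
    """Fit Q^E = x1*(1 - x1) * sum_k A_k * (2*x1 - 1)**k by least squares.

    Returns the coefficients A_k and the standard deviation of the fit.
    """
    x1 = np.asarray(x1, dtype=float)
    q = np.asarray(q_excess, dtype=float)
    basis = np.column_stack([x1 * (1 - x1) * (2 * x1 - 1) ** k
                             for k in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(basis, q, rcond=None)
    residuals = q - basis @ coeffs
    sigma = np.sqrt(np.sum(residuals ** 2) / (len(q) - len(coeffs)))
    return coeffs, sigma
```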

  3. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the value of the new sensitivity indices in a model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.

  4. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems

  5. Evaluation of Body Mass Index and Plasma Lipid Profile in Boerboel ...

    African Journals Online (AJOL)

    olayemitoyin

    Data were presented as means ± standard deviation and results were compared using analysis of variance. ... there were no significant differences (P > 0.05) in TC, TRIG, HDL and LDL between ... The Boerboel is a big, strong and intelligent.

  6. Predicted and verified deviations from Zipf's law in ecology of competing products.

    Science.gov (United States)

    Hisano, Ryohei; Sornette, Didier; Mizuno, Takayuki

    2011-08-01

    Zipf's power-law distribution is a generic empirical statistical regularity found in many complex systems. However, rather than universality with a single power-law exponent (equal to 1 for Zipf's law), there are many reported deviations that remain unexplained. A recently developed theory finds that the interplay between (i) one of the most universal ingredients, namely stochastic proportional growth, and (ii) birth and death processes, leads to a generic power-law distribution with an exponent that depends on the characteristics of each ingredient. Here, we report the first complete empirical test of the theory and its application, based on the empirical analysis of the dynamics of market shares in the product market. We estimate directly the average growth rate of market shares and its standard deviation, the birth rates and the "death" (hazard) rate of products. We find that temporal variations and product differences of the observed power-law exponents can be fully captured by the theory with no adjustable parameters. Our results can be generalized to many systems for which the statistical properties revealed by power-law exponents are directly linked to the underlying generating mechanism.

  7. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic....... The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios....

  8. Complexity analysis based on generalized deviation for financial markets

    Science.gov (United States)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method is proposed as a measure to investigate the correlation between past price and future volatility for financial time series, known as complexity analysis based on generalized deviation. In comparison with the former retarded volatility model, the new approach is both simple and computationally efficient. The method, based on the generalized deviation function, presents an exhaustive way of quantifying the rules of the financial market. Robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After the data analysis of the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  9. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
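
    As a reminder of the estimator under discussion, realized variance over a period is simply the sum of squared high-frequency log returns; the sketch below assumes M+1 intraday prices as input.

```python
import numpy as np

def realized_variance(intraday_prices):
    """Realized variance: sum of squared intraday log returns."""
    p = np.asarray(intraday_prices, dtype=float)
    returns = np.diff(np.log(p))
    return np.sum(returns ** 2)
```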

  10. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real...... events and only marginally by the premium associated with normal price fluctuations....

  11. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom)]; Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]; Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals, Poisson processes are in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng, et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture on the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons.

  12. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals, Poisson processes are in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng, et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture on the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons

  13. Deviation equation in spaces with affine connection. Pts. 3 and 4

    International Nuclear Information System (INIS)

    Iliev, B.Z.

    1987-01-01

    The concept of a parallel transport is used to define a class of displacement vectors in spaces with affine connection. The nonlocal deviation equation in such spaces is introduced using a definition of the deviation vector based on the displacement vector. It turns out to be a special case of the generalized deviation equation, but one having an appropriate physical interpretation. The equation of geodesic deviation is presented as an example.

  14. 9 CFR 319.10 - Requirements for substitute standardized meat food products named by use of an expressed nutrient...

    Science.gov (United States)

    2010-01-01

    ... INSPECTION AND CERTIFICATION DEFINITIONS AND STANDARDS OF IDENTITY OR COMPOSITION General § 319.10... identity, but that do not comply with the established standard because of a compositional deviation that... for roller grilling”). Deviations from the ingredient provisions of the standard must be the minimum...

  15. 21 CFR 130.10 - Requirements for foods named by use of a nutrient content claim and a standardized term.

    Science.gov (United States)

    2010-04-01

    ... standardized term. (a) Description. The foods prescribed by this general definition and standard of identity... of identity but that do not comply with the standard of identity because of a deviation that is.... Deviations from noningredient provisions of the standard of identity (e.g., moisture content, food solids...

  16. Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, Kiyoshi; Yamamoto, Toshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; eds.

    1998-03-01

    'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile 'Guideline of Monte Carlo Calculation' which will be a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)

  17. Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques

    International Nuclear Information System (INIS)

    Sakurai, Kiyoshi; Yamamoto, Toshihiro

    1998-03-01

    'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile 'Guideline of Monte Carlo Calculation' which will be a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)

  18. a comparison of modified and standard papanicolaou staining ...

    African Journals Online (AJOL)

    2011-07-07

    Jul 7, 2011 ... modified pap method and standard Papanicolaou method respectively. The staining characteristics in .... alcohol was replaced by 0.5 % acetic acid and also, .... was 37.1, standard deviation of 8.0 and a median of 36.5 years.

  19. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling

  20. Non-specific filtering of beta-distributed data.

    Science.gov (United States)

    Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D

    2014-06-19

    Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with
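
    The flavour of such a filter can be sketched with the classical arcsine square-root transform, which approximately stabilizes the variance of proportions; the authors' actual statistic may differ in detail, and the matrix layout and default k are assumptions.

```python
import numpy as np

def vst_top_features(beta_values, k=1000):
    """Select the k most variable features after a variance-stabilizing
    transform of Beta-distributed proportions.

    beta_values: (n_features, n_samples) matrix with entries in [0, 1].
    """
    z = np.arcsin(np.sqrt(np.asarray(beta_values, dtype=float)))
    sd = z.std(axis=1, ddof=1)
    return np.argsort(sd)[::-1][:k]  # indices of the most variable features
```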